scipy cleanup
This commit is contained in:
parent
40b4395465
commit
f0b0f2adf1
1845 changed files with 0 additions and 732366 deletions
@ -1,374 +0,0 @@

Building and installing SciPy
+++++++++++++++++++++++++++++

See http://www.scipy.org/scipy/scipy/wiki/GetCode
for updates of this document.

.. Contents::

INTRODUCTION
============

It is *strongly* recommended that you use the binary packages on your platform
if they are available, in particular on Windows and Mac OS X. You should not
attempt to build SciPy if you are not familiar with compiling software from
source.

PREREQUISITES
=============

SciPy requires the following software installed for your platform:

1) Python__ 2.4.x or newer

__ http://www.python.org

2) NumPy__ 1.4.1 or newer (note: SciPy trunk at times requires the latest
   NumPy trunk).

__ http://www.numpy.org/

Windows
-------

Compilers
~~~~~~~~~

It is recommended to use the mingw__ compilers on Windows: you will need the
gcc (C), g++ (C++) and g77 (Fortran) compilers.

__ http://www.mingw.org

Blas/Lapack
~~~~~~~~~~~

Blas/Lapack are core routines for linear algebra (vector/matrix operations).
You should use ATLAS__ with a full LAPACK, or a simple BLAS/LAPACK built with
g77 from netlib__ sources. Building those libraries on Windows may be
difficult, as they assume a Unix-style environment. Please use the binaries if
you don't feel comfortable with cygwin, make and similar tools.

__ http://math-atlas.sourceforge.net/
__ http://www.netlib.org/lapack/

Mac OS X
--------

Compilers
~~~~~~~~~

It is recommended to use gcc. gcc is available for free when installing
Xcode__, the developer tool suite on Mac OS X. You also need a Fortran
compiler, which is not included with Xcode: you should use gfortran from
http://r.research.att.com/tools/.

Please do NOT use gfortran from hpc.sourceforge.net; it is known to generate
buggy scipy binaries.

__ http://developer.apple.com/TOOLS/xcode

Blas/Lapack
~~~~~~~~~~~

Mac OS X includes the Accelerate framework: it should be detected without any
intervention when building SciPy.

Linux
-----

Most common distributions include all the dependencies. Here are some
instructions for the most common ones:

Ubuntu >= 8.10
~~~~~~~~~~~~~~

You can get all the dependencies as follows::

    sudo apt-get install python python-dev libatlas3-base-dev gcc gfortran g++

Ubuntu < 8.10, Debian
~~~~~~~~~~~~~~~~~~~~~

You can get all the dependencies as follows::

    sudo apt-get install python python-dev atlas3-base-dev gcc g77 g++

OpenSuse >= 10
~~~~~~~~~~~~~~

RHEL
~~~~

Fedora Core
~~~~~~~~~~~

GETTING SCIPY
=============

For the latest information, see the web site:

  http://www.scipy.org


Development version from Subversion (SVN)
-----------------------------------------

Use the command::

    svn co http://svn.scipy.org/svn/scipy/trunk scipy

Before building and installing from SVN, remove the old installation
(e.g. in /usr/lib/python2.4/site-packages/scipy or
$HOME/lib/python2.4/site-packages/scipy). Then type::

    cd scipy
    rm -rf build
    python setup.py install


INSTALLATION
============

First make sure that all SciPy prerequisites are installed and working
properly. Then be sure to remove any old SciPy installations (e.g.
/usr/lib/python2.4/site-packages/scipy or
$HOME/lib/python2.4/site-packages/scipy). On Windows, if you previously
installed scipy from a binary, use the remove facility in the Add/Remove
Programs panel, or remove the scipy directory by hand if you installed from
source (e.g. C:\Python24\Lib\site-packages\scipy for Python 2.4).

From tarballs
-------------

Unpack ``SciPy-<version>.tar.gz``, change to the ``SciPy-<version>/``
directory, and run::

    python setup.py install

This may take several minutes to an hour depending on the speed of your
computer. To install to a user-specific location instead, run::

    python setup.py install --prefix=$MYDIR

where $MYDIR is, for example, $HOME or $HOME/usr.
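As a minimal sketch (not part of the official instructions), a prefix install only becomes importable once its site-packages directory is on the module search path; the ``$HOME/usr`` prefix below is an assumption, adjust it to your own $MYDIR:

```python
# Sketch: make a --prefix=$MYDIR install importable by adding its
# site-packages directory to sys.path. The ~/usr prefix is an assumed
# example; normally you would instead export PYTHONPATH in your shell.
import os
import sys

mydir = os.path.expanduser('~/usr')                  # your $MYDIR
site_pkgs = os.path.join(
    mydir, 'lib',
    'python%d.%d' % sys.version_info[:2],            # e.g. python2.4
    'site-packages')

if site_pkgs not in sys.path:
    sys.path.insert(0, site_pkgs)
```

Setting ``PYTHONPATH=$MYDIR/lib/python2.4/site-packages`` in your shell profile achieves the same thing without editing scripts.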

** Note 1: On Unix, you should avoid installing in /usr, but rather in
/usr/local or somewhere else. /usr is generally 'owned' by your package
manager, and you may overwrite a packaged scipy this way.

TESTING
=======

To test SciPy after installation (highly recommended), execute in Python

  >>> import scipy
  >>> scipy.test()

To run the full test suite use

  >>> scipy.test('full')

Please note that you must have version 0.10 or later of the 'nose' test
framework installed in order to run the tests. More information about nose is
available on the website__.

__ http://somethingaboutorange.com/mrl/projects/nose/

COMPILER NOTES
==============

Note that SciPy is developed mainly using GNU compilers. Compilers from
other vendors such as Intel, Absoft, Sun, NAG, Compaq, Vast, Portland,
Lahey, HP, and IBM are supported in the form of community feedback.

The gcc__ compiler is recommended. gcc 3.x and 4.x are known to work.
If building on OS X, you should use the gcc provided by the Xcode tools, and
the gfortran compiler available here:

  http://r.research.att.com/tools/

You can specify which Fortran compiler to use with the following
install command::

    python setup.py config_fc --fcompiler=<Vendor> install

To see a valid list of <Vendor> names, run::

    python setup.py config_fc --help-fcompiler

IMPORTANT: It is highly recommended that all libraries that scipy uses (e.g.
the blas and atlas libraries) are built with the same Fortran compiler. In
most cases, if you mix compilers, you will at best be unable to import scipy,
and at worst get crashes and random results.

__ http://gcc.gnu.org/

Using non-GNU Fortran compiler with gcc/g77 compiled Atlas/Lapack libraries
---------------------------------------------------------------------------

When the Atlas/Lapack libraries are compiled with GNU compilers but
one wishes to build scipy with a non-GNU Fortran compiler, linking
extension modules may require -lg2c. You can specify it
on the installation command line as follows::

    python setup.py build build_ext -lg2c install

If using a non-GNU C compiler or linker, the location of the g2c library can
be specified in a similar manner using -L/path/to/libg2c.a after the
build_ext command.

Intel Fortran Compiler
----------------------

Note that code compiled by the Intel Fortran Compiler (IFC) is not
binary compatible with code compiled by g77. Therefore, when using IFC,
all Fortran code used in SciPy must be compiled with IFC. This also
includes the LAPACK, BLAS, and ATLAS libraries. Using GCC for compiling
C code is OK. IFC version 5.0 is not supported (because it has bugs that
cause SciPy's tests to segfault).

Minimum IFC flags for building LAPACK and ATLAS are::

    -FI -w90 -w95 -cm -O3 -unroll

Also consult 'ifc -help' for additional optimization flags suitable
for your computer's CPU.

When finishing the LAPACK build, you must recompile ?lamch.f and xerbla.f
with optimization disabled (otherwise infinite loops occur when using
these routines)::

    make lapacklib    # in /path/to/src/LAPACK/
    cd SRC
    ifc -FI -w90 -w95 -cm -O0 -c ?lamch.f xerbla.f
    cd ..
    make lapacklib


KNOWN INSTALLATION PROBLEMS
===========================

BLAS sources shipped with LAPACK are incomplete
-----------------------------------------------

Some distributions (e.g. Redhat Linux 7.1) provide BLAS libraries that
are built from such incomplete sources and therefore cause import
errors like::

    ImportError: .../fblas.so: undefined symbol: srotmg_

Fix:
  Use ATLAS or the official release of the BLAS libraries.

LAPACK library provided by ATLAS is incomplete
----------------------------------------------

You will notice it when getting import errors like::

    ImportError: .../flapack.so : undefined symbol: sgesdd_

To be sure that SciPy is built against a complete LAPACK, check the
size of the file liblapack.a -- it should be about 6MB. The location
of liblapack.a is shown by executing::

    python /lib/python2.4/site-packages/numpy/distutils/system_info.py

(or the appropriate installation directory).

To fix this, follow the instructions in

  http://math-atlas.sourceforge.net/errata.html#completelp

to create a complete liblapack.a. Then copy liblapack.a to the same
location where libatlas.a is installed and retry the scipy build.
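The size check described above can be scripted; the following is a rough heuristic sketch, not an official tool, and the path in the comment is only an assumed example location:

```python
# Rough heuristic for the "about 6MB" check described in the text: flag a
# liblapack.a that is well below the size of a complete LAPACK build.
# Not an official tool; adjust the threshold and path to your system.
import os

def lapack_looks_complete(path, min_bytes=5 * 1024 * 1024):
    """Return True if the archive at `path` is at least ~5 MB."""
    return os.path.getsize(path) >= min_bytes

# e.g. lapack_looks_complete('/usr/lib/atlas/liblapack.a')
```

A small result strongly suggests the incomplete ATLAS-provided LAPACK and the fix below applies.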

Using non-GNU Fortran Compiler
------------------------------

If importing scipy shows the message::

    ImportError: undefined symbol: s_wsfe

and you are using a non-GNU Fortran compiler, then it means that some of
the (possibly system-provided) Fortran libraries such as LAPACK or BLAS
were compiled with g77. See also the compiler notes above.

Recommended fix: Recompile all Fortran libraries with the same Fortran
compiler and rebuild/reinstall scipy.

Another fix: See the `Using non-GNU Fortran compiler with gcc/g77 compiled
Atlas/Lapack libraries` section above.


TROUBLESHOOTING
===============

If you experience problems when building/installing/testing SciPy, you
can ask for help on the scipy-user@scipy.org or scipy-dev@scipy.org mailing
lists. Please include the following information in your message:

NOTE: You can generate some of the following information (items 1-5, 7)
with one command::

    python -c 'from numpy.f2py.diagnose import run; run()'

1) Platform information::

    python -c 'import os,sys;print os.name,sys.platform'
    uname -a

   plus the OS, its distribution name and version information, etc.

2) Information about the C, C++, and Fortran compilers/linkers as reported
   by the compilers when requesting their version information, e.g., the
   output of::

    gcc -v
    g77 --version

3) Python version::

    python -c 'import sys;print sys.version'

4) NumPy version::

    python -c 'import numpy;print numpy.__version__'

5) ATLAS version, the locations of the atlas and lapack libraries, and
   build information if any. If you have ATLAS version 3.3.6 or newer,
   then give the output of the last command in::

    cd scipy/Lib/linalg
    python setup_atlas_version.py build_ext --inplace --force
    python -c 'import atlas_version'

7) The output of the following command::

    python INSTALLDIR/numpy/distutils/system_info.py

   where INSTALLDIR is, for example, /usr/lib/python2.4/site-packages/.

8) Feel free to add any other relevant information.
   For example, the full output (both stdout and stderr) of the SciPy
   installation command can be very helpful. Since this output can be
   rather large, ask before sending it to the mailing list (or
   better yet, send it to one of the developers, if asked).

9) In case of failing to import extension modules, the output of::

    ldd /path/to/ext_module.so

   can be useful.

You may find the following notes useful:

  http://www.tuxedo.org/~esr/faqs/smart-questions.html

  http://www.chiark.greenend.org.uk/~sgtatham/bugs.html

@ -1,9 +0,0 @@

The Subversion tree for this distribution contains the latest code.
It can be downloaded using the subversion client as

  svn co http://svn.scipy.org/svn/scipy/trunk scipy

which will create a directory named scipy in your current directory
and fill it with the current version of scipy.

@ -1,31 +0,0 @@

Copyright (c) 2001, 2002 Enthought, Inc.
All rights reserved.

Copyright (c) 2003-2009 SciPy Developers.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

  a. Redistributions of source code must retain the above copyright notice,
     this list of conditions and the following disclaimer.
  b. Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.
  c. Neither the name of the Enthought nor the names of its contributors
     may be used to endorse or promote products derived from this software
     without specific prior written permission.


THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.

@ -1,19 +0,0 @@

# Use the .add_data_files and .add_data_dir methods in the appropriate
# setup.py files to include non-python files such as documentation,
# data, etc. in the distribution. Avoid using MANIFEST.in for that.
#
include MANIFEST.in
include *.txt
include setupscons.py
include setupegg.py
include setup.py
include scipy/*.py
# Adding scons build related files not found by distutils
recursive-include scipy SConstruct SConscript
# Add documentation: we don't use add_data_dir since we do not want to include
# this at installation, only for sdist-generated tarballs
include doc/Makefile doc/postprocess.py
recursive-include doc/release *
recursive-include doc/source *
recursive-include doc/sphinxext *
prune scipy/special/tests/data/boost

@ -1,40 +0,0 @@

Metadata-Version: 1.0
Name: scipy
Version: 0.8.0
Summary: SciPy: Scientific Library for Python
Home-page: http://www.scipy.org
Author: SciPy Developers
Author-email: scipy-dev@scipy.org
License: BSD
Download-URL: http://sourceforge.net/project/showfiles.php?group_id=27747&package_id=19531
Description: SciPy (pronounced "Sigh Pie") is open-source software for mathematics,
        science, and engineering. The SciPy library
        depends on NumPy, which provides convenient and fast N-dimensional
        array manipulation. The SciPy library is built to work with NumPy
        arrays, and provides many user-friendly and efficient numerical
        routines such as routines for numerical integration and optimization.
        Together, they run on all popular operating systems, are quick to
        install, and are free of charge. NumPy and SciPy are easy to use,
        but powerful enough to be depended upon by some of the world's
        leading scientists and engineers. If you need to manipulate
        numbers on a computer and display or publish the results,
        give SciPy a try!

Platform: Windows
Platform: Linux
Platform: Solaris
Platform: Mac OS-X
Platform: Unix
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved
Classifier: Programming Language :: C
Classifier: Programming Language :: Python
Classifier: Topic :: Software Development
Classifier: Topic :: Scientific/Engineering
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: Operating System :: Unix
Classifier: Operating System :: MacOS

@ -1,135 +0,0 @@

=================================================
Developing SciPy
=================================================

.. Contents::


What is SciPy?
--------------

SciPy (pronounced "Sigh Pie") is open-source software for mathematics,
science, and engineering. It includes modules for statistics, optimization,
integration, linear algebra, Fourier transforms, signal and image processing,
ODE solvers, and more. It is also the name of a very popular conference on
scientific programming with Python.

The SciPy library depends on NumPy, which provides convenient and fast
N-dimensional array manipulation. The SciPy library is built to work with
NumPy arrays, and provides many user-friendly and efficient numerical routines
such as routines for numerical integration and optimization. Together, they
run on all popular operating systems, are quick to install, and are free of
charge. NumPy and SciPy are easy to use, but powerful enough to be depended
upon by some of the world's leading scientists and engineers. If you need to
manipulate numbers on a computer and display or publish the results, give
SciPy a try!


SciPy structure
---------------

SciPy aims at being a robust and efficient "super-package" of a number
of modules, each of a non-trivial size and complexity. In order for
"SciPy integration" to work flawlessly, all SciPy modules must follow
certain rules that are described in this document. Hopefully this
document will be helpful for SciPy contributors and developers as a
basic reference about the structure of the SciPy package.

Currently SciPy consists of the following files and directories:

  INSTALL.txt
    SciPy prerequisites, installation, testing, and troubleshooting.

  THANKS.txt
    SciPy developers and contributors. Please keep it up to date!!

  README.txt
    SciPy structure (this document).

  setup.py
    Script for building and installing SciPy.

  MANIFEST.in
    Additions to distutils-generated SciPy tar-balls. Its usage is
    deprecated.

  scipy/
    Contains the SciPy __init__.py and the directories of SciPy modules.

SciPy modules
+++++++++++++

In the following, a *SciPy module* is defined as a Python package, say
xxx, that is located in the scipy/ directory. All SciPy modules should
follow these conventions:

* Ideally, each SciPy module should be as self-contained as possible.
  That is, it should have minimal dependencies on other packages or
  modules. Even dependencies on other SciPy modules should be kept to a
  minimum. A dependency on NumPy is of course assumed.

* Directory ``xxx/`` must contain

  + a file ``setup.py`` that defines a
    ``configuration(parent_package='',top_path=None)`` function.
    See below for more details.

  + a file ``info.py``. See below for more details.

* Directory ``xxx/`` may contain

  + a directory ``tests/`` that contains files ``test_<name>.py``
    corresponding to modules ``xxx/<name>{.py,.so,/}``. See below for
    more details.

  + a file ``MANIFEST.in`` that may contain only an ``include setup.py``
    line. DO NOT specify sources in MANIFEST.in; you must specify all
    sources in the setup.py file. Otherwise released SciPy tarballs will
    miss these sources.

  + a directory ``docs/`` for documentation.

For details, read:

  http://projects.scipy.org/numpy/wiki/DistutilsDoc
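As a minimal sketch of the ``setup.py`` convention above, for a hypothetical module ``xxx`` (this assumes the ``numpy.distutils`` API of this era; see the DistutilsDoc link for the authoritative details):

```python
# Hypothetical minimal setup.py for a SciPy module "xxx" (illustrative
# sketch only; real modules also register extensions, data files, etc.).
def configuration(parent_package='', top_path=None):
    # Import inside the function so the file can be read without numpy.
    from numpy.distutils.misc_util import Configuration
    config = Configuration('xxx', parent_package, top_path)
    config.add_data_dir('tests')    # ship the tests/ directory
    return config

# When run directly, a real setup.py would then do, roughly:
#
#     from numpy.distutils.core import setup
#     setup(**configuration(top_path='').todict())
```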


Documentation
-------------

The documentation site is here:

  http://docs.scipy.org

Web sites
---------

The user's site is here:

  http://www.scipy.org/

The developer's site is here:

  http://projects.scipy.org/scipy/wiki


Mailing Lists
-------------

Please see the developer's list here:

  http://projects.scipy.org/mailman/listinfo/scipy-dev


Bug reports
-----------

To search for bugs, please use the SciPy Bug Tracker at

  http://projects.scipy.org/scipy/query

To report a bug, please use the SciPy Bug Tracker at

  http://projects.scipy.org/scipy/newticket


License information
-------------------

See the file "LICENSE" for information on the history of this
software, terms & conditions for usage, and a DISCLAIMER OF ALL
WARRANTIES.

@ -1,77 +0,0 @@

SciPy is an open source library of routines for science and engineering
using Python. It is a community project sponsored by Enthought, Inc.
SciPy originated with code contributions by Travis Oliphant, Pearu
Peterson, and Eric Jones. Travis Oliphant and Eric Jones each contributed
about half the initial code. Pearu Peterson developed f2py, which is
integral to wrapping the many Fortran libraries used in SciPy.

Since then many people have contributed to SciPy through code development,
suggestions, and financial support. Below is a partial list. If you've
been left off, please email the "SciPy Developers List" <scipy-dev@scipy.org>.

Please add names as needed so that we can keep up with all the contributors.

Kumar Appaiah for the Dolph Chebyshev window.
Nathan Bell for sparsetools, and help with scipy.sparse and scipy.splinalg.
Robert Cimrman for the UMFpack wrapper for the sparse matrix module.
David M. Cooke for improvements to system_info, and the LBFGSB wrapper.
Aric Hagberg for the ARPACK wrappers, and help with splinalg.eigen.
Chuck Harris for the Zeros package in optimize (1d root-finding algorithms).
Prabhu Ramachandran for improvements to gui_thread.
Robert Kern for improvements to stats and bug-fixes.
Jean-Sebastien Roy for the fmin_tnc code, which he adapted from Stephen
  Nash's original Fortran.
Ed Schofield for the maximum entropy and Monte Carlo modules, and help with
  the sparse matrix module.
Travis Vaught for numerous contributions to the annual conference and
  community web-site, and the initial work on the stats module clean up.
Jeff Whitaker for Mac OS X support.
David Cournapeau for bug-fixes, refactoring of fftpack and cluster,
  implementing the numscons build, building Windows binaries and
  adding single precision FFT.
Damian Eads for hierarchical clustering, dendrogram plotting,
  distance functions in the spatial package, and vq documentation.
Anne Archibald for kd-trees and nearest neighbor in scipy.spatial.
Pauli Virtanen for Sphinx documentation generation, the online documentation
  framework and interpolation bugfixes.
Josef Perktold for major improvements to scipy.stats and its test suite, and
  fixes and tests for optimize.curve_fit and leastsq.
David Morrill for getting the scoreboard test system up and running.
Louis Luangkesorn for providing multiple tests for the stats module.
Jochen Kupper for the zoom feature in the now-deprecated plt plotting module.
Tiffany Kamm for working on the community web-site.
Mark Koudritsky for maintaining the web-site.
Andrew Straw for help with the web-page, documentation, packaging,
  testing, and work on the linalg module.
Stefan van der Walt for numerous bug-fixes, testing and documentation.
Jarrod Millman for release management, community coordination, and code
  clean up.
Pierre Gerard-Marchant for statistical masked array functionality.
Alan McIntyre for updating the SciPy tests to use the new NumPy test
  framework.
Matthew Brett for work on the Matlab file IO, bug-fixes, and improvements
  to the testing framework.
Gary Strangman for the scipy.stats package.
Tiziano Zito for the generalized symmetric and hermitian eigenvalue problem
  solver.
Chris Burns for bug-fixes.
Per Brodtkorb for improvements to stats distributions.
Neilen Marais for testing and bug-fixing in the ARPACK wrappers.
Johannes Loehnert and Bart Vandereycken for fixes in the linalg
  module.
David Huard for improvements to the interpolation interface.
David Warde-Farley for converting the ndimage docs to ReST.
Uwe Schmitt for wrapping non-negative least-squares.
Ondrej Certik for Debian packaging.
Paul Ivanov for porting Numeric-style C code to the new NumPy API.
Ariel Rokem for contributions on percentileofscore fixes and tests.
Yosef Meller for tests in the optimization module.

Institutions
------------

Enthought for providing resources and finances for the development of SciPy.
Brigham Young University for providing resources for students to work on
  SciPy.
Agilent, which gave a generous donation for the support of SciPy.
UC Berkeley for providing travel money and hosting numerous sprints.
The University of Stellenbosch for funding the development of
  the SciKits portal.

@ -1,57 +0,0 @@

=================================================
Development Plans for SciPy 1.0
=================================================

See http://www.scipy.org/scipy/scipy/wiki/DevelopmentPlan
for updates of this document.

.. Contents::


General
--------

* distributions make heavy use of extract and insert (could use fancy
  indexing? -- but we should wait until we learn how slow fancy indexing
  is...)

* Use of the old Numeric C-API. Using it means an extra C-level function
  call, but ...

* Make use of type addition to extend certain ufuncs with cephes quad types

* Use finfo(foo).bar instead of limits.foo_bar (see r3358 and r3362)

* Comply with the Python Style Guide

  * use CamelCase for class names

* Improve testing (e.g., increased coverage)


Documentation
-------------

See http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines

* use the new docstring format


Packages
--------

* consider reorganizing the namespace

  * scipy.tests, scipy.misc, scipy.stsci

IO (scipy.io)
+++++++++++++

* io rewritten to use the internal writing capabilities of arrays

Image Processing (scipy.ndimage)
++++++++++++++++++++++++++++++++


Statistical Analysis (scipy.stats)
++++++++++++++++++++++++++++++++++

* add statistical models

@ -1,163 +0,0 @@

# Makefile for Sphinx documentation
#

PYVER =
PYTHON = python$(PYVER)

# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = LANG=C sphinx-build
PAPER =

NEED_AUTOSUMMARY = $(shell $(PYTHON) -c 'import sphinx; print sphinx.__version__ < "0.7" and "1" or ""')

# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source

.PHONY: help clean html web pickle htmlhelp latex changes linkcheck \
        dist dist-build

#------------------------------------------------------------------------------

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html      to make standalone HTML files"
	@echo "  pickle    to make pickle files (usable by e.g. sphinx-web)"
	@echo "  htmlhelp  to make HTML files and a HTML help project"
	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  changes   to make an overview over all changed/added/deprecated items"
	@echo "  linkcheck to check all external links for integrity"
	@echo "  dist PYVER=...   to make a distribution-ready tree"
	@echo "  upload USER=...  to upload results to docs.scipy.org"

clean:
	-rm -rf build/* source/generated


#------------------------------------------------------------------------------
# Automated generation of all documents
#------------------------------------------------------------------------------

# Build the current scipy version, and extract docs from it.
# We have to be careful of some issues:
#
# - Everything must be done using the same Python version
# - We must use eggs (otherwise they might override PYTHONPATH on import).
# - Different versions of easy_install install to different directories (!)
#

INSTALL_DIR = $(CURDIR)/build/inst-dist/
INSTALL_PPH = $(INSTALL_DIR)/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/lib/python$(PYVER)/dist-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/dist-packages

DIST_VARS=PYTHON="PYTHONPATH=$(INSTALL_PPH):$$PYTHONPATH python$(PYVER)" SPHINXBUILD="LANG=C PYTHONPATH=$(INSTALL_PPH):$$PYTHONPATH python$(PYVER) `which sphinx-build`"

UPLOAD_TARGET = $(USER)@docs.scipy.org:/home/docserver/www-root/doc/scipy/

upload:
	@test -e build/dist || { echo "make dist is required first"; exit 1; }
	@test output-is-fine -nt build/dist || { \
	  echo "Review the output in build/dist, and do 'touch output-is-fine' before uploading."; exit 1; }
	rsync -r -z --delete-after -p \
	  $(if $(shell test -f build/dist/scipy-ref.pdf && echo "y"),, \
	    --exclude '**-ref.pdf' --exclude '**-user.pdf') \
	  $(if $(shell test -f build/dist/scipy-chm.zip && echo "y"),, \
	    --exclude '**-chm.zip') \
	  build/dist/ $(UPLOAD_TARGET)

dist:
	make $(DIST_VARS) real-dist

real-dist: dist-build html
	test -d build/latex || make latex
	make -C build/latex all-pdf
	-test -d build/htmlhelp || make htmlhelp-build
	-rm -rf build/dist
	mkdir -p build/dist
	cp -r build/html build/dist/reference
	touch build/dist/index.html
	perl -pi -e 's#^\s*(<li><a href=".*?">SciPy.*?Reference Guide.*?»</li>)\s*$$#<li><a href="/">Numpy and Scipy Documentation</a> »</li> $$1#;' build/dist/*.html build/dist/*/*.html build/dist/*/*/*.html
	(cd build/html && zip -9qr ../dist/scipy-html.zip .)
	cp build/latex/scipy*.pdf build/dist
	-zip build/dist/scipy-chm.zip build/htmlhelp/scipy.chm
	cd build/dist && tar czf ../dist.tar.gz *
	chmod ug=rwX,o=rX -R build/dist
	find build/dist -type d -print0 | xargs -0r chmod g+s

dist-build:
	rm -f ../dist/*.egg
	cd .. && $(PYTHON) setupegg.py bdist_egg
	install -d $(subst :, ,$(INSTALL_PPH))
	$(PYTHON) `which easy_install` --prefix=$(INSTALL_DIR) ../dist/*.egg


#------------------------------------------------------------------------------
# Basic Sphinx generation rules for different formats
#------------------------------------------------------------------------------

generate: build/generate-stamp
build/generate-stamp: $(wildcard source/*.rst)
	mkdir -p build
ifeq ($(NEED_AUTOSUMMARY),1)
	$(PYTHON) \
	  ./sphinxext/autosummary_generate.py source/*.rst \
	  -p dump.xml -o source/generated
endif
	touch build/generate-stamp

html: generate
|
||||
mkdir -p build/html build/doctrees
|
||||
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html
|
||||
$(PYTHON) postprocess.py html build/html/*.html
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in build/html."
|
||||
|
||||
pickle: generate
|
||||
mkdir -p build/pickle build/doctrees
|
||||
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) build/pickle
|
||||
@echo
|
||||
@echo "Build finished; now you can process the pickle files or run"
|
||||
@echo " sphinx-web build/pickle"
|
||||
@echo "to start the sphinx-web server."
|
||||
|
||||
web: pickle
|
||||
|
||||
htmlhelp: generate
|
||||
mkdir -p build/htmlhelp build/doctrees
|
||||
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp
|
||||
@echo
|
||||
@echo "Build finished; now you can run HTML Help Workshop with the" \
|
||||
".hhp project file in build/htmlhelp."
|
||||
|
||||
htmlhelp-build: htmlhelp build/htmlhelp/scipy.chm
|
||||
%.chm: %.hhp
|
||||
-hhc.exe $^
|
||||
|
||||
latex: generate
|
||||
mkdir -p build/latex build/doctrees
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex
|
||||
$(PYTHON) postprocess.py tex build/latex/*.tex
|
||||
perl -pi -e 's/\t(latex.*|pdflatex) (.*)/\t-$$1 -interaction batchmode $$2/' build/latex/Makefile
|
||||
@echo
|
||||
@echo "Build finished; the LaTeX files are in build/latex."
|
||||
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
|
||||
"run these through (pdf)latex."
|
||||
|
||||
coverage: build
|
||||
mkdir -p build/coverage build/doctrees
|
||||
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) build/coverage
|
||||
@echo "Coverage finished; see c.txt and python.txt in build/coverage"
|
||||
|
||||
changes: generate
|
||||
mkdir -p build/changes build/doctrees
|
||||
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes
|
||||
@echo
|
||||
@echo "The overview file is in build/changes."
|
||||
|
||||
linkcheck: generate
|
||||
mkdir -p build/linkcheck build/doctrees
|
||||
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck
|
||||
@echo
|
||||
@echo "Link check complete; look for any errors in the above output " \
|
||||
"or in build/linkcheck/output.txt."
|
|
@@ -1,55 +0,0 @@
#!/usr/bin/env python
"""
%prog MODE FILES...

Post-processes HTML and Latex files output by Sphinx.
MODE is either 'html' or 'tex'.

"""
import re, optparse

def main():
    p = optparse.OptionParser(__doc__)
    options, args = p.parse_args()

    if len(args) < 1:
        p.error('no mode given')

    mode = args.pop(0)

    if mode not in ('html', 'tex'):
        p.error('unknown mode %s' % mode)

    for fn in args:
        f = open(fn, 'r')
        try:
            if mode == 'html':
                lines = process_html(fn, f.readlines())
            elif mode == 'tex':
                lines = process_tex(f.readlines())
        finally:
            f.close()

        f = open(fn, 'w')
        f.write("".join(lines))
        f.close()

def process_html(fn, lines):
    return lines

def process_tex(lines):
    """
    Remove unnecessary section titles from the LaTeX file,
    and convert UTF-8 non-breaking spaces to Latex nbsps.

    """
    new_lines = []
    for line in lines:
        if re.match(r'^\\(section|subsection|subsubsection|paragraph|subparagraph){(numpy|scipy)\.', line):
            pass  # skip!
        else:
            new_lines.append(line)
    return new_lines

if __name__ == "__main__":
    main()
@@ -1,348 +0,0 @@
=========================
SciPy 0.7.0 Release Notes
=========================

.. contents::

SciPy 0.7.0 is the culmination of 16 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and
better documentation. There have been a number of deprecations and
API changes in this release, which are documented below. All users
are encouraged to upgrade to this release, as there are a large number
of bug-fixes and optimizations. Moreover, our development attention
will now shift to bug-fix releases on the 0.7.x branch, and on adding
new features on the development trunk. This release requires Python
2.4 or 2.5 and NumPy 1.2 or greater.

Please note that SciPy is still considered to have "Beta" status, as
we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a
major milestone in the development of SciPy, after which changing the
package structure or API will be much more difficult. Whilst these
pre-1.0 releases are considered to have "Beta" status, we are
committed to making them as bug-free as possible. For example, in
addition to fixing numerous bugs in this release, we have also doubled
the number of unit tests since the last release.

However, until the 1.0 release, we are aggressively reviewing and
refining the functionality, organization, and interface. This is being
done in an effort to make the package as coherent, intuitive, and
useful as possible. To achieve this, we need help from the community
of users. Specifically, we need feedback regarding all aspects of the
project - everything - from which algorithms we implement, to details
about our functions' call signatures.

Over the last year, we have seen a rapid increase in community
involvement, and numerous infrastructure improvements to lower the
barrier to contributions (e.g., more explicit coding standards,
improved testing infrastructure, better documentation tools). Over
the next year, we hope to see this trend continue and invite everyone
to become more involved.

Python 2.6 and 3.0
------------------

A significant amount of work has gone into making SciPy compatible
with Python 2.6; however, there are still some issues in this regard.
The main issue with 2.6 support is NumPy. On UNIX (including Mac OS
X), NumPy 1.2.1 mostly works, with a few caveats. On Windows, there
are problems related to the compilation process. The upcoming NumPy
1.3 release will fix these problems. Any remaining issues with 2.6
support for SciPy 0.7 will be addressed in a bug-fix release.

Python 3.0 is not supported at all; it requires NumPy to be ported to
Python 3.0. This requires immense effort, since a lot of C code has
to be ported. The transition to 3.0 is still under consideration;
currently, we don't have any timeline or roadmap for this transition.

Major documentation improvements
--------------------------------

SciPy documentation is greatly improved; you can view a HTML reference
manual `online <http://docs.scipy.org/>`__ or download it as a PDF
file. The new reference guide was built using the popular `Sphinx tool
<http://sphinx.pocoo.org/>`__.

This release also includes an updated tutorial, which hadn't been
available since SciPy was ported to NumPy in 2005. Though not
comprehensive, the tutorial shows how to use several essential parts
of SciPy. It also includes the ``ndimage`` documentation from the
``numarray`` manual.

Nevertheless, more effort is needed on the documentation front.
Luckily, contributing to SciPy documentation is now easier than
before: if you find that a part of it requires improvement and want
to help us out, please register a user name in our web-based
documentation editor at http://docs.scipy.org/ and correct the issues.

Running Tests
-------------

NumPy 1.2 introduced a new testing framework based on `nose
<http://somethingaboutorange.com/mrl/projects/nose/>`__. Starting with
this release, SciPy now uses the new NumPy test framework as well.
Taking advantage of the new testing framework requires ``nose``
version 0.10, or later. One major advantage of the new framework is
that it greatly simplifies writing unit tests - which has already
paid off, given the rapid increase in tests. To run the full test
suite::

    >>> import scipy
    >>> scipy.test('full')

For more information, please see `The NumPy/SciPy Testing Guide
<http://projects.scipy.org/scipy/numpy/wiki/TestingGuidelines>`__.

We have also greatly improved our test coverage. There were just over
2,000 unit tests in the 0.6.0 release; this release nearly doubles
that number, with just over 4,000 unit tests.

Building SciPy
--------------

Support for NumScons has been added. NumScons is a tentative new build
system for NumPy/SciPy, using `SCons <http://www.scons.org/>`__ at its
core.

SCons is a next-generation build system, intended to replace the
venerable ``Make`` with the integrated functionality of
``autoconf``/``automake`` and ``ccache``. SCons is written in Python
and its configuration files are Python scripts. NumScons is meant to
replace NumPy's custom version of ``distutils``, providing more
advanced functionality, such as ``autoconf``, improved Fortran
support, more tools, and support for ``numpy.distutils``/``scons``
cooperation.

Sandbox Removed
---------------

While porting SciPy to NumPy in 2005, several packages and modules
were moved into ``scipy.sandbox``. The sandbox was a staging ground
for packages that were undergoing rapid development and whose APIs
were in flux. It was also a place where broken code could live. The
sandbox has served its purpose well, but was starting to create
confusion. Thus ``scipy.sandbox`` was removed. Most of the code was
moved into ``scipy``, some code was made into a ``scikit``, and the
remaining code was just deleted, as the functionality had been
replaced by other code.

Sparse Matrices
---------------

Sparse matrices have seen extensive improvements. There is now
support for integer dtypes such as ``int8``, ``uint32``, etc. Two new
sparse formats were added:

* new class ``dia_matrix`` : the sparse DIAgonal format
* new class ``bsr_matrix`` : the Block CSR format

Several new sparse matrix construction functions were added:

* ``sparse.kron`` : sparse Kronecker product
* ``sparse.bmat`` : sparse version of ``numpy.bmat``
* ``sparse.vstack`` : sparse version of ``numpy.vstack``
* ``sparse.hstack`` : sparse version of ``numpy.hstack``

Functions for extracting submatrices and nonzero values have been added:

* ``sparse.tril`` : extract lower triangle
* ``sparse.triu`` : extract upper triangle
* ``sparse.find`` : nonzero values and their indices

``csr_matrix`` and ``csc_matrix`` now support slicing and fancy
indexing (e.g., ``A[1:3, 4:7]`` and ``A[[3,2,6,8],:]``). Conversions
among all sparse formats are now possible:

* using member functions such as ``.tocsr()`` and ``.tolil()``
* using the ``.asformat()`` member function, e.g. ``A.asformat('csr')``
* using constructors ``A = lil_matrix([[1,2]]); B = csr_matrix(A)``

All sparse constructors now accept dense matrices and lists of lists.
For example:

* ``A = csr_matrix( rand(3,3) )`` and ``B = lil_matrix( [[1,2],[3,4]] )``

The handling of diagonals in the ``spdiags`` function has been changed.
It now agrees with the MATLAB(TM) function of the same name.
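As an illustrative sketch (not part of the original release notes), the new constructors, conversion routines and extraction functions described above can be exercised like this:

```python
from scipy import sparse

# Build a sparse matrix directly from a list of lists, as the new
# constructors allow.
A = sparse.lil_matrix([[1, 0], [3, 4]])

# Convert among formats via member functions or asformat().
B = A.tocsr()
C = A.asformat('csc')

# sparse.kron: sparse Kronecker product of two sparse matrices.
K = sparse.kron(B, sparse.identity(2))

# sparse.tril extracts the lower triangle; sparse.find returns the
# row indices, column indices, and values of the nonzero entries.
L = sparse.tril(B)
rows, cols, vals = sparse.find(B)
```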

Numerous efficiency improvements to format conversions and sparse
matrix arithmetic have been made. Finally, this release contains
numerous bugfixes.

Statistics package
------------------

Statistical functions for masked arrays have been added, and are
accessible through ``scipy.stats.mstats``. The functions are similar
to their counterparts in ``scipy.stats``, but they have not yet been
verified for identical interfaces and algorithms.

Several bugs were fixed for statistical functions; among them,
``kstest`` and ``percentileofscore`` gained new keyword arguments.

Deprecation warnings were added for ``mean``, ``median``, ``var``,
``std``, ``cov``, and ``corrcoef``. These functions should be replaced
by their numpy counterparts. Note, however, that some of the default
options differ between the ``scipy.stats`` and numpy versions of these
functions.

Numerous bug fixes were made to ``stats.distributions``: all generic
methods now work correctly, and several methods in individual
distributions were corrected. However, a few issues remain with higher
moments (``skew``, ``kurtosis``) and entropy. The maximum likelihood
estimator, ``fit``, does not work out-of-the-box for some
distributions - in some cases, starting values have to be carefully
chosen; in other cases, the generic implementation of the maximum
likelihood method might not be the numerically appropriate estimation
method.

We expect more bugfixes, increases in numerical precision and
enhancements in the next release of SciPy.

Reworking of IO package
-----------------------

The IO code in both NumPy and SciPy is being extensively
reworked. NumPy will be where basic code for reading and writing NumPy
arrays is located, while SciPy will house file readers and writers for
various data formats (data, audio, video, images, matlab, etc.).

Several functions in ``scipy.io`` have been deprecated and will be
removed in the 0.8.0 release, including ``npfile``, ``save``, ``load``,
``create_module``, ``create_shelf``, ``objload``, ``objsave``,
``fopen``, ``read_array``, ``write_array``, ``fread``, ``fwrite``,
``bswap``, ``packbits``, ``unpackbits``, and ``convert_objectarray``.
Some of these functions have been replaced by NumPy's raw reading and
writing capabilities, memory-mapping capabilities, or array methods.
Others have been moved from SciPy to NumPy, since basic array reading
and writing capability is now handled by NumPy.

The Matlab (TM) file readers/writers have a number of improvements:

* default version 5
* v5 writers for structures, cell arrays, and objects
* v5 readers/writers for function handles and 64-bit integers
* new ``struct_as_record`` keyword argument to ``loadmat``, which loads
  struct arrays in matlab as record arrays in numpy
* string arrays have ``dtype='U...'`` instead of ``dtype=object``
* ``loadmat`` no longer squeezes singleton dimensions, i.e.
  ``squeeze_me=False`` by default

New Hierarchical Clustering module
----------------------------------

This module adds new hierarchical clustering functionality to the
``scipy.cluster`` package. The function interfaces are similar to the
functions provided by MATLAB(TM)'s Statistics Toolbox to help
facilitate easier migration to the NumPy/SciPy framework. Linkage
methods implemented include single, complete, average, weighted,
centroid, median, and ward.

In addition, several functions are provided for computing
inconsistency statistics, cophenetic distance, and maximum distance
between descendants. The ``fcluster`` and ``fclusterdata`` functions
transform a hierarchical clustering into a set of flat clusters. Since
these flat clusters are generated by cutting the tree into a forest of
trees, the ``leaders`` function takes a linkage and a flat clustering,
and finds the root of each tree in the forest. The ``ClusterNode``
class represents a hierarchical clustering as a field-navigable tree
object. ``to_tree`` converts a matrix-encoded hierarchical clustering
to a ``ClusterNode`` object. Routines for converting between MATLAB
and SciPy linkage encodings are provided. Finally, a ``dendrogram``
function plots hierarchical clusterings as a dendrogram, using
matplotlib.

New Spatial package
-------------------

The new spatial package contains a collection of spatial algorithms
and data structures, useful for spatial statistics and clustering
applications. It includes fast compiled code for computing exact
and approximate nearest neighbors, as well as a pure-python kd-tree
with the same interface that additionally supports annotation and a
variety of other algorithms. The API for both modules may change
somewhat, as user requirements become clearer.

It also includes a ``distance`` module, containing a collection of
distance and dissimilarity functions for computing distances between
vectors, which is useful for spatial statistics, clustering, and
kd-trees. Distance and dissimilarity functions provided include
Bray-Curtis, Canberra, Chebyshev, City Block, Cosine, Dice, Euclidean,
Hamming, Jaccard, Kulsinski, Mahalanobis, Matching, Minkowski,
Rogers-Tanimoto, Russell-Rao, Squared Euclidean, Standardized
Euclidean, Sokal-Michener, Sokal-Sneath, and Yule.

The ``pdist`` function computes pairwise distances between all
unordered pairs of vectors in a set of vectors. The ``cdist`` function
computes the distances for all pairs of vectors in the Cartesian
product of two sets of vectors. Pairwise distance matrices are stored
in condensed form; only the upper triangle is stored. ``squareform``
converts distance matrices between square and condensed forms.
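As a usage sketch (illustrative, not taken from the release notes), the ``pdist``/``cdist``/``squareform`` trio works as follows:

```python
import numpy as np
from scipy.spatial.distance import pdist, cdist, squareform

# Three points in the plane; pdist returns the condensed distance
# matrix: one entry per unordered pair, so 3 entries here, ordered
# (0,1), (0,2), (1,2).
X = np.array([[0.0, 0.0], [3.0, 4.0], [0.0, 4.0]])
d = pdist(X, metric='euclidean')        # shape (3,)

# squareform expands the condensed form to a full symmetric matrix
# with a zero diagonal (and converts back again when given a square
# matrix).
D = squareform(d)                       # shape (3, 3)

# cdist computes distances between every pair drawn from two sets.
Y = np.array([[1.0, 1.0]])
DY = cdist(X, Y)                        # shape (3, 1)
```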

Reworked fftpack package
------------------------

FFTW2, FFTW3, MKL and DJBFFT wrappers have been removed. Only (NETLIB)
fftpack remains. By focusing on one backend, we hope to add new
features - like float32 support - more easily.

New Constants package
---------------------

``scipy.constants`` provides a collection of physical constants and
conversion factors. These constants are taken from CODATA Recommended
Values of the Fundamental Physical Constants: 2002. They may be found
at physics.nist.gov/constants. The values are stored in the dictionary
``physical_constants`` as a tuple containing the value, the units, and
the relative precision - in that order. All constants are in SI units,
unless otherwise stated. Several helper functions are provided.
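A brief illustrative sketch (not part of the original notes) of the ``physical_constants`` dictionary and the module-level shortcuts:

```python
from scipy import constants
from scipy.constants import physical_constants

# Dictionary entries are (value, units, uncertainty) tuples.
value, units, uncertainty = physical_constants['electron mass']

# Frequently used constants are also exposed directly as module
# attributes...
c = constants.c            # speed of light in vacuum, in m/s

# ...and SI prefixes / conversion factors as plain numbers.
km = constants.kilo        # 1000.0
```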

New Radial Basis Function module
--------------------------------

``scipy.interpolate`` now contains a Radial Basis Function module.
Radial basis functions can be used for smoothing/interpolating
scattered data in n dimensions, but should be used with caution for
extrapolation outside of the observed data range.

New complex ODE integrator
--------------------------

``scipy.integrate.ode`` now contains a wrapper for the ZVODE
complex-valued ordinary differential equation solver (by Peter
N. Brown, Alan C. Hindmarsh, and George D. Byrne).

New generalized symmetric and hermitian eigenvalue problem solver
-----------------------------------------------------------------

``scipy.linalg.eigh`` now contains wrappers for more LAPACK symmetric
and hermitian eigenvalue problem solvers. Users can now solve
generalized problems, select only a range of eigenvalues, and choose
to use a faster algorithm at the expense of increased memory
usage. The signature of ``scipy.linalg.eigh`` was changed accordingly.

Bug fixes in the interpolation package
--------------------------------------

The shape of return values from ``scipy.interpolate.interp1d`` used to
be incorrect if the interpolated data had more than 2 dimensions and
the axis keyword was set to a non-default value. This has been fixed.
Moreover, ``interp1d`` now returns a scalar (0D array) if the input
is a scalar. Users of ``scipy.interpolate.interp1d`` may need to
revise their code if it relies on the previous behavior.

Weave clean up
--------------

There were numerous improvements to ``scipy.weave``. ``blitz++`` was
relicensed by the author to be compatible with the SciPy license.
``wx_spec.py`` was removed.

Known problems
--------------

Here are known problems with scipy 0.7.0:

* weave test failures on windows: those are known, and are being revised.
* weave test failure with gcc 4.3 (std::labs): this is a gcc 4.3 bug. A
  workaround is to add ``#include <cstdlib>`` in
  ``scipy/weave/blitz/blitz/funcs.h`` (line 27). You can make the change
  in the installed scipy (in site-packages).
@@ -1,263 +0,0 @@
=========================
SciPy 0.8.0 Release Notes
=========================

.. contents::

SciPy 0.8.0 is the culmination of 17 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and
better documentation. There have been a number of deprecations and
API changes in this release, which are documented below. All users
are encouraged to upgrade to this release, as there are a large number
of bug-fixes and optimizations. Moreover, our development attention
will now shift to bug-fix releases on the 0.8.x branch, and on adding
new features on the development trunk. This release requires Python
2.4 - 2.6 and NumPy 1.4.1 or greater.

Please note that SciPy is still considered to have "Beta" status, as
we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a
major milestone in the development of SciPy, after which changing the
package structure or API will be much more difficult. Whilst these
pre-1.0 releases are considered to have "Beta" status, we are
committed to making them as bug-free as possible.

However, until the 1.0 release, we are aggressively reviewing and
refining the functionality, organization, and interface. This is being
done in an effort to make the package as coherent, intuitive, and
useful as possible. To achieve this, we need help from the community
of users. Specifically, we need feedback regarding all aspects of the
project - everything - from which algorithms we implement, to details
about our functions' call signatures.

Python 3
========

Python 3 compatibility is planned and is currently technically
feasible, since NumPy has been ported. However, since the Python 3
compatible NumPy 1.5 has not been released yet, support for Python 3
in SciPy is not yet included in SciPy 0.8. SciPy 0.9, planned for fall
2010, will very likely include experimental support for Python 3.

Major documentation improvements
================================

SciPy documentation is greatly improved.

Deprecated features
===================

Swapping inputs for correlation functions (scipy.signal)
--------------------------------------------------------

This concerns ``correlate``, ``correlate2d``, ``convolve`` and
``convolve2d``. If the second input is larger than the first input,
the inputs are swapped before calling the underlying computation
routine. This behavior is deprecated, and will be removed in scipy
0.9.0.

Obsolete code deprecated (scipy.misc)
-------------------------------------

The modules `helpmod`, `ppimport` and `pexec` from `scipy.misc` are
deprecated. They will be removed from SciPy in version 0.9.

Additional deprecations
-----------------------

* linalg: The function `solveh_banded` currently returns a tuple containing
  the Cholesky factorization and the solution to the linear system. In
  SciPy 0.9, the return value will be just the solution.
* The function `constants.codata.find` will generate a DeprecationWarning.
  In SciPy version 0.8.0, the keyword argument 'disp' was added to the
  function, with the default value 'True'. In 0.9.0, the default will be
  'False'.
* The `qshape` keyword argument of `signal.chirp` is deprecated. Use
  the argument `vertex_zero` instead.
* Passing the coefficients of a polynomial as the argument `f0` to
  `signal.chirp` is deprecated. Use the function `signal.sweep_poly`
  instead.
* The `io.recaster` module has been deprecated and will be removed in 0.9.0.

New features
============

DCT support (scipy.fftpack)
---------------------------

New real transforms have been added, namely ``dct`` and ``idct`` for
the Discrete Cosine Transform; types I, II and III are available.
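For illustration (a sketch, not taken from the release notes): with orthonormal scaling, the type-II DCT is inverted exactly by ``idct``:

```python
import numpy as np
from scipy.fftpack import dct, idct

x = np.array([1.0, 2.0, 3.0, 4.0])

# Type-II DCT (the default type); norm='ortho' makes the transform
# orthonormal, so idct with the same options inverts it exactly.
y = dct(x, type=2, norm='ortho')
x_back = idct(y, type=2, norm='ortho')
```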

Single precision support for fft functions (scipy.fftpack)
----------------------------------------------------------

fft functions can now handle single precision inputs as well: fft(x)
will return a single precision array if x is single precision.

At the moment, for FFT sizes that are not composites of 2, 3, and 5,
the transform is computed internally in double precision to avoid
rounding error in FFTPACK.

Correlation functions now implement the usual definition (scipy.signal)
-----------------------------------------------------------------------

The outputs should now correspond to their matlab and R counterparts,
and do what most people expect if the old_behavior=False argument is
passed:

* correlate, convolve and their 2d counterparts no longer swap their
  inputs depending on their relative shapes;
* correlation functions now conjugate their second argument while
  computing the sliding sum-products, which corresponds to the usual
  definition of correlation.

Additions and modification to LTI functions (scipy.signal)
----------------------------------------------------------

* The functions `impulse2` and `step2` were added to `scipy.signal`.
  They use the function `scipy.signal.lsim2` to compute the impulse and
  step response of a system, respectively.
* The function `scipy.signal.lsim2` was changed to pass any additional
  keyword arguments to the ODE solver.

Improved waveform generators (scipy.signal)
-------------------------------------------

Several improvements to the `chirp` function in `scipy.signal` were made:

* The waveform generated when `method="logarithmic"` was corrected; it
  now generates a waveform that is also known as an "exponential" or
  "geometric" chirp. (See http://en.wikipedia.org/wiki/Chirp.)
* A new `chirp` method, "hyperbolic", was added.
* Instead of the keyword `qshape`, `chirp` now uses the keyword
  `vertex_zero`, a boolean.
* `chirp` no longer handles an arbitrary polynomial. This functionality
  has been moved to a new function, `sweep_poly`.

A new function, `sweep_poly`, was added.

New functions and other changes in scipy.linalg
-----------------------------------------------

The functions `cho_solve_banded`, `circulant`, `companion`, `hadamard` and
`leslie` were added to `scipy.linalg`.

The function `block_diag` was enhanced to accept scalar and 1D arguments,
along with the usual 2D arguments.
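A minimal sketch (illustrative, not from the release notes) of the enhanced `block_diag` accepting mixed-rank arguments:

```python
from scipy.linalg import block_diag

# Scalars become 1x1 blocks and 1-D sequences become 1xN row blocks,
# placed along the diagonal next to ordinary 2-D blocks.
B = block_diag(1, [2, 3], [[4, 5], [6, 7]])
# B is:
# [[1 0 0 0 0]
#  [0 2 3 0 0]
#  [0 0 0 4 5]
#  [0 0 0 6 7]]
```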

New function and changes in scipy.optimize
------------------------------------------

The `curve_fit` function has been added; it takes a function and uses
non-linear least squares to fit it to the provided data.
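A hedged usage sketch (the model function and parameter values here are illustrative, not from the release notes):

```python
import numpy as np
from scipy.optimize import curve_fit

# Model with unknown parameters a and b to be estimated.
def model(x, a, b):
    return a * x + b

# Noise-free synthetic data generated with a=2.5, b=-1.0.
xdata = np.linspace(0.0, 10.0, 50)
ydata = model(xdata, 2.5, -1.0)

# curve_fit returns the optimal parameters and their covariance matrix.
popt, pcov = curve_fit(model, xdata, ydata)
```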
|
||||
The `leastsq` and `fsolve` functions now return an array of size one instead of
|
||||
a scalar when solving for a single parameter.
|
||||
|
||||
New sparse least squares solver
|
||||
-------------------------------
|
||||
|
||||
The `lsqr` function was added to `scipy.sparse`. `This routine
|
||||
<http://www.stanford.edu/group/SOL/software/lsqr.html>`_ finds a
|
||||
least-squares solution to a large, sparse, linear system of equations.
|
||||
|
||||
ARPACK-based sparse SVD
|
||||
-----------------------
|
||||
|
||||
A naive implementation of SVD for sparse matrices is available in
|
||||
scipy.sparse.linalg.eigen.arpack. It is based on using an symmetric solver on
|
||||
<A, A>, and as such may not be very precise.
|
||||

Alternative behavior available for `scipy.constants.find`
---------------------------------------------------------

The keyword argument `disp` was added to the function `scipy.constants.find`,
with the default value `True`. When `disp` is `True`, the behavior is the
same as in SciPy version 0.7. When `False`, the function returns the list of
keys instead of printing them. (In SciPy version 0.9, the default will be
reversed.)

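With `disp=False` the matching keys come back as a list instead of being printed (a minimal sketch):

```python
from scipy.constants import find, physical_constants

keys = find('Boltzmann', disp=False)  # list of matching keys, nothing printed
# Each key indexes the physical_constants database.
k_val, k_unit, k_unc = physical_constants['Boltzmann constant']
```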

Incomplete sparse LU decompositions
-----------------------------------

SciPy now wraps SuperLU version 4.0, which supports incomplete sparse LU
decompositions. These can be accessed via `scipy.sparse.linalg.spilu`.
The upgrade to SuperLU 4.0 also fixes some known bugs.

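A minimal sketch of `spilu` (the matrix here is an arbitrary toy example; for this tridiagonal sparsity pattern the incomplete factorization happens to be exact):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spilu

A = csc_matrix(np.array([[4., 1., 0.],
                         [1., 4., 1.],
                         [0., 1., 4.]]))
ilu = spilu(A)            # incomplete LU factorization object
b = np.array([1., 2., 3.])
x = ilu.solve(b)          # approximate solve, usable as a preconditioner
```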

Faster MATLAB file reader and default behavior change
------------------------------------------------------

We've rewritten the MATLAB file reader in Cython and it should now read
MATLAB files at around the same speed that MATLAB itself does.

The reader reads MATLAB named and anonymous functions, but it can't
write them.

Until SciPy 0.8.0 we have returned arrays of MATLAB structs as numpy
object arrays, where the objects have attributes named for the struct
fields. As of 0.8.0, we return MATLAB structs as numpy structured
arrays. You can get the older behavior by using the optional
``struct_as_record=False`` keyword argument to `scipy.io.loadmat` and
friends.

There is an inconsistency in the MATLAB file writer, in that it writes
numpy 1D arrays as column vectors in MATLAB 5 files, and row vectors in
MATLAB 4 files. We will change this in the next version, so that both write
row vectors. There is a `FutureWarning` when calling the writer to warn
of this change; for now we suggest using the ``oned_as='row'`` keyword
argument to `scipy.io.savemat` and friends.

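For example, writing a 1D array into an in-memory buffer with the suggested keyword (a sketch; the variable name `v` is arbitrary):

```python
import io
import numpy as np
from scipy.io import savemat, loadmat

buf = io.BytesIO()
# Opt in to the future behavior: 1D arrays are written as row vectors.
savemat(buf, {'v': np.arange(3.0)}, oned_as='row')
buf.seek(0)
data = loadmat(buf)
print(data['v'].shape)  # (1, 3)
```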

Faster evaluation of orthogonal polynomials
-------------------------------------------

Values of orthogonal polynomials can be evaluated with new vectorized functions
in `scipy.special`: `eval_legendre`, `eval_chebyt`, `eval_chebyu`,
`eval_chebyc`, `eval_chebys`, `eval_jacobi`, `eval_laguerre`,
`eval_genlaguerre`, `eval_hermite`, `eval_hermitenorm`,
`eval_gegenbauer`, `eval_sh_legendre`, `eval_sh_chebyt`,
`eval_sh_chebyu`, `eval_sh_jacobi`. This is faster than constructing the
full coefficient representation of the polynomials, which was previously the
only available way.

Note that the previous orthogonal polynomial routines will now also invoke this
feature, when possible.

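For instance, `eval_chebyt` can be checked against the closed form of the degree-3 Chebyshev polynomial:

```python
import numpy as np
from scipy.special import eval_chebyt

x = np.linspace(-1, 1, 5)
vals = eval_chebyt(3, x)  # T_3 evaluated pointwise over the array
# Agrees with the closed form T_3(x) = 4x^3 - 3x.
```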

Lambert W function
------------------

`scipy.special.lambertw` can now be used for evaluating the Lambert W
function.

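A quick check of the defining identity ``w * exp(w) = z``:

```python
import numpy as np
from scipy.special import lambertw

w = lambertw(1.0)  # principal branch; satisfies w * exp(w) == 1
```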

Improved hypergeometric 2F1 function
------------------------------------

The implementation of `scipy.special.hyp2f1` for real parameters was revised.
The new version should produce accurate values for all real parameters.


More flexible interface for radial basis function interpolation
---------------------------------------------------------------

The `scipy.interpolate.Rbf` class now accepts a callable as input for the
"function" argument, in addition to the built-in radial basis functions which
can be selected with a string argument.

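A sketch with a hypothetical user-supplied Gaussian kernel; the callable receives the `Rbf` instance and the radius array, so it can use fitted attributes such as `epsilon`:

```python
import numpy as np
from scipy.interpolate import Rbf

x = np.linspace(0, 10, 9)
y = np.sin(x)

# Hypothetical custom kernel passed via the "function" argument.
def gaussian(self, r):
    return np.exp(-(r / self.epsilon) ** 2)

rbf = Rbf(x, y, function=gaussian)
```

At the data sites the interpolant reproduces the input values exactly.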

Removed features
================

scipy.stsci: the package was removed.

The module `scipy.misc.limits` was removed.

scipy.io
--------

The IO code in both NumPy and SciPy is being extensively
reworked. NumPy will be where basic code for reading and writing NumPy
arrays is located, while SciPy will house file readers and writers for
various data formats (data, audio, video, images, MATLAB, etc.).

Several functions in `scipy.io` were removed in the 0.8.0 release, including:
`npfile`, `save`, `load`, `create_module`, `create_shelf`,
`objload`, `objsave`, `fopen`, `read_array`, `write_array`,
`fread`, `fwrite`, `bswap`, `packbits`, `unpackbits`, and
`convert_objectarray`. Some of these functions have been replaced by NumPy's
raw reading and writing capabilities, memory-mapping capabilities, or array
methods. Others have been moved from SciPy to NumPy, since basic array reading
and writing capability is now handled by NumPy.

@@ -1,180 +0,0 @@
@import "default.css";

/**
 * Spacing fixes
 */

div.body p, div.body dd, div.body li {
    line-height: 125%;
}

ul.simple {
    margin-top: 0;
    margin-bottom: 0;
    padding-top: 0;
    padding-bottom: 0;
}

/* spacing around blockquoted fields in parameters/attributes/returns */
td.field-body > blockquote {
    margin-top: 0.1em;
    margin-bottom: 0.5em;
}

/* spacing around example code */
div.highlight > pre {
    padding: 2px 5px 2px 5px;
}

/* spacing in see also definition lists */
dl.last > dd {
    margin-top: 1px;
    margin-bottom: 5px;
    margin-left: 30px;
}

/**
 * Hide dummy toctrees
 */

ul {
    padding-top: 0;
    padding-bottom: 0;
    margin-top: 0;
    margin-bottom: 0;
}
ul li {
    padding-top: 0;
    padding-bottom: 0;
    margin-top: 0;
    margin-bottom: 0;
}
ul li a.reference {
    padding-top: 0;
    padding-bottom: 0;
    margin-top: 0;
    margin-bottom: 0;
}

/**
 * Make high-level subsections easier to distinguish from top-level ones
 */
div.body h3 {
    background-color: transparent;
}

div.body h4 {
    border: none;
    background-color: transparent;
}

/**
 * Scipy colors
 */

body {
    background-color: rgb(100,135,220);
}

div.document {
    background-color: rgb(230,230,230);
}

div.sphinxsidebar {
    background-color: rgb(230,230,230);
    overflow: hidden;
}

div.related {
    background-color: rgb(100,135,220);
}

div.sphinxsidebar h3 {
    color: rgb(0,102,204);
}

div.sphinxsidebar h3 a {
    color: rgb(0,102,204);
}

div.sphinxsidebar h4 {
    color: rgb(0,82,194);
}

div.sphinxsidebar p {
    color: black;
}

div.sphinxsidebar a {
    color: #355f7c;
}

div.sphinxsidebar ul.want-points {
    list-style: disc;
}

.field-list th {
    color: rgb(0,102,204);
}

/**
 * Extra admonitions
 */

div.tip {
    background-color: #ffffe4;
    border: 1px solid #ee6;
}

div.plot-output {
    clear-after: both;
}

div.plot-output .figure {
    float: left;
    text-align: center;
    margin-bottom: 0;
    padding-bottom: 0;
}

div.plot-output .caption {
    margin-top: 2;
    padding-top: 0;
}

div.plot-output:after {
    content: "";
    display: block;
    height: 0;
    clear: both;
}


/*
div.admonition-example {
    background-color: #e4ffe4;
    border: 1px solid #ccc;
}*/


/**
 * Styling for field lists
 */

table.field-list th {
    border-left: 1px solid #aaa !important;
    padding-left: 5px;
}

table.field-list {
    border-collapse: separate;
    border-spacing: 10px;
}

/**
 * Styling for footnotes
 */

table.footnote td, table.footnote th {
    border: none;
}
Binary file not shown (image, 18 KiB).

@@ -1,23 +0,0 @@
{% extends "!autosummary/class.rst" %}

{% block methods %}
{% if methods %}
   .. HACK
   .. autosummary::
      :toctree:
   {% for item in methods %}
      {{ name }}.{{ item }}
   {%- endfor %}
{% endif %}
{% endblock %}

{% block attributes %}
{% if attributes %}
   .. HACK
   .. autosummary::
      :toctree:
   {% for item in attributes %}
      {{ name }}.{{ item }}
   {%- endfor %}
{% endif %}
{% endblock %}

@@ -1,5 +0,0 @@
<h3>Resources</h3>
<ul>
  <li><a href="http://scipy.org/">Scipy.org website</a></li>
  <li>&nbsp;</li>
</ul>

@@ -1,14 +0,0 @@
{% extends "!layout.html" %}

{% block sidebarsearch %}
{%- if sourcename %}
<ul class="this-page-menu">
{%- if 'generated/' in sourcename %}
  <li><a href="/scipy/docs/{{ sourcename.replace('generated/', '').replace('.txt', '') |e }}">{{_('Edit page')}}</a></li>
{%- else %}
  <li><a href="/scipy/docs/scipy-docs/{{ sourcename.replace('.txt', '.rst') |e }}">{{_('Edit page')}}</a></li>
{%- endif %}
</ul>
{%- endif %}
{{ super() }}
{% endblock %}

@@ -1,10 +0,0 @@
========================================================
Hierarchical clustering (:mod:`scipy.cluster.hierarchy`)
========================================================

.. warning::

   This documentation is work-in-progress and unorganized.

.. automodule:: scipy.cluster.hierarchy
   :members:

@@ -1,10 +0,0 @@
=========================================
Clustering package (:mod:`scipy.cluster`)
=========================================

.. toctree::

   cluster.hierarchy
   cluster.vq

.. automodule:: scipy.cluster

@@ -1,6 +0,0 @@
====================================================================
K-means clustering and vector quantization (:mod:`scipy.cluster.vq`)
====================================================================

.. automodule:: scipy.cluster.vq
   :members:

@@ -1,286 +0,0 @@
# -*- coding: utf-8 -*-

import sys, os, re

# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
sys.path.insert(0, os.path.abspath('../sphinxext'))

# Check Sphinx version
import sphinx
if sphinx.__version__ < "0.5":
    raise RuntimeError("Sphinx 0.5.dev or newer required")


# -----------------------------------------------------------------------------
# General configuration
# -----------------------------------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.pngmath', 'numpydoc',
              'sphinx.ext.intersphinx', 'sphinx.ext.coverage', 'plot_directive']

if sphinx.__version__ >= "0.7":
    extensions.append('sphinx.ext.autosummary')
else:
    extensions.append('autosummary')
    extensions.append('only_directives')


# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General substitutions.
project = 'SciPy'
copyright = '2008-2009, The Scipy community'

# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
import scipy
# The short X.Y version (including the .devXXXX suffix if present)
version = re.sub(r'^(\d+\.\d+)\.\d+(.*)', r'\1\2', scipy.__version__)
if 'dev' in version:
    # retain the .dev suffix, but clean it up
    version = re.sub(r'(\.dev\d*).*?$', r'\1', version)
else:
    # strip all other suffixes
    version = re.sub(r'^(\d+\.\d+).*?$', r'\1', version)
# The full version, including alpha/beta/rc tags.
release = scipy.__version__

print "Scipy (VERSION %s) (RELEASE %s)" % (version, release)

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'

# List of documents that shouldn't be included in the build.
#unused_docs = []

# The reST default role (used for this markup: `text`) to use for all documents.
default_role = "autolink"

# List of directories, relative to source directories, that shouldn't be searched
# for source files.
exclude_dirs = []

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = False

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'


# -----------------------------------------------------------------------------
# HTML output
# -----------------------------------------------------------------------------

# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
html_style = 'scipy.css'

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = "%s v%s Reference Guide (DRAFT)" % (project, version)

# The name of an image file (within the static path) to place at the top of
# the sidebar.
html_logo = '_static/scipyshiny_small.png'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'

# Correct index page
#html_index = "index"

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
html_sidebars = {
    'index': 'indexsidebar.html'
}

# Additional templates that should be rendered to pages, maps page names to
# template names.
html_additional_pages = {}

# If false, no module index is generated.
html_use_modindex = True

# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".html").
html_file_suffix = '.html'

# Output file base name for HTML help builder.
htmlhelp_basename = 'scipy'

# Pngmath should try to align formulas properly
pngmath_use_preview = True


# -----------------------------------------------------------------------------
# LaTeX output
# -----------------------------------------------------------------------------

# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
_stdauthor = 'Written by the SciPy community'
latex_documents = [
    ('index', 'scipy-ref.tex', 'SciPy Reference Guide', _stdauthor, 'manual'),
#    ('user/index', 'scipy-user.tex', 'SciPy User Guide',
#     _stdauthor, 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# Additional stuff for the LaTeX preamble.
latex_preamble = r'''
\usepackage{amsmath}
\DeclareUnicodeCharacter{00A0}{\nobreakspace}

% In the parameters section, place a newline after the Parameters
% header
\usepackage{expdlist}
\let\latexdescription=\description
\def\description{\latexdescription{}{} \breaklabel}

% Make Examples/etc section headers smaller and more compact
\makeatletter
\titleformat{\paragraph}{\normalsize\py@HeaderFamily}%
            {\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor}
\titlespacing*{\paragraph}{0pt}{1ex}{0pt}
\makeatother

% Fix footer/header
\renewcommand{\chaptermark}[1]{\markboth{\MakeUppercase{\thechapter.\ #1}}{}}
\renewcommand{\sectionmark}[1]{\markright{\MakeUppercase{\thesection.\ #1}}}
'''

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
latex_use_modindex = False


# -----------------------------------------------------------------------------
# Intersphinx configuration
# -----------------------------------------------------------------------------
intersphinx_mapping = {
    'http://docs.python.org/dev': None,
    'http://docs.scipy.org/doc/numpy': None,
}


# -----------------------------------------------------------------------------
# Numpy extensions
# -----------------------------------------------------------------------------

# If we want to do a phantom import from an XML file for all autodocs
phantom_import_file = 'dump.xml'

# Edit links
#numpydoc_edit_link = '`Edit </pydocweb/doc/%(full_name)s/>`__'

# -----------------------------------------------------------------------------
# Autosummary
# -----------------------------------------------------------------------------

if sphinx.__version__ >= "0.7":
    import glob
    autosummary_generate = glob.glob("*.rst")

# -----------------------------------------------------------------------------
# Coverage checker
# -----------------------------------------------------------------------------
coverage_ignore_modules = r"""
""".split()
coverage_ignore_functions = r"""
test($|_) (some|all)true bitwise_not cumproduct pkgload
generic\.
""".split()
coverage_ignore_classes = r"""
""".split()

coverage_c_path = []
coverage_c_regexes = {}
coverage_ignore_c_items = {}


#------------------------------------------------------------------------------
# Plot
#------------------------------------------------------------------------------
plot_pre_code = """
import numpy as np
import scipy as sp
np.random.seed(123)
"""
plot_include_source = True
plot_formats = [('png', 100), 'pdf']

import math
phi = (math.sqrt(5) + 1)/2

import matplotlib
matplotlib.rcParams.update({
    'font.size': 8,
    'axes.titlesize': 8,
    'axes.labelsize': 8,
    'xtick.labelsize': 8,
    'ytick.labelsize': 8,
    'legend.fontsize': 8,
    'figure.figsize': (3*phi, 3),
    'figure.subplot.bottom': 0.2,
    'figure.subplot.left': 0.2,
    'figure.subplot.right': 0.9,
    'figure.subplot.top': 0.85,
    'figure.subplot.wspace': 0.4,
    'text.usetex': False,
})

@@ -1,582 +0,0 @@
==================================
Constants (:mod:`scipy.constants`)
==================================

.. module:: scipy.constants

Physical and mathematical constants and units.

Mathematical constants
======================

============ =================================================================
``pi``       Pi
``golden``   Golden ratio
============ =================================================================

Physical constants
==================

============= =================================================================
``c``         speed of light in vacuum
``mu_0``      the magnetic constant :math:`\mu_0`
``epsilon_0`` the electric constant (vacuum permittivity), :math:`\epsilon_0`
``h``         the Planck constant :math:`h`
``hbar``      :math:`\hbar = h/(2\pi)`
``G``         Newtonian constant of gravitation
``g``         standard acceleration of gravity
``e``         elementary charge
``R``         molar gas constant
``alpha``     fine-structure constant
``N_A``       Avogadro constant
``k``         Boltzmann constant
``sigma``     Stefan-Boltzmann constant :math:`\sigma`
``Wien``      Wien displacement law constant
``Rydberg``   Rydberg constant
``m_e``       electron mass
``m_p``       proton mass
``m_n``       neutron mass
============= =================================================================


Constants database
==================

In addition to the above variables containing physical constants,
:mod:`scipy.constants` also contains a database of additional physical
constants.

.. autosummary::
   :toctree: generated/

   value
   unit
   precision
   find

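For example, the database accessors can be used like this (a minimal sketch using the electron mass entry):

```python
from scipy.constants import value, unit, precision, find

mass = value('electron mass')         # value in SI units
u = unit('electron mass')             # unit string, 'kg'
rel = precision('electron mass')      # relative uncertainty
keys = find('electron mass', disp=False)  # matching database keys
```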
.. data:: physical_constants

   Dictionary of physical constants, of the format
   ``physical_constants[name] = (value, unit, uncertainty)``.

   Available constants:

   ====================================================================== ====
   ``alpha particle mass``
   ``alpha particle mass energy equivalent``
   ``alpha particle mass energy equivalent in MeV``
   ``alpha particle mass in u``
   ``alpha particle molar mass``
   ``alpha particle-electron mass ratio``
   ``alpha particle-proton mass ratio``
   ``Angstrom star``
   ``atomic mass constant``
   ``atomic mass constant energy equivalent``
   ``atomic mass constant energy equivalent in MeV``
   ``atomic mass unit-electron volt relationship``
   ``atomic mass unit-hartree relationship``
   ``atomic mass unit-hertz relationship``
   ``atomic mass unit-inverse meter relationship``
   ``atomic mass unit-joule relationship``
   ``atomic mass unit-kelvin relationship``
   ``atomic mass unit-kilogram relationship``
   ``atomic unit of 1st hyperpolarizablity``
   ``atomic unit of 2nd hyperpolarizablity``
   ``atomic unit of action``
   ``atomic unit of charge``
   ``atomic unit of charge density``
   ``atomic unit of current``
   ``atomic unit of electric dipole moment``
   ``atomic unit of electric field``
   ``atomic unit of electric field gradient``
   ``atomic unit of electric polarizablity``
   ``atomic unit of electric potential``
   ``atomic unit of electric quadrupole moment``
   ``atomic unit of energy``
   ``atomic unit of force``
   ``atomic unit of length``
   ``atomic unit of magnetic dipole moment``
   ``atomic unit of magnetic flux density``
   ``atomic unit of magnetizability``
   ``atomic unit of mass``
   ``atomic unit of momentum``
   ``atomic unit of permittivity``
   ``atomic unit of time``
   ``atomic unit of velocity``
   ``Avogadro constant``
   ``Bohr magneton``
   ``Bohr magneton in eV/T``
   ``Bohr magneton in Hz/T``
   ``Bohr magneton in inverse meters per tesla``
   ``Bohr magneton in K/T``
   ``Bohr radius``
   ``Boltzmann constant``
   ``Boltzmann constant in eV/K``
   ``Boltzmann constant in Hz/K``
   ``Boltzmann constant in inverse meters per kelvin``
   ``characteristic impedance of vacuum``
   ``classical electron radius``
   ``Compton wavelength``
   ``Compton wavelength over 2 pi``
   ``conductance quantum``
   ``conventional value of Josephson constant``
   ``conventional value of von Klitzing constant``
   ``Cu x unit``
   ``deuteron magnetic moment``
   ``deuteron magnetic moment to Bohr magneton ratio``
   ``deuteron magnetic moment to nuclear magneton ratio``
   ``deuteron mass``
   ``deuteron mass energy equivalent``
   ``deuteron mass energy equivalent in MeV``
   ``deuteron mass in u``
   ``deuteron molar mass``
   ``deuteron rms charge radius``
   ``deuteron-electron magnetic moment ratio``
   ``deuteron-electron mass ratio``
   ``deuteron-neutron magnetic moment ratio``
   ``deuteron-proton magnetic moment ratio``
   ``deuteron-proton mass ratio``
   ``electric constant``
   ``electron charge to mass quotient``
   ``electron g factor``
   ``electron gyromagnetic ratio``
   ``electron gyromagnetic ratio over 2 pi``
   ``electron magnetic moment``
   ``electron magnetic moment anomaly``
   ``electron magnetic moment to Bohr magneton ratio``
   ``electron magnetic moment to nuclear magneton ratio``
   ``electron mass``
   ``electron mass energy equivalent``
   ``electron mass energy equivalent in MeV``
   ``electron mass in u``
   ``electron molar mass``
   ``electron to alpha particle mass ratio``
   ``electron to shielded helion magnetic moment ratio``
   ``electron to shielded proton magnetic moment ratio``
   ``electron volt``
   ``electron volt-atomic mass unit relationship``
   ``electron volt-hartree relationship``
   ``electron volt-hertz relationship``
   ``electron volt-inverse meter relationship``
   ``electron volt-joule relationship``
   ``electron volt-kelvin relationship``
   ``electron volt-kilogram relationship``
   ``electron-deuteron magnetic moment ratio``
   ``electron-deuteron mass ratio``
   ``electron-muon magnetic moment ratio``
   ``electron-muon mass ratio``
   ``electron-neutron magnetic moment ratio``
   ``electron-neutron mass ratio``
   ``electron-proton magnetic moment ratio``
   ``electron-proton mass ratio``
   ``electron-tau mass ratio``
   ``elementary charge``
   ``elementary charge over h``
   ``Faraday constant``
   ``Faraday constant for conventional electric current``
   ``Fermi coupling constant``
   ``fine-structure constant``
   ``first radiation constant``
   ``first radiation constant for spectral radiance``
   ``Hartree energy``
   ``Hartree energy in eV``
   ``hartree-atomic mass unit relationship``
   ``hartree-electron volt relationship``
   ``hartree-hertz relationship``
   ``hartree-inverse meter relationship``
   ``hartree-joule relationship``
   ``hartree-kelvin relationship``
   ``hartree-kilogram relationship``
   ``helion mass``
   ``helion mass energy equivalent``
   ``helion mass energy equivalent in MeV``
   ``helion mass in u``
   ``helion molar mass``
   ``helion-electron mass ratio``
   ``helion-proton mass ratio``
   ``hertz-atomic mass unit relationship``
   ``hertz-electron volt relationship``
   ``hertz-hartree relationship``
   ``hertz-inverse meter relationship``
   ``hertz-joule relationship``
   ``hertz-kelvin relationship``
   ``hertz-kilogram relationship``
   ``inverse fine-structure constant``
   ``inverse meter-atomic mass unit relationship``
   ``inverse meter-electron volt relationship``
   ``inverse meter-hartree relationship``
   ``inverse meter-hertz relationship``
   ``inverse meter-joule relationship``
   ``inverse meter-kelvin relationship``
   ``inverse meter-kilogram relationship``
   ``inverse of conductance quantum``
   ``Josephson constant``
   ``joule-atomic mass unit relationship``
   ``joule-electron volt relationship``
   ``joule-hartree relationship``
   ``joule-hertz relationship``
   ``joule-inverse meter relationship``
   ``joule-kelvin relationship``
   ``joule-kilogram relationship``
   ``kelvin-atomic mass unit relationship``
   ``kelvin-electron volt relationship``
   ``kelvin-hartree relationship``
   ``kelvin-hertz relationship``
   ``kelvin-inverse meter relationship``
   ``kelvin-joule relationship``
   ``kelvin-kilogram relationship``
   ``kilogram-atomic mass unit relationship``
   ``kilogram-electron volt relationship``
   ``kilogram-hartree relationship``
   ``kilogram-hertz relationship``
   ``kilogram-inverse meter relationship``
   ``kilogram-joule relationship``
   ``kilogram-kelvin relationship``
   ``lattice parameter of silicon``
   ``Loschmidt constant (273.15 K, 101.325 kPa)``
   ``magnetic constant``
   ``magnetic flux quantum``
   ``Mo x unit``
   ``molar gas constant``
   ``molar mass constant``
   ``molar mass of carbon-12``
   ``molar Planck constant``
   ``molar Planck constant times c``
   ``molar volume of ideal gas (273.15 K, 100 kPa)``
   ``molar volume of ideal gas (273.15 K, 101.325 kPa)``
   ``molar volume of silicon``
   ``muon Compton wavelength``
   ``muon Compton wavelength over 2 pi``
   ``muon g factor``
   ``muon magnetic moment``
   ``muon magnetic moment anomaly``
   ``muon magnetic moment to Bohr magneton ratio``
   ``muon magnetic moment to nuclear magneton ratio``
   ``muon mass``
   ``muon mass energy equivalent``
   ``muon mass energy equivalent in MeV``
   ``muon mass in u``
   ``muon molar mass``
   ``muon-electron mass ratio``
   ``muon-neutron mass ratio``
   ``muon-proton magnetic moment ratio``
   ``muon-proton mass ratio``
   ``muon-tau mass ratio``
   ``natural unit of action``
   ``natural unit of action in eV s``
   ``natural unit of energy``
   ``natural unit of energy in MeV``
   ``natural unit of length``
   ``natural unit of mass``
   ``natural unit of momentum``
   ``natural unit of momentum in MeV/c``
   ``natural unit of time``
   ``natural unit of velocity``
   ``neutron Compton wavelength``
   ``neutron Compton wavelength over 2 pi``
   ``neutron g factor``
   ``neutron gyromagnetic ratio``
   ``neutron gyromagnetic ratio over 2 pi``
   ``neutron magnetic moment``
   ``neutron magnetic moment to Bohr magneton ratio``
   ``neutron magnetic moment to nuclear magneton ratio``
   ``neutron mass``
   ``neutron mass energy equivalent``
   ``neutron mass energy equivalent in MeV``
   ``neutron mass in u``
   ``neutron molar mass``
   ``neutron to shielded proton magnetic moment ratio``
   ``neutron-electron magnetic moment ratio``
   ``neutron-electron mass ratio``
   ``neutron-muon mass ratio``
   ``neutron-proton magnetic moment ratio``
   ``neutron-proton mass ratio``
   ``neutron-tau mass ratio``
   ``Newtonian constant of gravitation``
   ``Newtonian constant of gravitation over h-bar c``
   ``nuclear magneton``
   ``nuclear magneton in eV/T``
   ``nuclear magneton in inverse meters per tesla``
   ``nuclear magneton in K/T``
   ``nuclear magneton in MHz/T``
   ``Planck constant``
   ``Planck constant in eV s``
   ``Planck constant over 2 pi``
   ``Planck constant over 2 pi in eV s``
   ``Planck constant over 2 pi times c in MeV fm``
   ``Planck length``
   ``Planck mass``
   ``Planck temperature``
   ``Planck time``
   ``proton charge to mass quotient``
   ``proton Compton wavelength``
   ``proton Compton wavelength over 2 pi``
   ``proton g factor``
   ``proton gyromagnetic ratio``
   ``proton gyromagnetic ratio over 2 pi``
   ``proton magnetic moment``
   ``proton magnetic moment to Bohr magneton ratio``
   ``proton magnetic moment to nuclear magneton ratio``
   ``proton magnetic shielding correction``
   ``proton mass``
   ``proton mass energy equivalent``
   ``proton mass energy equivalent in MeV``
   ``proton mass in u``
   ``proton molar mass``
   ``proton rms charge radius``
   ``proton-electron mass ratio``
   ``proton-muon mass ratio``
   ``proton-neutron magnetic moment ratio``
   ``proton-neutron mass ratio``
|
||||
``proton-tau mass ratio``
|
||||
``quantum of circulation``
|
||||
``quantum of circulation times 2``
|
||||
``Rydberg constant``
|
||||
``Rydberg constant times c in Hz``
|
||||
``Rydberg constant times hc in eV``
|
||||
``Rydberg constant times hc in J``
|
||||
``Sackur-Tetrode constant (1 K, 100 kPa)``
|
||||
``Sackur-Tetrode constant (1 K, 101.325 kPa)``
|
||||
``second radiation constant``
|
||||
``shielded helion gyromagnetic ratio``
|
||||
``shielded helion gyromagnetic ratio over 2 pi``
|
||||
``shielded helion magnetic moment``
|
||||
``shielded helion magnetic moment to Bohr magneton ratio``
|
||||
``shielded helion magnetic moment to nuclear magneton ratio``
|
||||
``shielded helion to proton magnetic moment ratio``
|
||||
``shielded helion to shielded proton magnetic moment ratio``
|
||||
``shielded proton gyromagnetic ratio``
|
||||
``shielded proton gyromagnetic ratio over 2 pi``
|
||||
``shielded proton magnetic moment``
|
||||
``shielded proton magnetic moment to Bohr magneton ratio``
|
||||
``shielded proton magnetic moment to nuclear magneton ratio``
|
||||
``speed of light in vacuum``
|
||||
``standard acceleration of gravity``
|
||||
``standard atmosphere``
|
||||
``Stefan-Boltzmann constant``
|
||||
``tau Compton wavelength``
|
||||
``tau Compton wavelength over 2 pi``
|
||||
``tau mass``
|
||||
``tau mass energy equivalent``
|
||||
``tau mass energy equivalent in MeV``
|
||||
``tau mass in u``
|
||||
``tau molar mass``
|
||||
``tau-electron mass ratio``
|
||||
``tau-muon mass ratio``
|
||||
``tau-neutron mass ratio``
|
||||
``tau-proton mass ratio``
|
||||
``Thomson cross section``
|
||||
``unified atomic mass unit``
|
||||
``von Klitzing constant``
|
||||
``weak mixing angle``
|
||||
``Wien displacement law constant``
|
||||
``{220} lattice spacing of silicon``
|
||||
====================================================================== ====
|
||||
|
||||
|
||||
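Each key in the table above indexes ``scipy.constants.physical_constants``, a dictionary mapping the CODATA name to a ``(value, unit, uncertainty)`` tuple; the ``value``, ``unit`` and ``precision`` functions look single fields up by key. A minimal sketch:

```python
from scipy import constants

# Look up one entry of the CODATA table by its key string.
val = constants.value('speed of light in vacuum')      # 299792458.0
unit = constants.unit('speed of light in vacuum')      # 'm s^-1'
entry = constants.physical_constants['speed of light in vacuum']
```

The full ``(value, unit, uncertainty)`` tuple is available from the dictionary itself, as in the last line.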
Unit prefixes
=============

SI
--

============ =================================================================
``yotta``    :math:`10^{24}`
``zetta``    :math:`10^{21}`
``exa``      :math:`10^{18}`
``peta``     :math:`10^{15}`
``tera``     :math:`10^{12}`
``giga``     :math:`10^{9}`
``mega``     :math:`10^{6}`
``kilo``     :math:`10^{3}`
``hecto``    :math:`10^{2}`
``deka``     :math:`10^{1}`
``deci``     :math:`10^{-1}`
``centi``    :math:`10^{-2}`
``milli``    :math:`10^{-3}`
``micro``    :math:`10^{-6}`
``nano``     :math:`10^{-9}`
``pico``     :math:`10^{-12}`
``femto``    :math:`10^{-15}`
``atto``     :math:`10^{-18}`
``zepto``    :math:`10^{-21}`
============ =================================================================


Binary
------

============ =================================================================
``kibi``     :math:`2^{10}`
``mebi``     :math:`2^{20}`
``gibi``     :math:`2^{30}`
``tebi``     :math:`2^{40}`
``pebi``     :math:`2^{50}`
``exbi``     :math:`2^{60}`
``zebi``     :math:`2^{70}`
``yobi``     :math:`2^{80}`
============ =================================================================

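The prefixes above are exposed as plain floats in ``scipy.constants``, so unit conversion is just multiplication. For example:

```python
from scipy import constants

# SI and binary prefixes are module-level float attributes.
distance_m = 6.5 * constants.kilo      # 6.5 km expressed in meters
buffer_bytes = 4 * constants.mebi      # 4 MiB expressed in bytes
```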
Units
=====

Weight
------

================= ============================================================
``gram``          :math:`10^{-3}` kg
``metric_ton``    :math:`10^{3}` kg
``grain``         one grain in kg
``lb``            one pound (avoirdupois) in kg
``oz``            one ounce in kg
``stone``         one stone in kg
``long_ton``      one long ton in kg
``short_ton``     one short ton in kg
``troy_ounce``    one Troy ounce in kg
``troy_pound``    one Troy pound in kg
``carat``         one carat in kg
``m_u``           atomic mass constant (in kg)
================= ============================================================

Angle
-----

================= ============================================================
``degree``        degree in radians
``arcmin``        arc minute in radians
``arcsec``        arc second in radians
================= ============================================================


Time
----

================= ============================================================
``minute``        one minute in seconds
``hour``          one hour in seconds
``day``           one day in seconds
``week``          one week in seconds
``year``          one year (365 days) in seconds
``Julian_year``   one Julian year (365.25 days) in seconds
================= ============================================================

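Every unit listed here is a float giving its magnitude in SI base units, so quantities convert to SI by multiplication. For example:

```python
from scipy import constants

# Units are floats in SI terms: lb is in kilograms, hour in seconds.
mass_kg = 5 * constants.lb        # five pounds, expressed in kilograms
flight_s = 2 * constants.hour     # two hours, expressed in seconds
```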
Length
------

================= ============================================================
``inch``          one inch in meters
``foot``          one foot in meters
``yard``          one yard in meters
``mile``          one mile in meters
``mil``           one mil in meters
``pt``            one point in meters
``survey_foot``   one survey foot in meters
``survey_mile``   one survey mile in meters
``nautical_mile`` one nautical mile in meters
``fermi``         one Fermi in meters
``angstrom``      one Ångström in meters
``micron``        one micron in meters
``au``            one astronomical unit in meters
``light_year``    one light year in meters
``parsec``        one parsec in meters
================= ============================================================

Pressure
--------

================= ============================================================
``atm``           standard atmosphere in pascals
``bar``           one bar in pascals
``torr``          one torr (mmHg) in pascals
``psi``           one psi in pascals
================= ============================================================

Area
----

================= ============================================================
``hectare``       one hectare in square meters
``acre``          one acre in square meters
================= ============================================================


Volume
------

=================== ========================================================
``liter``           one liter in cubic meters
``gallon``          one gallon (US) in cubic meters
``gallon_imp``      one gallon (UK) in cubic meters
``fluid_ounce``     one fluid ounce (US) in cubic meters
``fluid_ounce_imp`` one fluid ounce (UK) in cubic meters
``bbl``             one barrel in cubic meters
=================== ========================================================

Speed
-----

================= ==========================================================
``kmh``           kilometers per hour in meters per second
``mph``           miles per hour in meters per second
``mach``          one Mach (approx., at 15 °C, 1 atm) in meters per second
``knot``          one knot in meters per second
================= ==========================================================

Temperature
-----------

===================== =======================================================
``zero_Celsius``      zero of Celsius scale in Kelvin
``degree_Fahrenheit`` one Fahrenheit (only differences) in Kelvins
===================== =======================================================

.. autosummary::
   :toctree: generated/

   C2K
   K2C
   F2C
   C2F
   F2K
   K2F

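The conversion helpers above are thin wrappers around the ``zero_Celsius`` constant; for instance, ``C2K(C)`` amounts to ``C + zero_Celsius``. A sketch of that relationship (not scipy's own implementation):

```python
from scipy.constants import zero_Celsius

def c2k(temp_c):
    """Celsius -> Kelvin, mirroring what C2K computes."""
    return temp_c + zero_Celsius

boiling_K = c2k(100.0)   # water's boiling point, ~373.15 K
```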
Energy
------

==================== =======================================================
``eV``               one electron volt in Joules
``calorie``          one calorie (thermochemical) in Joules
``calorie_IT``       one calorie (International Steam Table calorie, 1956)
                     in Joules
``erg``              one erg in Joules
``Btu``              one British thermal unit (International Steam Table)
                     in Joules
``Btu_th``           one British thermal unit (thermochemical) in Joules
``ton_TNT``          one ton of TNT in Joules
==================== =======================================================

Power
-----

==================== =======================================================
``hp``               one horsepower in watts
==================== =======================================================

Force
-----

==================== =======================================================
``dyn``              one dyne in newtons
``lbf``              one pound force in newtons
``kgf``              one kilogram force in newtons
==================== =======================================================

Optics
------

.. autosummary::
   :toctree: generated/

   lambda2nu
   nu2lambda

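``lambda2nu`` and ``nu2lambda`` convert between wavelength (meters) and frequency (hertz) via :math:`\nu = c / \lambda`. For example:

```python
from scipy.constants import lambda2nu, nu2lambda

# nu = c / lambda: frequency of 500 nm (green) light, in Hz.
nu = lambda2nu(500e-9)
wavelength = nu2lambda(nu)   # round trip back to ~500e-9 m
```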
@ -1,77 +0,0 @@
Fourier transforms (:mod:`scipy.fftpack`)
=========================================

.. module:: scipy.fftpack

Fast Fourier transforms
-----------------------

.. autosummary::
   :toctree: generated/

   fft
   ifft
   fftn
   ifftn
   fft2
   ifft2
   rfft
   irfft

Differential and pseudo-differential operators
----------------------------------------------

.. autosummary::
   :toctree: generated/

   diff
   tilbert
   itilbert
   hilbert
   ihilbert
   cs_diff
   sc_diff
   ss_diff
   cc_diff
   shift

Helper functions
----------------

.. autosummary::
   :toctree: generated/

   fftshift
   ifftshift
   fftfreq
   rfftfreq

Convolutions (:mod:`scipy.fftpack.convolve`)
--------------------------------------------

.. module:: scipy.fftpack.convolve

.. autosummary::
   :toctree: generated/

   convolve
   convolve_z
   init_convolution_kernel
   destroy_convolve_cache


Other (:mod:`scipy.fftpack._fftpack`)
-------------------------------------

.. module:: scipy.fftpack._fftpack

.. autosummary::
   :toctree: generated/

   drfft
   zfft
   zrfft
   zfftnd
   destroy_drfft_cache
   destroy_zfft_cache
   destroy_zfftnd_cache
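As a quick sanity check on the transform pair, ``ifft(fft(x))`` recovers the input (up to floating-point round-off). A minimal sketch with an arbitrary signal:

```python
import numpy as np
from scipy.fftpack import fft, ifft

x = np.array([1.0, 2.0, 1.0, -1.0, 1.5])
y = fft(x)         # complex frequency-domain representation
x_back = ifft(y)   # round trip recovers the original signal
```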
@ -1,44 +0,0 @@
SciPy
=====

:Release: |version|
:Date: |today|

SciPy (pronounced "Sigh Pie") is open-source software for mathematics,
science, and engineering.

.. toctree::
   :maxdepth: 2

   tutorial/index

.. toctree::
   :maxdepth: 1

   release

Reference
---------

.. toctree::
   :maxdepth: 1

   cluster
   constants
   fftpack
   integrate
   interpolate
   io
   linalg
   maxentropy
   misc
   ndimage
   odr
   optimize
   signal
   sparse
   sparse.linalg
   spatial
   special
   stats
   weave
@ -1,44 +0,0 @@
=============================================
Integration and ODEs (:mod:`scipy.integrate`)
=============================================

.. module:: scipy.integrate


Integrating functions, given function object
============================================

.. autosummary::
   :toctree: generated/

   quad
   dblquad
   tplquad
   fixed_quad
   quadrature
   romberg

Integrating functions, given fixed samples
==========================================

.. autosummary::
   :toctree: generated/

   trapz
   cumtrapz
   simps
   romb

.. seealso::

   :mod:`scipy.special` for orthogonal polynomials, and for Gaussian
   quadrature roots and weights for other weighting factors and regions.

Integrators of ODE systems
==========================

.. autosummary::
   :toctree: generated/

   odeint
   ode
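For a function object, ``quad`` is the usual entry point; it returns the integral estimate together with an estimate of the absolute error. For example:

```python
from scipy.integrate import quad

# Integrate x**2 from 0 to 1; the exact answer is 1/3.
result, abserr = quad(lambda x: x ** 2, 0, 1)
```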
@ -1,100 +0,0 @@
========================================
Interpolation (:mod:`scipy.interpolate`)
========================================

.. module:: scipy.interpolate

Univariate interpolation
========================

.. autosummary::
   :toctree: generated/

   interp1d
   BarycentricInterpolator
   KroghInterpolator
   PiecewisePolynomial
   barycentric_interpolate
   krogh_interpolate
   piecewise_polynomial_interpolate


Multivariate interpolation
==========================

.. autosummary::
   :toctree: generated/

   interp2d
   Rbf


1-D Splines
===========

.. autosummary::
   :toctree: generated/

   UnivariateSpline
   InterpolatedUnivariateSpline
   LSQUnivariateSpline

The above univariate spline classes have the following methods:

.. autosummary::
   :toctree: generated/

   UnivariateSpline.__call__
   UnivariateSpline.derivatives
   UnivariateSpline.integral
   UnivariateSpline.roots
   UnivariateSpline.get_coeffs
   UnivariateSpline.get_knots
   UnivariateSpline.get_residual
   UnivariateSpline.set_smoothing_factor


Low-level interface to FITPACK functions:

.. autosummary::
   :toctree: generated/

   splrep
   splprep
   splev
   splint
   sproot
   spalde
   bisplrep
   bisplev


2-D Splines
===========

.. seealso:: scipy.ndimage.map_coordinates

.. autosummary::
   :toctree: generated/

   BivariateSpline
   SmoothBivariateSpline
   LSQBivariateSpline

Low-level interface to FITPACK functions:

.. autosummary::
   :toctree: generated/

   bisplrep
   bisplev

Additional tools
================

.. autosummary::
   :toctree: generated/

   lagrange
   approximate_taylor_polynomial
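``interp1d`` is the simplest entry point here: it builds a callable from sample points, interpolating linearly by default. A minimal sketch with arbitrary data:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 4.0])
f = interp1d(x, y)        # linear interpolation by default
mid = float(f(1.5))       # halfway between y=1 and y=4, i.e. 2.5
```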
@ -1,67 +0,0 @@
==================================
Input and output (:mod:`scipy.io`)
==================================

.. seealso:: :ref:`numpy-reference.routines.io` (in Numpy)

.. module:: scipy.io

MATLAB® files
=============

.. autosummary::
   :toctree: generated/

   loadmat
   savemat

Matrix Market files
===================

.. autosummary::
   :toctree: generated/

   mminfo
   mmread
   mmwrite

Other
=====

.. autosummary::
   :toctree: generated/

   save_as_module
   npfile

Wav sound files (:mod:`scipy.io.wavfile`)
=========================================

.. module:: scipy.io.wavfile

.. autosummary::
   :toctree: generated/

   read
   write

Arff files (:mod:`scipy.io.arff`)
=================================

.. automodule:: scipy.io.arff

.. autosummary::
   :toctree: generated/

   loadarff

Netcdf (:mod:`scipy.io.netcdf`)
===============================

.. module:: scipy.io.netcdf

.. autosummary::
   :toctree: generated/

   netcdf_file
   netcdf_variable
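``savemat`` and ``loadmat`` round-trip named arrays through the MATLAB file format; both accept a filename or an open file-like object. A sketch using an in-memory buffer:

```python
import io
import numpy as np
from scipy.io import savemat, loadmat

buf = io.BytesIO()
savemat(buf, {'a': np.arange(6).reshape(2, 3)})   # write a named array
buf.seek(0)
contents = loadmat(buf)   # dict of variables (plus file-header keys)
```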
@ -1,95 +0,0 @@
====================================
Linear algebra (:mod:`scipy.linalg`)
====================================

.. module:: scipy.linalg

Basics
======

.. autosummary::
   :toctree: generated/

   inv
   solve
   solve_banded
   solveh_banded
   det
   norm
   lstsq
   pinv
   pinv2

Eigenvalue Problem
==================

.. autosummary::
   :toctree: generated/

   eig
   eigvals
   eigh
   eigvalsh
   eig_banded
   eigvals_banded

Decompositions
==============

.. autosummary::
   :toctree: generated/

   lu
   lu_factor
   lu_solve
   svd
   svdvals
   diagsvd
   orth
   cholesky
   cholesky_banded
   cho_factor
   cho_solve
   cho_solve_banded
   qr
   schur
   rsf2csf
   hessenberg

Matrix Functions
================

.. autosummary::
   :toctree: generated/

   expm
   expm2
   expm3
   logm
   cosm
   sinm
   tanm
   coshm
   sinhm
   tanhm
   signm
   sqrtm
   funm

Special Matrices
================

.. autosummary::
   :toctree: generated/

   block_diag
   circulant
   companion
   hadamard
   hankel
   kron
   leslie
   toeplitz
   tri
   tril
   triu
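``solve`` from the Basics group above solves a general linear system ``A x = b`` directly, without forming the inverse. For example (arbitrary small system):

```python
import numpy as np
from scipy.linalg import solve

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = solve(A, b)   # solves A @ x = b; here x = [2, 3]
```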
@ -1,88 +0,0 @@
================================================
Maximum entropy models (:mod:`scipy.maxentropy`)
================================================

.. automodule:: scipy.maxentropy

Models
======

.. autoclass:: scipy.maxentropy.basemodel

.. autosummary::
   :toctree: generated/

   basemodel.beginlogging
   basemodel.endlogging
   basemodel.clearcache
   basemodel.crossentropy
   basemodel.dual
   basemodel.fit
   basemodel.grad
   basemodel.log
   basemodel.logparams
   basemodel.normconst
   basemodel.reset
   basemodel.setcallback
   basemodel.setparams
   basemodel.setsmooth

.. autoclass:: scipy.maxentropy.model

.. autosummary::
   :toctree: generated/

   model.expectations
   model.lognormconst
   model.logpmf
   model.pmf_function
   model.setfeaturesandsamplespace

.. autoclass:: scipy.maxentropy.bigmodel

.. autosummary::
   :toctree: generated/

   bigmodel.estimate
   bigmodel.logpdf
   bigmodel.pdf
   bigmodel.pdf_function
   bigmodel.resample
   bigmodel.setsampleFgen
   bigmodel.settestsamples
   bigmodel.stochapprox
   bigmodel.test

.. autoclass:: scipy.maxentropy.conditionalmodel

.. autosummary::
   :toctree: generated/

   conditionalmodel.dual
   conditionalmodel.expectations
   conditionalmodel.fit
   conditionalmodel.lognormconst
   conditionalmodel.logpmf

Utilities
=========

.. autosummary::
   :toctree: generated/

   arrayexp
   arrayexpcomplex
   columnmeans
   columnvariances
   densefeaturematrix
   densefeatures
   dotprod
   flatten
   innerprod
   innerprodtranspose
   logsumexp
   logsumexp_naive
   robustlog
   rowmeans
   sample_wr
   sparsefeaturematrix
   sparsefeatures
@ -1,10 +0,0 @@
==========================================
Miscellaneous routines (:mod:`scipy.misc`)
==========================================

.. warning::

   This documentation is work-in-progress and unorganized.

.. automodule:: scipy.misc
   :members:
@ -1,122 +0,0 @@
=========================================================
Multi-dimensional image processing (:mod:`scipy.ndimage`)
=========================================================

.. module:: scipy.ndimage

Functions for multi-dimensional image processing.

Filters :mod:`scipy.ndimage.filters`
====================================

.. module:: scipy.ndimage.filters

.. autosummary::
   :toctree: generated/

   convolve
   convolve1d
   correlate
   correlate1d
   gaussian_filter
   gaussian_filter1d
   gaussian_gradient_magnitude
   gaussian_laplace
   generic_filter
   generic_filter1d
   generic_gradient_magnitude
   generic_laplace
   laplace
   maximum_filter
   maximum_filter1d
   median_filter
   minimum_filter
   minimum_filter1d
   percentile_filter
   prewitt
   rank_filter
   sobel
   uniform_filter
   uniform_filter1d

Fourier filters :mod:`scipy.ndimage.fourier`
============================================

.. module:: scipy.ndimage.fourier

.. autosummary::
   :toctree: generated/

   fourier_ellipsoid
   fourier_gaussian
   fourier_shift
   fourier_uniform

Interpolation :mod:`scipy.ndimage.interpolation`
================================================

.. module:: scipy.ndimage.interpolation

.. autosummary::
   :toctree: generated/

   affine_transform
   geometric_transform
   map_coordinates
   rotate
   shift
   spline_filter
   spline_filter1d
   zoom

Measurements :mod:`scipy.ndimage.measurements`
==============================================

.. module:: scipy.ndimage.measurements

.. autosummary::
   :toctree: generated/

   center_of_mass
   extrema
   find_objects
   histogram
   label
   maximum
   maximum_position
   mean
   minimum
   minimum_position
   standard_deviation
   sum
   variance
   watershed_ift

Morphology :mod:`scipy.ndimage.morphology`
==========================================

.. module:: scipy.ndimage.morphology

.. autosummary::
   :toctree: generated/

   binary_closing
   binary_dilation
   binary_erosion
   binary_fill_holes
   binary_hit_or_miss
   binary_opening
   binary_propagation
   black_tophat
   distance_transform_bf
   distance_transform_cdt
   distance_transform_edt
   generate_binary_structure
   grey_closing
   grey_dilation
   grey_erosion
   grey_opening
   iterate_structure
   morphological_gradient
   morphological_laplace
   white_tophat
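``label`` from the measurements group illustrates how these routines operate on whole arrays at once: it finds connected regions of nonzero values and numbers them. A minimal 1-D sketch:

```python
import numpy as np
from scipy import ndimage

# Three separate runs of ones -> three labeled features.
a = np.array([0, 1, 1, 0, 0, 1, 0, 1, 1, 1])
labels, num_features = ndimage.label(a)
```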
@ -1,33 +0,0 @@
=================================================
Orthogonal distance regression (:mod:`scipy.odr`)
=================================================

.. automodule:: scipy.odr

.. autoclass:: Data

   .. automethod:: set_meta

.. autoclass:: Model

   .. automethod:: set_meta

.. autoclass:: ODR

   .. automethod:: restart

   .. automethod:: run

   .. automethod:: set_iprint

   .. automethod:: set_job

.. autoclass:: Output

   .. automethod:: pprint

.. autoexception:: odr_error

.. autoexception:: odr_stop

.. autofunction:: odr
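The classes above compose in a fixed pattern: wrap the observations in ``Data``, the model function in ``Model``, hand both to ``ODR`` with a starting guess, and call ``run``. A sketch fitting a straight line to exact (noise-free) data:

```python
import numpy as np
from scipy.odr import Data, Model, ODR

x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0                        # exact line, no noise

def linear(beta, x):
    # Model functions take the parameter vector first, then x.
    return beta[0] * x + beta[1]

output = ODR(Data(x, y), Model(linear), beta0=[1.0, 0.0]).run()
slope, intercept = output.beta           # fitted parameters
```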
@ -1,111 +0,0 @@
=====================================================
Optimization and root finding (:mod:`scipy.optimize`)
=====================================================

.. module:: scipy.optimize

Optimization
============

General-purpose
---------------

.. autosummary::
   :toctree: generated/

   fmin
   fmin_powell
   fmin_cg
   fmin_bfgs
   fmin_ncg
   leastsq


Constrained (multivariate)
--------------------------

.. autosummary::
   :toctree: generated/

   fmin_l_bfgs_b
   fmin_tnc
   fmin_cobyla
   fmin_slsqp
   nnls

Global
------

.. autosummary::
   :toctree: generated/

   anneal
   brute

Scalar function minimizers
--------------------------

.. autosummary::
   :toctree: generated/

   fminbound
   golden
   bracket
   brent

Fitting
=======

.. autosummary::
   :toctree: generated/

   curve_fit

Root finding
============

.. autosummary::
   :toctree: generated/

   fsolve

Scalar function solvers
-----------------------

.. autosummary::
   :toctree: generated/

   brentq
   brenth
   ridder
   bisect
   newton

Fixed point finding:

.. autosummary::
   :toctree: generated/

   fixed_point

General-purpose nonlinear (multidimensional)
--------------------------------------------

.. autosummary::
   :toctree: generated/

   broyden1
   broyden2
   broyden3
   broyden_generalized
   anderson
   anderson2

Utility Functions
=================

.. autosummary::
   :toctree: generated/

   line_search
   check_grad
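Among the scalar solvers, ``brentq`` is the usual first choice: given a function and an interval on which it changes sign, it returns the bracketed root. For example:

```python
from scipy.optimize import brentq

# f(x) = x**2 - 2 changes sign on [0, 2]; the root is sqrt(2).
root = brentq(lambda x: x ** 2 - 2.0, 0.0, 2.0)
```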
@ -1,5 +0,0 @@
*************
Release Notes
*************

.. include:: ../release/0.8.0-notes.rst
@ -1,168 +0,0 @@
=======================================
Signal processing (:mod:`scipy.signal`)
=======================================

.. module:: scipy.signal

Convolution
===========

.. autosummary::
   :toctree: generated/

   convolve
   correlate
   fftconvolve
   convolve2d
   correlate2d
   sepfir2d

B-splines
=========

.. autosummary::
   :toctree: generated/

   bspline
   gauss_spline
   cspline1d
   qspline1d
   cspline2d
   qspline2d
   spline_filter

Filtering
=========

.. autosummary::
   :toctree: generated/

   order_filter
   medfilt
   medfilt2d
   wiener

   symiirorder1
   symiirorder2
   lfilter
   lfiltic

   deconvolve

   hilbert
   get_window

   decimate
   detrend
   resample

Filter design
=============

.. autosummary::
   :toctree: generated/

   bilinear
   firwin
   freqs
   freqz
   iirdesign
   iirfilter
   kaiserord
   remez

   unique_roots
   residue
   residuez
   invres

Matlab-style IIR filter design
==============================

.. autosummary::
   :toctree: generated/

   butter
   buttord
   cheby1
   cheb1ord
   cheby2
   cheb2ord
   ellip
   ellipord
   bessel

Linear Systems
==============

.. autosummary::
   :toctree: generated/

   lti
   lsim
   lsim2
   impulse
   impulse2
   step
   step2

LTI Representations
===================

.. autosummary::
   :toctree: generated/

   tf2zpk
   zpk2tf
   tf2ss
   ss2tf
   zpk2ss
   ss2zpk

Waveforms
=========

.. autosummary::
   :toctree: generated/

   chirp
   gausspulse
   sawtooth
   square
   sweep_poly

Window functions
================

.. autosummary::
   :toctree: generated/

   get_window
   barthann
   bartlett
   blackman
   blackmanharris
   bohman
   boxcar
   chebwin
   flattop
   gaussian
   general_gaussian
   hamming
   hann
   kaiser
   nuttall
   parzen
   slepian
   triang

Wavelets
========

.. autosummary::
   :toctree: generated/

   cascade
   daub
   morlet
   qmf
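From the filter-design group, ``firwin`` designs a windowed-sinc FIR lowpass; by default the taps are scaled so the passband (DC, for a lowpass) has unit gain, which the tap sum reflects. A quick sketch:

```python
from scipy.signal import firwin

# 29-tap lowpass, cutoff at 0.4 of the Nyquist frequency.
taps = firwin(29, 0.4)
dc_gain = sum(taps)   # ~1.0 because of the default gain normalization
```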
@ -1,10 +0,0 @@
==================================================
Sparse linear algebra (:mod:`scipy.sparse.linalg`)
==================================================

.. warning::

   This documentation is work-in-progress and unorganized.

.. automodule:: scipy.sparse.linalg
   :members:
@ -1,64 +0,0 @@

=====================================
Sparse matrices (:mod:`scipy.sparse`)
=====================================

.. automodule:: scipy.sparse


Sparse matrix classes
=====================

.. autosummary::
   :toctree: generated/

   csc_matrix
   csr_matrix
   bsr_matrix
   lil_matrix
   dok_matrix
   coo_matrix
   dia_matrix


Functions
=========

Building sparse matrices:

.. autosummary::
   :toctree: generated/

   eye
   identity
   kron
   kronsum
   lil_eye
   lil_diags
   spdiags
   tril
   triu
   bmat
   hstack
   vstack

Identifying sparse matrices:

.. autosummary::
   :toctree: generated/

   issparse
   isspmatrix
   isspmatrix_csc
   isspmatrix_csr
   isspmatrix_bsr
   isspmatrix_lil
   isspmatrix_dok
   isspmatrix_coo
   isspmatrix_dia

Exceptions
==========

.. autoexception:: SparseEfficiencyWarning

.. autoexception:: SparseWarning
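A typical workflow with the classes above is to assemble a matrix in COO format from row/column/value triplets and convert it to CSR for fast arithmetic (a minimal sketch; the matrix entries are arbitrary):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Assemble a 3x3 matrix from (row, col, value) triplets.
row = np.array([0, 1, 2, 2])
col = np.array([0, 1, 0, 2])
data = np.array([4.0, 5.0, 7.0, 9.0])
A = coo_matrix((data, (row, col)), shape=(3, 3))

# Convert to CSR for efficient row slicing and matrix-vector products.
A_csr = A.tocsr()
y = A_csr.dot(np.ones(3))
```

Each format has strengths: LIL and DOK are convenient for incremental construction, while CSR/CSC are preferred for arithmetic and solvers.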
@ -1,6 +0,0 @@

=====================================================
Distance computations (:mod:`scipy.spatial.distance`)
=====================================================

.. automodule:: scipy.spatial.distance
   :members:
@ -1,14 +0,0 @@

=============================================================
Spatial algorithms and data structures (:mod:`scipy.spatial`)
=============================================================

.. warning::

   This documentation is work-in-progress and unorganized.

.. toctree::

   spatial.distance

.. automodule:: scipy.spatial
   :members:
@ -1,512 +0,0 @@

========================================
Special functions (:mod:`scipy.special`)
========================================

.. module:: scipy.special

Nearly all of the functions below are universal functions and follow
broadcasting and automatic array-looping rules. Exceptions are noted.

Error handling
==============

Errors are handled by returning NaNs or other appropriate values.
Some of the special function routines can print an error message when
an error occurs. By default this printing is disabled; to enable such
messages use ``errprint(1)``, and to disable them again use
``errprint(0)``.

Example::

    >>> print scipy.special.bdtr(-1,10,0.3)
    >>> scipy.special.errprint(1)
    >>> print scipy.special.bdtr(-1,10,0.3)

.. autosummary::
   :toctree: generated/

   errprint
   errstate

Available functions
===================

Airy functions
--------------

.. autosummary::
   :toctree: generated/

   airy
   airye
   ai_zeros
   bi_zeros


Elliptic Functions and Integrals
--------------------------------

.. autosummary::
   :toctree: generated/

   ellipj
   ellipk
   ellipkinc
   ellipe
   ellipeinc

Bessel Functions
----------------

.. autosummary::
   :toctree: generated/

   jn
   jv
   jve
   yn
   yv
   yve
   kn
   kv
   kve
   iv
   ive
   hankel1
   hankel1e
   hankel2
   hankel2e

The following is not a universal function:

.. autosummary::
   :toctree: generated/

   lmbda

Zeros of Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^

These are not universal functions:

.. autosummary::
   :toctree: generated/

   jnjnp_zeros
   jnyn_zeros
   jn_zeros
   jnp_zeros
   yn_zeros
   ynp_zeros
   y0_zeros
   y1_zeros
   y1p_zeros

Faster versions of common Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autosummary::
   :toctree: generated/

   j0
   j1
   y0
   y1
   i0
   i0e
   i1
   i1e
   k0
   k0e
   k1
   k1e

Integrals of Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autosummary::
   :toctree: generated/

   itj0y0
   it2j0y0
   iti0k0
   it2i0k0
   besselpoly

Derivatives of Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autosummary::
   :toctree: generated/

   jvp
   yvp
   kvp
   ivp
   h1vp
   h2vp

Spherical Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^

These are not universal functions:

.. autosummary::
   :toctree: generated/

   sph_jn
   sph_yn
   sph_jnyn
   sph_in
   sph_kn
   sph_inkn

Riccati-Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^

These are not universal functions:

.. autosummary::
   :toctree: generated/

   riccati_jn
   riccati_yn

Struve Functions
----------------

.. autosummary::
   :toctree: generated/

   struve
   modstruve
   itstruve0
   it2struve0
   itmodstruve0


Raw Statistical Functions
-------------------------

.. seealso:: :mod:`scipy.stats`: Friendly versions of these functions.

.. autosummary::
   :toctree: generated/

   bdtr
   bdtrc
   bdtri
   btdtr
   btdtri
   fdtr
   fdtrc
   fdtri
   gdtr
   gdtrc
   gdtria
   gdtrib
   gdtrix
   nbdtr
   nbdtrc
   nbdtri
   pdtr
   pdtrc
   pdtri
   stdtr
   stdtridf
   stdtrit
   chdtr
   chdtrc
   chdtri
   ndtr
   ndtri
   smirnov
   smirnovi
   kolmogorov
   kolmogi
   tklmbda

Gamma and Related Functions
---------------------------

.. autosummary::
   :toctree: generated/

   gamma
   gammaln
   gammainc
   gammaincinv
   gammaincc
   gammainccinv
   beta
   betaln
   betainc
   betaincinv
   psi
   rgamma
   polygamma
   multigammaln


Error Function and Fresnel Integrals
------------------------------------

.. autosummary::
   :toctree: generated/

   erf
   erfc
   erfinv
   erfcinv
   erf_zeros
   fresnel
   fresnel_zeros
   modfresnelp
   modfresnelm

These are not universal functions:

.. autosummary::
   :toctree: generated/

   fresnelc_zeros
   fresnels_zeros

Legendre Functions
------------------

.. autosummary::
   :toctree: generated/

   lpmv
   sph_harm

These are not universal functions:

.. autosummary::
   :toctree: generated/

   lpn
   lqn
   lpmn
   lqmn

Orthogonal polynomials
----------------------

The following functions evaluate values of orthogonal polynomials:

.. autosummary::
   :toctree: generated/

   eval_legendre
   eval_chebyt
   eval_chebyu
   eval_chebyc
   eval_chebys
   eval_jacobi
   eval_laguerre
   eval_genlaguerre
   eval_hermite
   eval_hermitenorm
   eval_gegenbauer
   eval_sh_legendre
   eval_sh_chebyt
   eval_sh_chebyu
   eval_sh_jacobi
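For example, ``eval_chebyt`` evaluates a Chebyshev polynomial of the first kind directly, which can be checked against the closed form :math:`T_3(x) = 4x^3 - 3x` (a minimal sketch; the degree and evaluation point are arbitrary):

```python
import numpy as np
from scipy.special import eval_chebyt

x = 0.5
# T_3(x) = 4x^3 - 3x for Chebyshev polynomials of the first kind.
val = eval_chebyt(3, x)
closed_form = 4 * x**3 - 3 * x
```

These evaluation routines are preferable to constructing a polynomial object when only point values are needed.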
The functions below, in turn, return :ref:`orthopoly1d` objects, which
function similarly to :ref:`numpy.poly1d`. The :ref:`orthopoly1d`
class also has an attribute ``weights``, which returns the roots,
weights, and total weights for the appropriate form of Gaussian
quadrature. These are returned in an ``n x 3`` array with roots in the
first column, weights in the second column, and total weights in the
final column.

.. autosummary::
   :toctree: generated/

   legendre
   chebyt
   chebyu
   chebyc
   chebys
   jacobi
   laguerre
   genlaguerre
   hermite
   hermitenorm
   gegenbauer
   sh_legendre
   sh_chebyt
   sh_chebyu
   sh_jacobi

.. warning::

   Large-order polynomials obtained from these functions
   are numerically unstable.

   ``orthopoly1d`` objects are converted to ``poly1d`` when doing
   arithmetic. ``numpy.poly1d`` works in the power basis and cannot
   represent high-order polynomials accurately, which can cause
   significant inaccuracy.


Hypergeometric Functions
------------------------

.. autosummary::
   :toctree: generated/

   hyp2f1
   hyp1f1
   hyperu
   hyp0f1
   hyp2f0
   hyp1f2
   hyp3f0


Parabolic Cylinder Functions
----------------------------

.. autosummary::
   :toctree: generated/

   pbdv
   pbvv
   pbwa

These are not universal functions:

.. autosummary::
   :toctree: generated/

   pbdv_seq
   pbvv_seq
   pbdn_seq

Mathieu and Related Functions
-----------------------------

.. autosummary::
   :toctree: generated/

   mathieu_a
   mathieu_b

These are not universal functions:

.. autosummary::
   :toctree: generated/

   mathieu_even_coef
   mathieu_odd_coef

The following return both the function and its first derivative:

.. autosummary::
   :toctree: generated/

   mathieu_cem
   mathieu_sem
   mathieu_modcem1
   mathieu_modcem2
   mathieu_modsem1
   mathieu_modsem2

Spheroidal Wave Functions
-------------------------

.. autosummary::
   :toctree: generated/

   pro_ang1
   pro_rad1
   pro_rad2
   obl_ang1
   obl_rad1
   obl_rad2
   pro_cv
   obl_cv
   pro_cv_seq
   obl_cv_seq

The following functions require a pre-computed characteristic value:

.. autosummary::
   :toctree: generated/

   pro_ang1_cv
   pro_rad1_cv
   pro_rad2_cv
   obl_ang1_cv
   obl_rad1_cv
   obl_rad2_cv

Kelvin Functions
----------------

.. autosummary::
   :toctree: generated/

   kelvin
   kelvin_zeros
   ber
   bei
   berp
   beip
   ker
   kei
   kerp
   keip

These are not universal functions:

.. autosummary::
   :toctree: generated/

   ber_zeros
   bei_zeros
   berp_zeros
   beip_zeros
   ker_zeros
   kei_zeros
   kerp_zeros
   keip_zeros

Other Special Functions
-----------------------

.. autosummary::
   :toctree: generated/

   expn
   exp1
   expi
   wofz
   dawsn
   shichi
   sici
   spence
   lambertw
   zeta
   zetac

Convenience Functions
---------------------

.. autosummary::
   :toctree: generated/

   cbrt
   exp10
   exp2
   radian
   cosdg
   sindg
   tandg
   cotdg
   log1p
   expm1
   cosm1
   round
@ -1,81 +0,0 @@

.. module:: scipy.stats.mstats

===================================================================
Statistical functions for masked arrays (:mod:`scipy.stats.mstats`)
===================================================================

This module contains a large number of statistical functions that can
be used with masked arrays.

Most of these functions are similar to those in :mod:`scipy.stats`,
but they might have small differences in the API or in the algorithm
used. Since this is a relatively new package, some API changes are
still possible.

.. autosummary::
   :toctree: generated/

   argstoarray
   betai
   chisquare
   count_tied_groups
   describe
   f_oneway
   f_value_wilks_lambda
   find_repeats
   friedmanchisquare
   gmean
   hmean
   kendalltau
   kendalltau_seasonal
   kruskalwallis
   ks_twosamp
   kurtosis
   kurtosistest
   linregress
   mannwhitneyu
   mode
   moment
   mquantiles
   msign
   normaltest
   obrientransform
   pearsonr
   plotting_positions
   pointbiserialr
   rankdata
   samplestd
   samplevar
   scoreatpercentile
   sem
   signaltonoise
   skew
   skewtest
   spearmanr
   std
   stderr
   theilslopes
   threshold
   tmax
   tmean
   tmin
   trim
   trima
   trimboth
   trimmed_stde
   trimr
   trimtail
   tsem
   ttest_onesamp
   ttest_ind
   ttest_rel
   tvar
   var
   variation
   winsorize
   z
   zmap
   zs
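A small illustration of the masked-array behaviour (a sketch; the data values are arbitrary): masked entries are simply excluded from the computation.

```python
import numpy.ma as ma
from scipy.stats import mstats

# Mask out the sentinel value 0 so it does not enter the statistics.
data = ma.masked_values([2.0, 8.0, 0.0], 0.0)

# The geometric mean is computed over the unmasked entries only:
# sqrt(2 * 8) = 4.
g = mstats.gmean(data)
```

The corresponding :mod:`scipy.stats` function would include the zero and return zero, so the masked variants matter whenever missing or invalid values are encoded in the data.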
@ -1,284 +0,0 @@

.. module:: scipy.stats

==========================================
Statistical functions (:mod:`scipy.stats`)
==========================================

This module contains a large number of probability distributions as
well as a growing library of statistical functions.

Each included continuous distribution is an instance of the class
rv_continuous:

.. autosummary::
   :toctree: generated/

   rv_continuous
   rv_continuous.pdf
   rv_continuous.cdf
   rv_continuous.sf
   rv_continuous.ppf
   rv_continuous.isf
   rv_continuous.stats

Each discrete distribution is an instance of the class rv_discrete:

.. autosummary::
   :toctree: generated/

   rv_discrete
   rv_discrete.pmf
   rv_discrete.cdf
   rv_discrete.sf
   rv_discrete.ppf
   rv_discrete.isf
   rv_discrete.stats
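For example, the ``norm`` distribution instance exposes the rv_continuous methods listed above (a minimal sketch using the standard normal):

```python
from scipy.stats import norm

# cdf and ppf are inverses of each other; for the standard normal
# the median is 0, so cdf(0) = 0.5 and ppf(0.5) = 0.
p = norm.cdf(0.0)
x = norm.ppf(0.5)

# The density at the mode is 1/sqrt(2*pi).
density = norm.pdf(0.0)
```

Every distribution listed below supports the same method set, with shape, location, and scale parameters as extra arguments.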
Continuous distributions
========================

.. autosummary::
   :toctree: generated/

   norm
   alpha
   anglit
   arcsine
   beta
   betaprime
   bradford
   burr
   fisk
   cauchy
   chi
   chi2
   cosine
   dgamma
   dweibull
   erlang
   expon
   exponweib
   exponpow
   fatiguelife
   foldcauchy
   f
   foldnorm
   frechet_r
   frechet_l
   genlogistic
   genpareto
   genexpon
   genextreme
   gausshyper
   gamma
   gengamma
   genhalflogistic
   gompertz
   gumbel_r
   gumbel_l
   halfcauchy
   halflogistic
   halfnorm
   hypsecant
   invgamma
   invnorm
   invweibull
   johnsonsb
   johnsonsu
   laplace
   logistic
   loggamma
   loglaplace
   lognorm
   gilbrat
   lomax
   maxwell
   mielke
   nakagami
   ncx2
   ncf
   t
   nct
   pareto
   powerlaw
   powerlognorm
   powernorm
   rdist
   reciprocal
   rayleigh
   rice
   recipinvgauss
   semicircular
   triang
   truncexpon
   truncnorm
   tukeylambda
   uniform
   vonmises
   wald
   weibull_min
   weibull_max
   wrapcauchy
   ksone
   kstwobign

Discrete distributions
======================

.. autosummary::
   :toctree: generated/

   binom
   bernoulli
   nbinom
   geom
   hypergeom
   logser
   poisson
   planck
   boltzmann
   randint
   zipf
   dlaplace

Statistical functions
=====================

Several of these functions have a similar version in
:mod:`scipy.stats.mstats` which work for masked arrays.

.. autosummary::
   :toctree: generated/

   gmean
   hmean
   mean
   cmedian
   median
   mode
   tmean
   tvar
   tmin
   tmax
   tstd
   tsem
   moment
   variation
   skew
   kurtosis
   describe
   skewtest
   kurtosistest
   normaltest


.. autosummary::
   :toctree: generated/

   itemfreq
   scoreatpercentile
   percentileofscore
   histogram2
   histogram
   cumfreq
   relfreq

.. autosummary::
   :toctree: generated/

   obrientransform
   samplevar
   samplestd
   signaltonoise
   bayes_mvs
   var
   std
   stderr
   sem
   z
   zs
   zmap

.. autosummary::
   :toctree: generated/

   threshold
   trimboth
   trim1
   cov
   corrcoef

.. autosummary::
   :toctree: generated/

   f_oneway
   pearsonr
   spearmanr
   pointbiserialr
   kendalltau
   linregress

.. autosummary::
   :toctree: generated/

   ttest_1samp
   ttest_ind
   ttest_rel
   kstest
   chisquare
   ks_2samp
   mannwhitneyu
   tiecorrect
   ranksums
   wilcoxon
   kruskal
   friedmanchisquare

.. autosummary::
   :toctree: generated/

   ansari
   bartlett
   levene
   shapiro
   anderson
   binom_test
   fligner
   mood
   oneway


.. autosummary::
   :toctree: generated/

   glm
   anova

Plot-tests
==========

.. autosummary::
   :toctree: generated/

   probplot
   ppcc_max
   ppcc_plot


Masked statistics functions
===========================

.. toctree::

   stats.mstats


Univariate and multivariate kernel density estimation (:mod:`scipy.stats.kde`)
==============================================================================

.. autosummary::
   :toctree: generated/

   gaussian_kde

For many more statistics-related functions, install the software R and
the interface package rpy.
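A minimal sketch of kernel density estimation with ``gaussian_kde`` (the sample values are arbitrary):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Fit a Gaussian kernel density estimate to a small 1-d sample.
sample = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
kde = gaussian_kde(sample)

# The estimated density is positive everywhere and, for this symmetric
# sample, is larger near zero than far in the tail.
d0 = kde(0.0)[0]
d3 = kde(3.0)[0]
```

The bandwidth is chosen automatically from the sample; ``kde`` can then be evaluated at any set of points.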
@ -1,302 +0,0 @@

Basic functions in Numpy (and top-level scipy)
==============================================

.. sectionauthor:: Travis E. Oliphant

.. currentmodule:: numpy

.. contents::

Interaction with Numpy
----------------------

To begin with, all of the Numpy functions have been subsumed into the
:mod:`scipy` namespace so that all of those functions are available
without additionally importing Numpy. In addition, the universal
functions (addition, subtraction, division) have been altered to not
raise exceptions if floating-point errors are encountered; instead,
NaN's and Inf's are returned in the arrays. To assist in detection of
these events, several functions (:func:`sp.isnan`, :func:`sp.isfinite`,
:func:`sp.isinf`) are available.

Finally, some of the basic functions like log, sqrt, and inverse trig
functions have been modified to return complex numbers instead of
NaN's where appropriate (*i.e.* ``sp.sqrt(-1)`` returns ``1j``).


Top-level scipy routines
------------------------

The purpose of the top level of scipy is to collect general-purpose
routines that the other sub-packages can use and to provide a simple
replacement for Numpy. Anytime you might think to import Numpy, you
can import scipy instead and remove yourself from direct dependence on
Numpy. These routines are divided into several files for
organizational purposes, but they are all available under the numpy
namespace (and the scipy namespace). There are routines for type
handling and type checking, shape and matrix manipulation, polynomial
processing, and other useful functions. Rather than giving a detailed
description of each of these functions (which is available in the
Numpy Reference Guide or by using the :func:`help`, :func:`info` and
:func:`source` commands), this tutorial will discuss some of the more
useful commands, which require a little introduction to use to their
full potential.


Type handling
^^^^^^^^^^^^^

Note the difference between :func:`sp.iscomplex`/:func:`sp.isreal` and
:func:`sp.iscomplexobj`/:func:`sp.isrealobj`. The former commands are
array-based and return byte arrays of ones and zeros providing the
result of the element-wise test. The latter commands are object-based
and return a scalar describing the result of the test on the entire
object.

Often it is required to get just the real and/or imaginary part of a
complex number. While complex numbers and arrays have attributes that
return those values, if one is not sure whether or not the object will
be complex-valued, it is better to use the functional forms
:func:`sp.real` and :func:`sp.imag`. These functions succeed for
anything that can be turned into a Numpy array. Consider also the
function :func:`sp.real_if_close`, which transforms a complex-valued
number with a tiny imaginary part into a real number.

Occasionally the need to check whether or not a number is a scalar
(Python (long)int, Python float, Python complex, or rank-0 array)
occurs in coding. This functionality is provided in the convenient
function :func:`sp.isscalar`, which returns a 1 or a 0.
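The element-wise versus object-level distinction can be sketched with plain Numpy (the same functions exist under the scipy namespace in the release this tutorial describes):

```python
import numpy as np

arr = np.array([1 + 0j, 1j, 2.0])

# Element-wise test: returns a boolean array. Note that a complex value
# with a zero imaginary part tests False element-wise.
elementwise = np.iscomplex(arr)

# Object-level test: returns a single bool describing the array's dtype,
# which here is complex regardless of the individual values.
objectwise = np.iscomplexobj(arr)
```

So ``iscomplex`` asks "does this element have a nonzero imaginary part?", while ``iscomplexobj`` asks "is this stored as a complex type?".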
Finally, ensuring that objects are a certain Numpy type occurs often
enough that it has been given a convenient interface in SciPy through
the use of the :obj:`sp.cast` dictionary. The dictionary is keyed by
the type it is desired to cast to, and the dictionary stores functions
to perform the casting. Thus, ``sp.cast['f'](d)`` returns an array
of :class:`sp.float32` from *d*. This function is also useful as an
easy way to get a scalar of a certain type::

    >>> sp.cast['f'](sp.pi)
    array(3.1415927410125732, dtype=float32)

Index Tricks
^^^^^^^^^^^^

There are some class instances that make special use of the slicing
functionality to provide efficient means for array construction. This
part will discuss the operation of :obj:`sp.mgrid`, :obj:`sp.ogrid`,
:obj:`sp.r_`, and :obj:`sp.c_` for quickly constructing arrays.

One familiar with Matlab may complain that it is difficult to
construct arrays from the interactive session with Python. Suppose,
for example, that one wants to construct an array that begins with 3
followed by 5 zeros and then contains 10 numbers spanning the range -1
to 1 (inclusive on both ends). Before SciPy, you would need to enter
something like the following

    >>> concatenate(([3],[0]*5,arange(-1,1.002,2/9.0)))

With the :obj:`r_` command one can enter this as

    >>> r_[3,[0]*5,-1:1:10j]

which can ease typing and make for more readable code. Notice how
objects are concatenated, and the slicing syntax is (ab)used to
construct ranges. The other term that deserves a little explanation is
the use of the complex number 10j as the step size in the slicing
syntax. This non-standard use allows the number to be interpreted as
the number of points to produce in the range rather than as a step
size (note we would have used the long integer notation, 10L, but this
notation may go away in Python as the integers become unified). This
non-standard usage may be unsightly to some, but it gives the user the
ability to quickly construct complicated vectors in a very readable
fashion. When the number of points is specified in this way, the
end-point is inclusive.

The "r" stands for row concatenation, because if the objects between
commas are 2-dimensional arrays, they are stacked by rows (and thus
must have commensurate columns). There is an equivalent command
:obj:`c_` that stacks 2-d arrays by columns but works identically to
:obj:`r_` for 1-d arrays.

Another very useful class instance which makes use of extended slicing
notation is the function :obj:`mgrid`. In the simplest case, this
function can be used to construct 1-d ranges as a convenient
substitute for arange. It also allows the use of complex numbers in
the step-size to indicate the number of points to place between the
(inclusive) end-points. The real purpose of this function, however, is
to produce N, N-d arrays, which provide coordinate arrays for an
N-dimensional volume. The easiest way to understand this is with an
example of its usage:

    >>> mgrid[0:5,0:5]
    array([[[0, 0, 0, 0, 0],
            [1, 1, 1, 1, 1],
            [2, 2, 2, 2, 2],
            [3, 3, 3, 3, 3],
            [4, 4, 4, 4, 4]],
           [[0, 1, 2, 3, 4],
            [0, 1, 2, 3, 4],
            [0, 1, 2, 3, 4],
            [0, 1, 2, 3, 4],
            [0, 1, 2, 3, 4]]])
    >>> mgrid[0:5:4j,0:5:4j]
    array([[[ 0.    ,  0.    ,  0.    ,  0.    ],
            [ 1.6667,  1.6667,  1.6667,  1.6667],
            [ 3.3333,  3.3333,  3.3333,  3.3333],
            [ 5.    ,  5.    ,  5.    ,  5.    ]],
           [[ 0.    ,  1.6667,  3.3333,  5.    ],
            [ 0.    ,  1.6667,  3.3333,  5.    ],
            [ 0.    ,  1.6667,  3.3333,  5.    ],
            [ 0.    ,  1.6667,  3.3333,  5.    ]]])

Having meshed arrays like this is sometimes very useful. However, it
is not always needed just to evaluate some N-dimensional function over
a grid due to the array-broadcasting rules of Numpy and SciPy. If this
is the only purpose for generating a meshgrid, you should instead use
the function :obj:`ogrid`, which generates an "open" grid using
NewAxis judiciously to create N, N-d arrays where only one dimension
in each array has length greater than 1. This will save memory and
create the same result if the only purpose for the meshgrid is to
generate sample points for evaluation of an N-d function.
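The open-grid idea can be sketched directly with Numpy's ``ogrid``:

```python
import numpy as np

# ogrid returns "open" coordinate arrays: only one dimension in each
# array is longer than 1, so broadcasting fills in the full grid.
x, y = np.ogrid[0:3, 0:3]

# x has shape (3, 1) and y has shape (1, 3); broadcasting x + y yields
# the same 3x3 result that the dense mgrid version would, using less memory.
z = x + y
```

For evaluating a function over the grid, ``f(x, y)`` with open arrays and ``f(X, Y)`` with the dense ``mgrid`` arrays give identical results.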

Shape manipulation
^^^^^^^^^^^^^^^^^^

In this category of functions are routines for squeezing out
length-one dimensions from N-dimensional arrays, ensuring that an
array is at least 1-, 2-, or 3-dimensional, and stacking
(concatenating) arrays by rows, columns, and "pages" (in the third
dimension). Routines for splitting arrays (roughly the opposite of
stacking arrays) are also available.
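A few of these routines in action (a minimal sketch using the Numpy spellings, which are the same under the scipy namespace):

```python
import numpy as np

a = np.array([1, 2, 3])

# Ensure at least two dimensions: a 1-d array becomes a 1 x N row.
m = np.atleast_2d(a)

# Stack by rows and concatenate along the single axis.
rows = np.vstack([a, a])   # shape (2, 3)
cols = np.hstack([a, a])   # shape (6,)

# squeeze removes the length-one dimension again.
back = np.squeeze(m)
```

The splitting routines (``hsplit``, ``vsplit``, ``dsplit``) reverse the corresponding stacking operations.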

Polynomials
^^^^^^^^^^^

There are two (interchangeable) ways to deal with 1-d polynomials in
SciPy. The first is to use the :class:`poly1d` class from Numpy. This
class accepts coefficients or polynomial roots to initialize a
polynomial. The polynomial object can then be manipulated in algebraic
expressions, integrated, differentiated, and evaluated. It even prints
like a polynomial:

    >>> p = poly1d([3,4,5])
    >>> print p
       2
    3 x + 4 x + 5
    >>> print p*p
       4       3       2
    9 x + 24 x + 46 x + 40 x + 25
    >>> print p.integ(k=6)
     3     2
    x + 2 x + 5 x + 6
    >>> print p.deriv()
    6 x + 4
    >>> p([4,5])
    array([ 69, 100])

The other way to handle polynomials is as an array of coefficients
with the first element of the array giving the coefficient of the
highest power. There are explicit functions to add, subtract,
multiply, divide, integrate, differentiate, and evaluate polynomials
represented as sequences of coefficients.
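The coefficient-array approach can be sketched with Numpy's ``poly*`` functions:

```python
import numpy as np

# Coefficients are ordered from the highest power down: 3x^2 + 4x + 5.
p = [3, 4, 5]
q = [1, 0]   # the polynomial x

# Evaluate, add, and multiply using the coefficient-array functions.
value = np.polyval(p, 4)    # 3*16 + 4*4 + 5 = 69
s = np.polyadd(p, q)        # 3x^2 + 5x + 5
prod = np.polymul(p, q)     # 3x^3 + 4x^2 + 5x
```

The result of each operation is again a plain coefficient array, so the two representations convert freely (``poly1d(p)`` wraps a coefficient array, and ``p1d.coeffs`` unwraps it).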

Vectorizing functions (vectorize)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

One of the features that NumPy provides is a class :obj:`vectorize` to
convert an ordinary Python function which accepts scalars and returns
scalars into a "vectorized function" with the same broadcasting rules
as other Numpy functions (*i.e.* the universal functions, or
ufuncs). For example, suppose you have a Python function named
:obj:`addsubtract` defined as:

    >>> def addsubtract(a,b):
    ...     if a > b:
    ...         return a - b
    ...     else:
    ...         return a + b

which defines a function of two scalar variables and returns a scalar
result. The class vectorize can be used to "vectorize" this function
so that ::

    >>> vec_addsubtract = vectorize(addsubtract)

returns a function which takes array arguments and returns an array
result:

    >>> vec_addsubtract([0,3,6,9],[1,3,5,7])
    array([1, 6, 1, 2])

This particular function could have been written in vector form
without the use of :obj:`vectorize`. But what if the function you have
written is the result of some optimization or integration routine?
Such functions can likely only be vectorized using ``vectorize``.

Other useful functions
^^^^^^^^^^^^^^^^^^^^^^

There are several other functions in the scipy_base package, including
most of the other functions that are also in the Numpy package. The
reason for duplicating these functions is to allow SciPy to
potentially alter their original interface and make it easier for
users to know how to get access to functions:

    >>> from scipy import *

Functions which should be mentioned are :obj:`mod(x,y)`, which can
replace ``x % y`` when it is desired that the result take the sign of
*y* instead of *x*. Also included is :obj:`fix`, which always rounds
to the nearest integer towards zero. For doing phase processing, the
functions :func:`angle` and :obj:`unwrap` are also useful. Also, the
:obj:`linspace` and :obj:`logspace` functions return equally spaced
samples in a linear or log scale. Finally, it's useful to be aware of
the indexing capabilities of Numpy. Mention should be made of the new
function :obj:`select`, which extends the functionality of :obj:`where`
to include multiple conditions and multiple choices. The calling
convention is ``select(condlist,choicelist,default=0)``. :obj:`select`
is a vectorized form of the multiple if-statement. It allows rapid
construction of a function which returns an array of results based on
a list of conditions. Each element of the return array is taken from
the array in ``choicelist`` corresponding to the first condition in
``condlist`` that is true. For example

    >>> x = r_[-2:3]
    >>> x
    array([-2, -1,  0,  1,  2])
    >>> select([x > 3, x >= 0],[0,x+2])
    array([0, 0, 2, 3, 4])
||||
Common functions
|
||||
----------------
|
||||
|
||||
Some functions depend on sub-packages of SciPy but should be available
|
||||
from the top-level of SciPy due to their common use. These are
|
||||
functions that might have been placed in scipy_base except for their
|
||||
dependence on other sub-packages of SciPy. For example the
|
||||
:obj:`factorial` and :obj:`comb` functions compute :math:`n!` and
|
||||
:math:`n!/k!(n-k)!` using either exact integer arithmetic (thanks to
|
||||
Python's Long integer object), or by using floating-point precision
|
||||
and the gamma function. The functions :obj:`rand` and :obj:`randn`
|
||||
are used so often that they warranted a place at the top level. There
|
||||
are convenience functions for the interactive use: :obj:`disp`
|
||||
(similar to print), and :obj:`who` (returns a list of defined
|
||||
variables and memory consumption--upper bounded). Another function
|
||||
returns a common image used in image processing: :obj:`lena`.
|
||||
|
||||
Finally, two functions are provided that are useful for approximating
|
||||
derivatives of functions using discrete-differences. The function
|
||||
:obj:`central_diff_weights` returns weighting coefficients for an
|
||||
equally-spaced :math:`N`-point approximation to the derivative of
|
||||
order *o*. These weights must be multiplied by the function
|
||||
corresponding to these points and the results added to obtain the
|
||||
derivative approximation. This function is intended for use when only
|
||||
samples of the function are avaiable. When the function is an object
|
||||
that can be handed to a routine and evaluated, the function
|
||||
:obj:`derivative` can be used to automatically evaluate the object at
|
||||
the correct points to obtain an N-point approximation to the *o*-th
|
||||
derivative at a given point.
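The idea behind :obj:`derivative` can be sketched in a few lines. The helper below is purely illustrative (the name ``central_diff`` and the fixed step ``h`` are our own choices, not part of SciPy): it applies the 3-point central-difference weights :math:`(-1/2, 0, 1/2)` scaled by :math:`1/h` to approximate a first derivative.

```python
import numpy as np

def central_diff(f, x0, h=1e-5):
    """Approximate f'(x0) with a 3-point central difference.

    Uses the weights (-1/2, 0, 1/2) scaled by 1/h, the same
    weights central_diff_weights would return for 3 points.
    """
    return (f(x0 + h) - f(x0 - h)) / (2.0 * h)

# d/dx sin(x) at x=0 is cos(0) = 1
approx = central_diff(np.sin, 0.0)
```

The error of this approximation is :math:`O(h^2)`, so with ``h=1e-5`` the result agrees with the exact derivative to roughly ten digits.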

@ -1,55 +0,0 @@

>>> sp.info(optimize.fmin)
 fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
      full_output=0, disp=1, retall=0, callback=None)

Minimize a function using the downhill simplex algorithm.

:Parameters:

  func : callable func(x,*args)
      The objective function to be minimized.
  x0 : ndarray
      Initial guess.
  args : tuple
      Extra arguments passed to func, i.e. ``f(x,*args)``.
  callback : callable
      Called after each iteration, as callback(xk), where xk is the
      current parameter vector.

:Returns: (xopt, {fopt, iter, funcalls, warnflag})

  xopt : ndarray
      Parameter that minimizes function.
  fopt : float
      Value of function at minimum: ``fopt = func(xopt)``.
  iter : int
      Number of iterations performed.
  funcalls : int
      Number of function calls made.
  warnflag : int
      1 : Maximum number of function evaluations made.
      2 : Maximum number of iterations reached.
  allvecs : list
      Solution at each iteration.

*Other Parameters*:

  xtol : float
      Relative error in xopt acceptable for convergence.
  ftol : number
      Relative error in func(xopt) acceptable for convergence.
  maxiter : int
      Maximum number of iterations to perform.
  maxfun : number
      Maximum number of function evaluations to make.
  full_output : bool
      Set to True if fopt and warnflag outputs are desired.
  disp : bool
      Set to True to print convergence messages.
  retall : bool
      Set to True to return list of solutions at each iteration.

:Notes:

  Uses a Nelder-Mead simplex algorithm to find the minimum of a
  function of one or more variables.
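As a minimal usage sketch of the signature documented above (the quadratic objective is our own example):

```python
from scipy.optimize import fmin

# minimize (x - 1)^2; disp=0 suppresses the convergence printout
xopt = fmin(lambda x: (x - 1.0) ** 2, x0=0.0, disp=0)
```

With the default ``xtol`` of 1e-4, the returned ``xopt`` lands within about 1e-4 of the true minimizer at 1.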

@ -1,25 +0,0 @@

>>> help(integrate)
 Methods for Integrating Functions given function object.

   quad       -- General purpose integration.
   dblquad    -- General purpose double integration.
   tplquad    -- General purpose triple integration.
   fixed_quad -- Integrate func(x) using Gaussian quadrature of order n.
   quadrature -- Integrate with given tolerance using Gaussian quadrature.
   romberg    -- Integrate func using Romberg integration.

 Methods for Integrating Functions given fixed samples.

   trapz      -- Use trapezoidal rule to compute integral from samples.
   cumtrapz   -- Use trapezoidal rule to cumulatively compute integral.
   simps      -- Use Simpson's rule to compute integral from samples.
   romb       -- Use Romberg Integration to compute integral from
                 (2**k + 1) evenly-spaced samples.

 See the special module's orthogonal polynomials (special) for Gaussian
 quadrature roots and weights for other weighting factors and regions.

 Interface to numerical integrators of ODE systems.

   odeint     -- General integration of ordinary differential equations.
   ode        -- Integrate ODE using VODE and ZVODE routines.

@ -1,91 +0,0 @@

>>> from scipy import optimize
>>> info(optimize)
 Optimization Tools
 ==================

 A collection of general-purpose optimization routines.

   fmin        --  Nelder-Mead Simplex algorithm
                   (uses only function calls)
   fmin_powell --  Powell's (modified) level set method (uses only
                   function calls)
   fmin_cg     --  Non-linear (Polak-Ribiere) conjugate gradient algorithm
                   (can use function and gradient).
   fmin_bfgs   --  Quasi-Newton method (Broyden-Fletcher-Goldfarb-Shanno)
                   (can use function and gradient)
   fmin_ncg    --  Line-search Newton Conjugate Gradient (can use
                   function, gradient and Hessian).
   leastsq     --  Minimize the sum of squares of M equations in
                   N unknowns given a starting estimate.

 Constrained Optimizers (multivariate)

   fmin_l_bfgs_b -- Zhu, Byrd, and Nocedal's L-BFGS-B constrained optimizer
                    (if you use this please quote their papers -- see help)
   fmin_tnc      -- Truncated Newton Code originally written by Stephen Nash
                    and adapted to C by Jean-Sebastien Roy.
   fmin_cobyla   -- Constrained Optimization BY Linear Approximation

 Global Optimizers

   anneal      --  Simulated Annealing
   brute       --  Brute force searching optimizer

 Scalar function minimizers

   fminbound   --  Bounded minimization of a scalar function.
   brent       --  1-D function minimization using Brent's method.
   golden      --  1-D function minimization using the Golden Section method.
   bracket     --  Bracket a minimum (given two starting points).

 Also a collection of general-purpose root-finding routines.

   fsolve      --  Non-linear multi-variable equation solver.

 Scalar function solvers

   brentq      --  quadratic interpolation Brent method
   brenth      --  Brent method (modified by Harris with hyperbolic
                   extrapolation)
   ridder      --  Ridder's method
   bisect      --  Bisection method
   newton      --  Secant method or Newton's method

   fixed_point --  Single-variable fixed-point solver.

 A collection of general-purpose nonlinear multidimensional solvers.

   broyden1            --  Broyden's first method: a quasi-Newton-Raphson
                           method for updating an approximate Jacobian and
                           then inverting it.
   broyden2            --  Broyden's second method: the same as broyden1,
                           but updates the inverse Jacobian directly.
   broyden3            --  the same as broyden2, but instead of directly
                           computing the inverse Jacobian, it remembers how
                           to construct it using vectors; when computing
                           inv(J)*F, it uses those vectors to compute this
                           product, thus avoiding the expensive NxN matrix
                           multiplication.
   broyden_generalized --  Generalized Broyden's method: the same as
                           broyden2, but instead of approximating the full
                           NxN Jacobian, it constructs it at every iteration
                           in a way that avoids the NxN matrix
                           multiplication. This is not as precise as
                           broyden3.
   anderson            --  extended Anderson method: the same as
                           broyden_generalized, but adds w_0^2*I before
                           taking the inverse to improve stability.
   anderson2           --  the Anderson method: the same as anderson, but
                           formulated differently.

 Utility Functions

   line_search --  Return a step that satisfies the strong Wolfe conditions.
   check_grad  --  Check the supplied derivative using finite difference
                   techniques.
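The scalar solvers listed above share a simple bracketing interface. As a quick illustration (the equation is our own example), ``brentq`` finds the root of :math:`x^2 - 2` on the bracket :math:`[0, 2]`:

```python
from scipy.optimize import brentq

# f(0) < 0 and f(2) > 0, so the bracket is valid; the root is sqrt(2)
root = brentq(lambda x: x**2 - 2.0, 0.0, 2.0)
```

``brentq`` requires the function to change sign over the bracket; with its default tolerances the root is accurate to near machine precision.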

@ -1,45 +0,0 @@

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

npoints = 20   # number of integer support points of the distribution minus 1
npointsh = npoints // 2
npointsf = float(npoints)
nbound = 4   # bounds for the truncated normal
normbound = (1 + 1/npointsf) * nbound   # actual bounds of truncated normal
grid = np.arange(-npointsh, npointsh+2, 1)   # integer grid
gridlimitsnorm = (grid - 0.5) / npointsh * nbound   # bin limits for the truncnorm
gridlimits = grid - 0.5
grid = grid[:-1]
probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound))
gridint = grid
normdiscrete = stats.rv_discrete(
    values=(gridint, np.round(probs, decimals=7)),
    name='normdiscrete')

n_sample = 500
np.random.seed(87655678)   # fix the seed for replicability
rvs = normdiscrete.rvs(size=n_sample)
f, l = np.histogram(rvs, bins=gridlimits)
sfreq = np.vstack([gridint, f, probs*n_sample]).T
fs = sfreq[:, 1] / float(n_sample)
ft = sfreq[:, 2] / float(n_sample)
nd_std = np.sqrt(normdiscrete.stats(moments='v'))

ind = gridint    # the x locations for the groups
width = 0.35     # the width of the bars

plt.subplot(111)
rects1 = plt.bar(ind, ft, width, color='b')
rects2 = plt.bar(ind + width, fs, width, color='r')
normline = plt.plot(ind + width/2.0, stats.norm.pdf(ind, scale=nd_std),
                    color='b')

plt.ylabel('Frequency')
plt.title('Frequency and Probability of normdiscrete')
plt.xticks(ind + width, ind)
plt.legend((rects1[0], rects2[0]), ('true', 'sample'))

plt.show()

@ -1,48 +0,0 @@

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

npoints = 20   # number of integer support points of the distribution minus 1
npointsh = npoints // 2
npointsf = float(npoints)
nbound = 4   # bounds for the truncated normal
normbound = (1 + 1/npointsf) * nbound   # actual bounds of truncated normal
grid = np.arange(-npointsh, npointsh+2, 1)   # integer grid
gridlimitsnorm = (grid - 0.5) / npointsh * nbound   # bin limits for the truncnorm
gridlimits = grid - 0.5
grid = grid[:-1]
probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound))
gridint = grid
normdiscrete = stats.rv_discrete(
    values=(gridint, np.round(probs, decimals=7)),
    name='normdiscrete')

n_sample = 500
np.random.seed(87655678)   # fix the seed for replicability
rvs = normdiscrete.rvs(size=n_sample)
f, l = np.histogram(rvs, bins=gridlimits)
sfreq = np.vstack([gridint, f, probs*n_sample]).T
fs = sfreq[:, 1].cumsum() / float(n_sample)
ft = sfreq[:, 2].cumsum() / float(n_sample)
nd_std = np.sqrt(normdiscrete.stats(moments='v'))

ind = gridint    # the x locations for the groups
width = 0.35     # the width of the bars

plt.figure()
plt.subplot(111)
rects1 = plt.bar(ind, ft, width, color='b')
rects2 = plt.bar(ind + width, fs, width, color='r')
normline = plt.plot(ind + width/2.0, stats.norm.cdf(ind + 0.5, scale=nd_std),
                    color='b')

plt.ylabel('cdf')
plt.title('Cumulative Frequency and CDF of normdiscrete')
plt.xticks(ind + width, ind)
plt.legend((rects1[0], rects2[0]), ('true', 'sample'))

plt.show()

@ -1,145 +0,0 @@

Fourier Transforms (:mod:`scipy.fftpack`)
=========================================

.. sectionauthor:: Scipy Developers

.. currentmodule:: scipy.fftpack

.. warning::

   This is currently a stub page


.. contents::


Fourier analysis is fundamentally a method for expressing a function as a
sum of periodic components, and for recovering the signal from those
components. When both the function and its Fourier transform are
replaced with discretized counterparts, it is called the discrete Fourier
transform (DFT). The DFT has become a mainstay of numerical computing in
part because of a very fast algorithm for computing it, called the Fast
Fourier Transform (FFT), which was known to Gauss (1805) and was brought
to light in its current form by Cooley and Tukey [CT]_. Press et al. [NR]_
provide an accessible introduction to Fourier analysis and its
applications.


Fast Fourier transforms
-----------------------

One dimensional discrete Fourier transforms
-------------------------------------------

fft, ifft, rfft, irfft


Two and n dimensional discrete Fourier transforms
-------------------------------------------------

fft in more than one dimension


Discrete Cosine Transforms
--------------------------

Return the Discrete Cosine Transform [Mak]_ of an arbitrary type sequence
``x``.

For a single-dimension array ``x``, ``dct(x, norm='ortho')`` is equal to
MATLAB's ``dct(x)``.

There are theoretically 8 types of the DCT [WP]_; only the first 3 types
are implemented in SciPy. 'The' DCT generally refers to DCT type 2, and
'the' inverse DCT generally refers to DCT type 3.

type I
~~~~~~

There are several definitions of the DCT-I; we use the following
(for ``norm=None``):

.. math::
   :nowrap:

   \[ y_k = x_0 + (-1)^k x_{N-1} + 2\sum_{n=1}^{N-2} x_n
   \cos\left({\pi nk\over N-1}\right),
   \qquad 0 \le k < N. \]

Only ``norm=None`` is supported as the normalization mode for DCT-I. Note
also that the DCT-I is only supported for input size > 1.

type II
~~~~~~~

There are several definitions of the DCT-II; we use the following
(for ``norm=None``):

.. math::
   :nowrap:

   \[ y_k = 2 \sum_{n=0}^{N-1} x_n
   \cos \left({\pi(2n+1)k \over 2N} \right)
   \qquad 0 \le k < N.\]

If ``norm='ortho'``, :math:`y_k` is multiplied by a scaling factor `f`:

.. math::
   :nowrap:

   \[f = \begin{cases} \sqrt{1/(4N)}, & \text{if $k = 0$} \\
   \sqrt{1/(2N)}, & \text{otherwise} \end{cases} \]

which makes the corresponding matrix of coefficients orthonormal
(`OO' = Id`).

type III
~~~~~~~~

There are several definitions; we use the following
(for ``norm=None``):

.. math::
   :nowrap:

   \[ y_k = x_0 + 2 \sum_{n=1}^{N-1} x_n
   \cos\left({\pi n(2k+1) \over 2N}\right)
   \qquad 0 \le k < N,\]

or, for ``norm='ortho'``:

.. math::
   :nowrap:

   \[ y_k = {x_0\over\sqrt{N}} + {1\over\sqrt{N}} \sum_{n=1}^{N-1}
   x_n \cos\left({\pi n(2k+1) \over 2N}\right)
   \qquad 0 \le k < N.\]

The (unnormalized) DCT-III is the inverse of the (unnormalized) DCT-II, up
to a factor `2N`. The orthonormalized DCT-III is exactly the inverse of the
orthonormalized DCT-II.
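The orthonormality claim for the ``norm='ortho'`` DCT-II scaling can be checked numerically. The sketch below builds the DCT-II coefficient matrix directly from the formula above, applies the scaling factor :math:`f`, and verifies :math:`OO' = Id` (the matrix construction is our own, independent of scipy.fftpack):

```python
import numpy as np

N = 8
n = np.arange(N)
k = n[:, None]
# rows of O: y_k = f_k * 2 * sum_n x_n cos(pi*(2n+1)*k / (2N))
O = 2.0 * np.cos(np.pi * (2 * n + 1) * k / (2.0 * N))
O[0, :] *= np.sqrt(1.0 / (4 * N))    # f for k = 0
O[1:, :] *= np.sqrt(1.0 / (2 * N))   # f for k > 0
identity_check = O @ O.T
```

``identity_check`` agrees with the identity matrix to floating-point precision, confirming that the scaled transform is orthonormal.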

References
~~~~~~~~~~

.. [CT] Cooley, James W., and John W. Tukey, 1965, "An algorithm for the
        machine calculation of complex Fourier series," *Math. Comput.*
        19: 297-301.

.. [NR] Press, W., Teukolsky, S., Vetterling, W.T., and Flannery, B.P.,
        2007, *Numerical Recipes: The Art of Scientific Computing*, ch.
        12-13. Cambridge Univ. Press, Cambridge, UK.

.. [Mak] J. Makhoul, 1980, 'A Fast Cosine Transform in One and Two
         Dimensions', `IEEE Transactions on acoustics, speech and signal
         processing` vol. 28(1), pp. 27-34,
         http://dx.doi.org/10.1109/TASSP.1980.1163351

.. [WP] http://en.wikipedia.org/wiki/Discrete_cosine_transform


FFT convolution
---------------

scipy.fftpack.convolve performs a convolution of two one-dimensional
arrays in the frequency domain.
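The frequency-domain idea behind such a routine can be sketched with plain numpy (scipy.fftpack.convolve itself has its own calling convention; the arrays below are our own example): zero-pad both inputs to the full linear-convolution length, multiply their transforms, and invert.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])
n = len(a) + len(b) - 1          # full linear-convolution length
# zero-padding to n makes the circular convolution equal the linear one
fft_conv = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
direct = np.convolve(a, b)       # direct time-domain result for comparison
```

For long arrays the FFT route costs :math:`O(n \log n)` instead of the :math:`O(n^2)` of direct convolution.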

@ -1,129 +0,0 @@

============
Introduction
============

.. contents::

SciPy is a collection of mathematical algorithms and convenience
functions built on the Numpy extension for Python. It adds
significant power to the interactive Python session by exposing the
user to high-level commands and classes for the manipulation and
visualization of data. With SciPy, an interactive Python session
becomes a data-processing and system-prototyping environment rivaling
systems such as Matlab, IDL, Octave, R-Lab, and SciLab.

The additional power of using SciPy within Python, however, is that a
powerful programming language is also available for use in developing
sophisticated programs and specialized applications. Scientific
applications written in SciPy benefit from the development of
additional modules in numerous niches of the software landscape by
developers across the world. Everything from parallel programming to
web and database subroutines and classes has been made available to
the Python programmer. All of this power is available in addition to
the mathematical libraries in SciPy.

This document provides a tutorial for the first-time user of SciPy to
help get started with some of the features available in this powerful
package. It is assumed that the user has already installed the
package. Some general Python facility is also assumed, such as could be
acquired by working through the Tutorial in the Python distribution.
For further introductory help the user is directed to the Numpy
documentation.

For brevity and convenience, we will often assume that the main
packages (numpy, scipy, and matplotlib) have been imported as::

    >>> import numpy as np
    >>> import scipy as sp
    >>> import matplotlib as mpl
    >>> import matplotlib.pyplot as plt

These are the import conventions that our community has adopted
after discussion on public mailing lists. You will see these
conventions used throughout NumPy and SciPy source code and
documentation. While we obviously don't require you to follow
these conventions in your own code, they are highly recommended.

SciPy Organization
------------------

SciPy is organized into subpackages covering different scientific
computing domains. These are summarized in the following table:

.. currentmodule:: scipy

================== ======================================================
Subpackage         Description
================== ======================================================
:mod:`cluster`     Clustering algorithms
:mod:`constants`   Physical and mathematical constants
:mod:`fftpack`     Fast Fourier Transform routines
:mod:`integrate`   Integration and ordinary differential equation solvers
:mod:`interpolate` Interpolation and smoothing splines
:mod:`io`          Input and Output
:mod:`linalg`      Linear algebra
:mod:`maxentropy`  Maximum entropy methods
:mod:`ndimage`     N-dimensional image processing
:mod:`odr`         Orthogonal distance regression
:mod:`optimize`    Optimization and root-finding routines
:mod:`signal`      Signal processing
:mod:`sparse`      Sparse matrices and associated routines
:mod:`spatial`     Spatial data structures and algorithms
:mod:`special`     Special functions
:mod:`stats`       Statistical distributions and functions
:mod:`weave`       C/C++ integration
================== ======================================================

Scipy sub-packages need to be imported separately, for example::

    >>> from scipy import linalg, optimize

Because of their ubiquity, some of the functions in these
subpackages are also made available in the scipy namespace to ease
their use in interactive sessions and programs. In addition, many
basic array functions from :mod:`numpy` are also available at the
top-level of the :mod:`scipy` package. Before looking at the
sub-packages individually, we will first look at some of these common
functions.

Finding Documentation
---------------------

Scipy and Numpy have HTML and PDF versions of their documentation
available at http://docs.scipy.org/, which currently details nearly
all available functionality. However, this documentation is still a
work in progress, and some parts may be incomplete or sparse. As
we are a volunteer organization and depend on the community for
growth, your participation - everything from providing feedback to
improving the documentation and code - is welcome and actively
encouraged.

Python also provides the facility of documentation strings. The
functions and classes available in SciPy use this method for on-line
documentation. There are two methods for reading these messages and
getting help. Python provides the command :func:`help` in the pydoc
module. Entering this command with no arguments (i.e. ``>>> help()``)
launches an interactive help session that allows searching through the
keywords and modules available to all of Python. Running the command
help with an object as the argument displays the calling signature,
and the documentation string of the object.

The pydoc method of help is sophisticated but uses a pager to display
the text. Sometimes this can interfere with the terminal you are
running the interactive session within. A scipy-specific help system
is also available under the command ``sp.info``. The signature and
documentation string for the object passed to the ``help`` command are
printed to standard output (or to a writeable object passed as the
third argument). The second keyword argument of ``sp.info`` defines
the maximum width of the line for printing. If a module is passed as
the argument to help, then a list of the functions and classes defined
in that module is printed. For example:

.. literalinclude:: examples/1-1

Another useful command is :func:`source`. When given a function
written in Python as an argument, it prints out a listing of the
source code for that function. This can be helpful in learning about
an algorithm or understanding exactly what a function is doing with
its arguments. Also don't forget about the Python command ``dir``,
which can be used to look at the namespace of a module or package.
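The same introspection that ``help``, ``sp.info``, and :func:`source` rely on is available from the standard library. The small example below (the function ``square`` is our own) shows the docstring extraction that these tools display:

```python
import inspect

def square(x):
    """Return x squared."""
    return x * x

doc = inspect.getdoc(square)   # the cleaned docstring, as help() would show it
names = dir(inspect)           # dir works on modules and packages alike
```

``inspect.getdoc`` normalizes indentation, which is why docstrings render cleanly in interactive help.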

@ -1,22 +0,0 @@

**************
SciPy Tutorial
**************

.. sectionauthor:: Travis E. Oliphant

.. toctree::
   :maxdepth: 1

   general
   basic
   special
   integrate
   optimize
   interpolate
   fftpack
   signal
   linalg
   stats
   ndimage
   io
   weave

@ -1,280 +0,0 @@

Integration (:mod:`scipy.integrate`)
====================================

.. sectionauthor:: Travis E. Oliphant

.. currentmodule:: scipy.integrate

The :mod:`scipy.integrate` sub-package provides several integration
techniques including an ordinary differential equation integrator. An
overview of the module is provided by the help command:

.. literalinclude:: examples/4-1


General integration (:func:`quad`)
----------------------------------

The function :obj:`quad` is provided to integrate a function of one
variable between two points. The points can be :math:`\pm\infty`
(:math:`\pm` ``inf``) to indicate infinite limits. For example,
suppose you wish to integrate the Bessel function ``jv(2.5, x)`` along
the interval :math:`[0,4.5].`

.. math::
   :nowrap:

   \[ I=\int_{0}^{4.5}J_{2.5}\left(x\right)\, dx.\]

This could be computed using :obj:`quad`:

>>> result = integrate.quad(lambda x: special.jv(2.5,x), 0, 4.5)
>>> print result
(1.1178179380783249, 7.8663172481899801e-09)

>>> I = sqrt(2/pi)*(18.0/27*sqrt(2)*cos(4.5)-4.0/27*sqrt(2)*sin(4.5)+
        sqrt(2*pi)*special.fresnel(3/sqrt(pi))[0])
>>> print I
1.117817938088701

>>> print abs(result[0]-I)
1.03761443881e-11

The first argument to quad is a "callable" Python object (*i.e.* a
function, method, or class instance). Notice the use of a lambda
function in this case as the argument. The next two arguments are the
limits of integration. The return value is a tuple, with the first
element holding the estimated value of the integral and the second
element holding an upper bound on the error. Notice that in this
case, the true value of this integral is

.. math::
   :nowrap:

   \[ I=\sqrt{\frac{2}{\pi}}\left(\frac{18}{27}\sqrt{2}\cos\left(4.5\right)-\frac{4}{27}\sqrt{2}\sin\left(4.5\right)+\sqrt{2\pi}\textrm{Si}\left(\frac{3}{\sqrt{\pi}}\right)\right),\]

where

.. math::
   :nowrap:

   \[ \textrm{Si}\left(x\right)=\int_{0}^{x}\sin\left(\frac{\pi}{2}t^{2}\right)\, dt\]

is the Fresnel sine integral. Note that the numerically-computed
integral is within :math:`1.04\times10^{-11}` of the exact result --- well
below the reported error bound.

Infinite limits are also allowed in :obj:`quad` by using :math:`\pm`
``inf`` as one of the arguments. For example, suppose that a numerical
value for the exponential integral

.. math::
   :nowrap:

   \[ E_{n}\left(x\right)=\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\, dt\]

is desired (and the fact that this integral can be computed as
``special.expn(n, x)`` is forgotten). The functionality of the function
:obj:`special.expn` can be replicated by defining a new function
:obj:`vec_expint` based on the routine :obj:`quad`:

>>> from scipy.integrate import quad
>>> def integrand(t, n, x):
...     return exp(-x*t) / t**n

>>> def expint(n, x):
...     return quad(integrand, 1, Inf, args=(n, x))[0]

>>> vec_expint = vectorize(expint)

>>> vec_expint(3, arange(1.0, 4.0, 0.5))
array([ 0.1097,  0.0567,  0.0301,  0.0163,  0.0089,  0.0049])
>>> special.expn(3, arange(1.0, 4.0, 0.5))
array([ 0.1097,  0.0567,  0.0301,  0.0163,  0.0089,  0.0049])

The function which is integrated can even use the quad argument
(though the error bound may underestimate the error due to possible
numerical error in the integrand from the use of :obj:`quad`). The
integral in this case is

.. math::
   :nowrap:

   \[ I_{n}=\int_{0}^{\infty}\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\, dt\, dx=\frac{1}{n}.\]

>>> result = quad(lambda x: expint(3, x), 0, inf)
>>> print result
(0.33333333324560266, 2.8548934485373678e-09)

>>> I3 = 1.0/3.0
>>> print I3
0.333333333333

>>> print I3 - result[0]
8.77306560731e-11

This last example shows that multiple integration can be handled using
repeated calls to :func:`quad`. The mechanics of this for double and
triple integration have been wrapped up into the functions
:obj:`dblquad` and :obj:`tplquad`. The function :obj:`dblquad`
performs double integration. Use the help function to be sure that the
arguments are defined in the correct order. In addition, the limits on
all inner integrals are actually functions (which can be constant
functions). An example of using double integration to compute several
values of :math:`I_{n}` is shown below:

>>> from scipy.integrate import quad, dblquad
>>> def I(n):
...     return dblquad(lambda t, x: exp(-x*t)/t**n, 0, Inf, lambda x: 1, lambda x: Inf)

>>> print I(4)
(0.25000000000435768, 1.0518245707751597e-09)
>>> print I(3)
(0.33333333325010883, 2.8604069919261191e-09)
>>> print I(2)
(0.49999999999857514, 1.8855523253868967e-09)


Gaussian quadrature (:obj:`fixed_quad`, :obj:`quadrature`)
----------------------------------------------------------

A few functions are also provided in order to perform simple Gaussian
quadrature over a fixed interval. The first is :obj:`fixed_quad`, which
performs fixed-order Gaussian quadrature. The second function is
:obj:`quadrature`, which performs Gaussian quadrature of multiple
orders until the difference in the integral estimate is beneath some
tolerance supplied by the user. These functions both use the module
:mod:`special.orthogonal`, which can calculate the roots and quadrature
weights of a large variety of orthogonal polynomials (the polynomials
themselves are available as special functions returning instances of
the polynomial class --- e.g. :obj:`special.legendre <scipy.special.legendre>`).
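A quick check of :obj:`fixed_quad` (the integrand is our own example): an :math:`n`-point Gaussian rule is exact for polynomials up to degree :math:`2n-1`, so a 3-point rule integrates :math:`x^4` over :math:`[0,1]` exactly.

```python
from scipy.integrate import fixed_quad

# integral of x^4 from 0 to 1 is exactly 0.2; degree 4 <= 2*3 - 1 = 5
val, _ = fixed_quad(lambda x: x**4, 0.0, 1.0, n=3)
```

With a higher-degree integrand the 3-point rule would only approximate; :obj:`quadrature` automates raising the order until a tolerance is met.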


Integrating using samples
-------------------------

There are three functions for computing integrals given only samples:
:obj:`trapz`, :obj:`simps`, and :obj:`romb`. The first two
functions use Newton-Cotes formulas of order 1 and 2 respectively to
perform integration. These two functions can handle
non-equally-spaced samples. The trapezoidal rule approximates the
function as a straight line between adjacent points, while Simpson's
rule approximates the function between three adjacent points as a
parabola.

If the samples are equally-spaced and the number of samples available
is :math:`2^{k}+1` for some integer :math:`k`, then Romberg
integration can be used to obtain high-precision estimates of the
integral using the available samples. Romberg integration uses the
trapezoid rule at step-sizes related by a power of two and then
performs Richardson extrapolation on these estimates to approximate
the integral with a higher degree of accuracy. (A different interface
to Romberg integration, useful when the function can be provided, is
also available as :func:`romberg`.)
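The trapezoidal rule that :obj:`trapz` implements can be written out directly (the sine example and sample count are our own choices): with :math:`2^4+1` equally spaced samples of :math:`\sin(x)` on :math:`[0,\pi]`, the rule should land close to the exact integral 2.

```python
import numpy as np

x = np.linspace(0.0, np.pi, 2**4 + 1)   # 17 equally spaced samples
y = np.sin(x)
dx = x[1] - x[0]
# order-1 Newton-Cotes (trapezoidal) rule on the samples
trap = dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
```

The trapezoid error here is of order :math:`h^2`, a bit over :math:`10^{-2}`; because the sample count is :math:`2^k+1`, the same samples could be fed to :obj:`romb` for a far more accurate Romberg estimate.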


Ordinary differential equations (:func:`odeint`)
------------------------------------------------

Integrating a set of ordinary differential equations (ODEs) given
initial conditions is another useful example. The function
:obj:`odeint` is available in SciPy for integrating a first-order
vector differential equation:

.. math::
   :nowrap:

   \[ \frac{d\mathbf{y}}{dt}=\mathbf{f}\left(\mathbf{y},t\right),\]

given initial conditions :math:`\mathbf{y}\left(0\right)=y_{0}`, where
:math:`\mathbf{y}` is a length :math:`N` vector and :math:`\mathbf{f}`
is a mapping from :math:`\mathcal{R}^{N}` to :math:`\mathcal{R}^{N}.`
A higher-order ordinary differential equation can always be reduced to
a differential equation of this type by introducing intermediate
derivatives into the :math:`\mathbf{y}` vector.

For example, suppose it is desired to find the solution to the
following second-order differential equation:

.. math::
   :nowrap:

   \[ \frac{d^{2}w}{dz^{2}}-zw(z)=0\]

with initial conditions :math:`w\left(0\right)=\frac{1}{\sqrt[3]{3^{2}}\Gamma\left(\frac{2}{3}\right)}` and :math:`\left.\frac{dw}{dz}\right|_{z=0}=-\frac{1}{\sqrt[3]{3}\Gamma\left(\frac{1}{3}\right)}.` It is known that the solution to this differential equation with these
boundary conditions is the Airy function

.. math::
   :nowrap:

   \[ w=\textrm{Ai}\left(z\right),\]

which gives a means to check the integrator using :func:`special.airy <scipy.special.airy>`.

First, convert this ODE into standard form by setting
:math:`\mathbf{y}=\left[\frac{dw}{dz},w\right]` and :math:`t=z`. Thus,
the differential equation becomes

.. math::
   :nowrap:

   \[ \frac{d\mathbf{y}}{dt}=\left[\begin{array}{c} ty_{1}\\ y_{0}\end{array}\right]=\left[\begin{array}{cc} 0 & t\\ 1 & 0\end{array}\right]\left[\begin{array}{c} y_{0}\\ y_{1}\end{array}\right]=\left[\begin{array}{cc} 0 & t\\ 1 & 0\end{array}\right]\mathbf{y}.\]

In other words,

.. math::
   :nowrap:

   \[ \mathbf{f}\left(\mathbf{y},t\right)=\mathbf{A}\left(t\right)\mathbf{y}.\]

As an interesting reminder, if :math:`\mathbf{A}\left(t\right)`
commutes with :math:`\int_{0}^{t}\mathbf{A}\left(\tau\right)\, d\tau`
under matrix multiplication, then this linear differential equation
has an exact solution using the matrix exponential:

.. math::
   :nowrap:

   \[ \mathbf{y}\left(t\right)=\exp\left(\int_{0}^{t}\mathbf{A}\left(\tau\right)d\tau\right)\mathbf{y}\left(0\right).\]

However, in this case, :math:`\mathbf{A}\left(t\right)` and its integral do not commute.
|
||||
|
||||
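As a small illustration of the commuting case (not part of the original text): for a *constant* matrix :math:`\mathbf{A}` the integral is :math:`\mathbf{A}t`, which trivially commutes with :math:`\mathbf{A}`, so the matrix-exponential solution agrees with a numerical integration by :obj:`odeint`:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import odeint

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # constant A: integral A*t commutes with A
y0 = np.array([1.0, 0.0])
t = np.linspace(0.0, 5.0, 51)

# numerical solution of dy/dt = A y
y_num = odeint(lambda y, t: A.dot(y), y0, t)
# exact solution y(t) = expm(A t) y(0)
y_exact = np.array([expm(A * ti).dot(y0) for ti in t])
err = np.abs(y_num - y_exact).max()  # agreement to solver tolerance
```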
There are many optional inputs and outputs available when using odeint
which can help tune the solver. These additional inputs and outputs
are not needed much of the time, however, and the three required input
arguments and the output solution suffice. The required inputs are the
function defining the derivative, *fprime*, the initial conditions
vector, *y0*, and the time points at which to obtain a solution, *t*
(with the initial value point as the first element of this
sequence). The output of :obj:`odeint` is a matrix where each row
contains the solution vector at each requested time point (thus, the
initial conditions are given in the first output row).

The following example illustrates the use of odeint, including the
usage of the *Dfun* option which allows the user to specify a gradient
(with respect to :math:`\mathbf{y}`) of the function,
:math:`\mathbf{f}\left(\mathbf{y},t\right)`.

>>> from numpy import arange
>>> from scipy.integrate import odeint
>>> from scipy.special import gamma, airy
>>> y1_0 = 1.0/3**(2.0/3.0)/gamma(2.0/3.0)
>>> y0_0 = -1.0/3**(1.0/3.0)/gamma(1.0/3.0)
>>> y0 = [y0_0, y1_0]
>>> def func(y, t):
...     return [t*y[1], y[0]]

>>> def gradient(y, t):
...     return [[0, t], [1, 0]]

>>> x = arange(0, 4.0, 0.01)
>>> t = x
>>> ychk = airy(x)[0]
>>> y = odeint(func, y0, t)
>>> y2 = odeint(func, y0, t, Dfun=gradient)

>>> print ychk[:36:6]
[ 0.355028 0.339511 0.324068 0.308763 0.293658 0.278806]

>>> print y[:36:6,1]
[ 0.355028 0.339511 0.324067 0.308763 0.293658 0.278806]

>>> print y2[:36:6,1]
[ 0.355028 0.339511 0.324067 0.308763 0.293658 0.278806]

Interpolation (:mod:`scipy.interpolate`)
========================================

.. sectionauthor:: Travis E. Oliphant

.. currentmodule:: scipy.interpolate

.. contents::

There are two general interpolation facilities available in SciPy. The
first facility is an interpolation class which performs linear
1-dimensional interpolation. The second facility is based on the
FORTRAN library FITPACK and provides functions for 1- and
2-dimensional (smoothed) cubic-spline interpolation. There are both
procedural and object-oriented interfaces for the FITPACK library.


Linear 1-d interpolation (:class:`interp1d`)
--------------------------------------------

The interp1d class in scipy.interpolate is a convenient method to
create a function based on fixed data points, which can be evaluated
anywhere within the domain defined by the given data using linear
interpolation. An instance of this class is created by passing the 1-d
vectors comprising the data. The instance of this class defines a
__call__ method and can therefore be treated like a function which
interpolates between known data values to obtain unknown values (it
also has a docstring for help). Behavior at the boundary can be
specified at instantiation time. The following example demonstrates
its use.

.. plot::

   >>> import numpy as np
   >>> from scipy import interpolate

   >>> x = np.arange(0,10)
   >>> y = np.exp(-x/3.0)
   >>> f = interpolate.interp1d(x, y)

   >>> xnew = np.arange(0,9,0.1)
   >>> import matplotlib.pyplot as plt
   >>> plt.plot(x,y,'o',xnew,f(xnew),'-')

.. :caption: One-dimensional interpolation using the
.. class :obj:`interpolate.interp1d`

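Boundary behavior can be controlled at instantiation time; a short sketch (not part of the example above) using the ``bounds_error`` and ``fill_value`` keywords:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.arange(0, 10)
y = np.exp(-x / 3.0)
# out-of-range queries return fill_value instead of raising an error
f = interp1d(x, y, bounds_error=False, fill_value=0.0)
vals = f([-1.0, 4.5, 20.0])   # first and last points fall outside [0, 9]
```

Only the middle query is interpolated from the data; the two out-of-range queries return the fill value.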
Spline interpolation in 1-d: Procedural (interpolate.splXXX)
------------------------------------------------------------

Spline interpolation requires two essential steps: (1) a spline
representation of the curve is computed, and (2) the spline is
evaluated at the desired points. In order to find the spline
representation, there are two different ways to represent a curve and
obtain (smoothing) spline coefficients: directly and parametrically.
The direct method finds the spline representation of a curve in a
two-dimensional plane using the function :obj:`splrep`. The
first two arguments are the only ones required, and these provide the
:math:`x` and :math:`y` components of the curve. The normal output is
a 3-tuple, :math:`\left(t,c,k\right)` , containing the knot-points,
:math:`t` , the coefficients :math:`c` and the order :math:`k` of the
spline. The default spline order is cubic, but this can be changed
with the input keyword, *k.*

For curves in :math:`N` -dimensional space the function
:obj:`splprep` allows defining the curve
parametrically. For this function only 1 input argument is
required. This input is a list of :math:`N` arrays representing the
curve in :math:`N` -dimensional space. The length of each array is the
number of curve points, and each array provides one component of the
:math:`N` -dimensional data point. The parameter variable is given
with the keyword argument, *u,* which defaults to an equally-spaced
monotonic sequence between :math:`0` and :math:`1` . The default
output consists of two objects: a 3-tuple, :math:`\left(t,c,k\right)` ,
containing the spline representation, and the parameter variable
:math:`u.`

The keyword argument, *s* , is used to specify the amount of smoothing
to perform during the spline fit. The default value of :math:`s` is
:math:`s=m-\sqrt{2m}` where :math:`m` is the number of data-points
being fit. Therefore, **if no smoothing is desired a value of**
:math:`\mathbf{s}=0` **should be passed to the routines.**

Once the spline representation of the data has been determined,
functions are available for evaluating the spline
(:func:`splev`) and its derivatives
(:func:`splev`, :func:`spalde`) at any point
and the integral of the spline between any two points
(:func:`splint`). In addition, for cubic splines ( :math:`k=3` )
with 8 or more knots, the roots of the spline can be estimated
(:func:`sproot`). These functions are demonstrated in the
example that follows.

.. plot::

   >>> import numpy as np
   >>> import matplotlib.pyplot as plt
   >>> from scipy import interpolate

   Cubic-spline

   >>> x = np.arange(0,2*np.pi+np.pi/4,2*np.pi/8)
   >>> y = np.sin(x)
   >>> tck = interpolate.splrep(x,y,s=0)
   >>> xnew = np.arange(0,2*np.pi,np.pi/50)
   >>> ynew = interpolate.splev(xnew,tck,der=0)

   >>> plt.figure()
   >>> plt.plot(x,y,'x',xnew,ynew,xnew,np.sin(xnew),x,y,'b')
   >>> plt.legend(['Linear','Cubic Spline', 'True'])
   >>> plt.axis([-0.05,6.33,-1.05,1.05])
   >>> plt.title('Cubic-spline interpolation')
   >>> plt.show()

   Derivative of spline

   >>> yder = interpolate.splev(xnew,tck,der=1)
   >>> plt.figure()
   >>> plt.plot(xnew,yder,xnew,np.cos(xnew),'--')
   >>> plt.legend(['Cubic Spline', 'True'])
   >>> plt.axis([-0.05,6.33,-1.05,1.05])
   >>> plt.title('Derivative estimation from spline')
   >>> plt.show()

   Integral of spline

   >>> def integ(x,tck,constant=-1):
   ...     x = np.atleast_1d(x)
   ...     out = np.zeros(x.shape, dtype=x.dtype)
   ...     for n in xrange(len(out)):
   ...         out[n] = interpolate.splint(0,x[n],tck)
   ...     out += constant
   ...     return out

   >>> yint = integ(xnew,tck)
   >>> plt.figure()
   >>> plt.plot(xnew,yint,xnew,-np.cos(xnew),'--')
   >>> plt.legend(['Cubic Spline', 'True'])
   >>> plt.axis([-0.05,6.33,-1.05,1.05])
   >>> plt.title('Integral estimation from spline')
   >>> plt.show()

   Roots of spline

   >>> print interpolate.sproot(tck)
   [ 0. 3.1416]

   Parametric spline

   >>> t = np.arange(0,1.1,.1)
   >>> x = np.sin(2*np.pi*t)
   >>> y = np.cos(2*np.pi*t)
   >>> tck,u = interpolate.splprep([x,y],s=0)
   >>> unew = np.arange(0,1.01,0.01)
   >>> out = interpolate.splev(unew,tck)
   >>> plt.figure()
   >>> plt.plot(x,y,'x',out[0],out[1],np.sin(2*np.pi*unew),np.cos(2*np.pi*unew),x,y,'b')
   >>> plt.legend(['Linear','Cubic Spline', 'True'])
   >>> plt.axis([-1.05,1.05,-1.05,1.05])
   >>> plt.title('Spline of parametrically-defined curve')
   >>> plt.show()

Spline interpolation in 1-d: Object-oriented (:class:`UnivariateSpline`)
------------------------------------------------------------------------

The spline-fitting capabilities described above are also available via
an object-oriented interface. The one-dimensional splines are
objects of the `UnivariateSpline` class, and are created with the
:math:`x` and :math:`y` components of the curve provided as arguments
to the constructor. The class defines __call__, allowing the object
to be called with the x-axis values at which the spline should be
evaluated, returning the interpolated y-values. This is shown in
the example below for the subclass `InterpolatedUnivariateSpline`.
The :meth:`integral <UnivariateSpline.integral>`,
:meth:`derivatives <UnivariateSpline.derivatives>`, and
:meth:`roots <UnivariateSpline.roots>` methods are also available
on `UnivariateSpline` objects, allowing definite integrals,
derivatives, and roots to be computed for the spline.

The UnivariateSpline class can also be used to smooth data by
providing a non-zero value of the smoothing parameter `s`, with the
same meaning as the `s` keyword of the :obj:`splrep` function
described above. This results in a spline that has fewer knots
than the number of data points, and hence is no longer strictly
an interpolating spline, but rather a smoothing spline. If this
is not desired, the `InterpolatedUnivariateSpline` class is available.
It is a subclass of `UnivariateSpline` that always passes through all
points (equivalent to forcing the smoothing parameter to 0). This
class is demonstrated in the example below.

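The knot-count difference between the smoothing and interpolating variants can be checked directly with the `get_knots` method (a short sketch, not part of the original text; the "noisy" samples are a deterministic stand-in for measurement noise):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + 0.1 * np.cos(17 * x)     # deterministic "noisy" samples

smooth = UnivariateSpline(x, y, s=1.0)   # smoothing spline (s > 0)
interp = UnivariateSpline(x, y, s=0)     # interpolating spline (s = 0)

# smoothing discards knots; interpolation keeps many more of them
n_smooth = len(smooth.get_knots())
n_interp = len(interp.get_knots())
```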
The `LSQUnivariateSpline` is the other subclass of `UnivariateSpline`.
It allows the user to specify the number and location of internal
knots explicitly with the parameter `t`. This allows creation
of customized splines with non-linear spacing, to interpolate in
some domains and smooth in others, or change the character of the
spline.

.. plot::

   >>> import numpy as np
   >>> import matplotlib.pyplot as plt
   >>> from scipy import interpolate

   InterpolatedUnivariateSpline

   >>> x = np.arange(0,2*np.pi+np.pi/4,2*np.pi/8)
   >>> y = np.sin(x)
   >>> s = interpolate.InterpolatedUnivariateSpline(x,y)
   >>> xnew = np.arange(0,2*np.pi,np.pi/50)
   >>> ynew = s(xnew)

   >>> plt.figure()
   >>> plt.plot(x,y,'x',xnew,ynew,xnew,np.sin(xnew),x,y,'b')
   >>> plt.legend(['Linear','InterpolatedUnivariateSpline', 'True'])
   >>> plt.axis([-0.05,6.33,-1.05,1.05])
   >>> plt.title('InterpolatedUnivariateSpline')
   >>> plt.show()

   LSQUnivariateSpline with non-uniform knots

   >>> t = [np.pi/2-.1,np.pi/2+.1,3*np.pi/2-.1,3*np.pi/2+.1]
   >>> s = interpolate.LSQUnivariateSpline(x,y,t)
   >>> ynew = s(xnew)

   >>> plt.figure()
   >>> plt.plot(x,y,'x',xnew,ynew,xnew,np.sin(xnew),x,y,'b')
   >>> plt.legend(['Linear','LSQUnivariateSpline', 'True'])
   >>> plt.axis([-0.05,6.33,-1.05,1.05])
   >>> plt.title('Spline with Specified Interior Knots')
   >>> plt.show()

Two-dimensional spline representation: Procedural (:func:`bisplrep`)
--------------------------------------------------------------------

For (smooth) spline-fitting to a two-dimensional surface, the function
:func:`bisplrep` is available. This function takes as required inputs
the **1-D** arrays *x*, *y*, and *z* which represent points on the
surface :math:`z=f\left(x,y\right).` The default output is a list
:math:`\left[tx,ty,c,kx,ky\right]` whose entries represent,
respectively, the components of the knot positions, the coefficients
of the spline, and the order of the spline in each coordinate. It is
convenient to hold this list in a single object, *tck,* so that it can
be passed easily to the function :obj:`bisplev`. The
keyword, *s* , can be used to change the amount of smoothing performed
on the data while determining the appropriate spline. The default
value is :math:`s=m-\sqrt{2m}` where :math:`m` is the number of data
points in the *x, y,* and *z* vectors. As a result, if no smoothing is
desired, then :math:`s=0` should be passed to
:obj:`bisplrep`.

To evaluate the two-dimensional spline and its partial derivatives
(up to the order of the spline), the function
:obj:`bisplev` is required. This function takes as the
first two arguments **two 1-D arrays** whose cross-product specifies
the domain over which to evaluate the spline. The third argument is
the *tck* list returned from :obj:`bisplrep`. If desired,
the fourth and fifth arguments provide the orders of the partial
derivative in the :math:`x` and :math:`y` direction, respectively.

It is important to note that two-dimensional interpolation should not
be used to find the spline representation of images. The algorithm
used is not amenable to large numbers of input points. The signal
processing toolbox contains more appropriate algorithms for finding
the spline representation of an image. The two-dimensional
interpolation commands are intended for use when interpolating a
two-dimensional function as shown in the example that follows. This
example uses the :obj:`mgrid <numpy.mgrid>` command in SciPy which is
useful for defining a "mesh-grid" in many dimensions. (See also the
:obj:`ogrid <numpy.ogrid>` command if the full mesh is not
needed.) The number of output arguments and the number of dimensions
of each argument is determined by the number of indexing objects
passed in :obj:`mgrid <numpy.mgrid>`.

.. plot::

   >>> import numpy as np
   >>> from scipy import interpolate
   >>> import matplotlib.pyplot as plt

   Define function over sparse 20x20 grid

   >>> x,y = np.mgrid[-1:1:20j,-1:1:20j]
   >>> z = (x+y)*np.exp(-6.0*(x*x+y*y))

   >>> plt.figure()
   >>> plt.pcolor(x,y,z)
   >>> plt.colorbar()
   >>> plt.title("Sparsely sampled function.")
   >>> plt.show()

   Interpolate function over new 70x70 grid

   >>> xnew,ynew = np.mgrid[-1:1:70j,-1:1:70j]
   >>> tck = interpolate.bisplrep(x,y,z,s=0)
   >>> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck)

   >>> plt.figure()
   >>> plt.pcolor(xnew,ynew,znew)
   >>> plt.colorbar()
   >>> plt.title("Interpolated function.")
   >>> plt.show()

.. :caption: Example of two-dimensional spline interpolation.

Two-dimensional spline representation: Object-oriented (:class:`BivariateSpline`)
---------------------------------------------------------------------------------

The :class:`BivariateSpline` class is the 2-dimensional analog of the
:class:`UnivariateSpline` class. It and its subclasses implement
the FITPACK functions described above in an object-oriented fashion,
allowing objects to be instantiated that can be called to compute
the spline value by passing in the two coordinates as the two
arguments.

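For instance, the subclass `SmoothBivariateSpline` can be fit to scattered data and then evaluated by coordinate (a sketch, not part of the original text; the surface is the same one used in the procedural example):

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

x, y = np.mgrid[-1:1:20j, -1:1:20j]
z = (x + y) * np.exp(-6.0 * (x * x + y * y))

# fit to the flattened scatter of points, then call to evaluate
spl = SmoothBivariateSpline(x.ravel(), y.ravel(), z.ravel())
value = spl(0.5, 0.5)   # evaluation on the cross-product of coordinates
```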
Using radial basis functions for smoothing/interpolation
--------------------------------------------------------

Radial basis functions can be used for smoothing/interpolating scattered
data in n-dimensions, but should be used with caution for extrapolation
outside of the observed data range.

1-d Example
^^^^^^^^^^^

This example compares the usage of the Rbf and UnivariateSpline classes
from the scipy.interpolate module.

.. plot::

   >>> import numpy as np
   >>> from scipy.interpolate import Rbf, InterpolatedUnivariateSpline
   >>> import matplotlib.pyplot as plt

   >>> # setup data
   >>> x = np.linspace(0, 10, 9)
   >>> y = np.sin(x)
   >>> xi = np.linspace(0, 10, 101)

   >>> # use fitpack2 method
   >>> ius = InterpolatedUnivariateSpline(x, y)
   >>> yi = ius(xi)

   >>> plt.subplot(2, 1, 1)
   >>> plt.plot(x, y, 'bo')
   >>> plt.plot(xi, yi, 'g')
   >>> plt.plot(xi, np.sin(xi), 'r')
   >>> plt.title('Interpolation using univariate spline')

   >>> # use RBF method
   >>> rbf = Rbf(x, y)
   >>> fi = rbf(xi)

   >>> plt.subplot(2, 1, 2)
   >>> plt.plot(x, y, 'bo')
   >>> plt.plot(xi, fi, 'g')
   >>> plt.plot(xi, np.sin(xi), 'r')
   >>> plt.title('Interpolation using RBF - multiquadrics')
   >>> plt.show()

.. :caption: Example of one-dimensional RBF interpolation.

2-d Example
^^^^^^^^^^^

This example shows how to interpolate scattered 2-d data.

.. plot::

   >>> import numpy as np
   >>> from scipy.interpolate import Rbf
   >>> import matplotlib.pyplot as plt
   >>> from matplotlib import cm

   >>> # 2-d tests - setup scattered data
   >>> x = np.random.rand(100)*4.0-2.0
   >>> y = np.random.rand(100)*4.0-2.0
   >>> z = x*np.exp(-x**2-y**2)
   >>> ti = np.linspace(-2.0, 2.0, 100)
   >>> XI, YI = np.meshgrid(ti, ti)

   >>> # use RBF
   >>> rbf = Rbf(x, y, z, epsilon=2)
   >>> ZI = rbf(XI, YI)

   >>> # plot the result
   >>> n = plt.normalize(-2., 2.)
   >>> plt.subplot(1, 1, 1)
   >>> plt.pcolor(XI, YI, ZI, cmap=cm.jet)
   >>> plt.scatter(x, y, 100, z, cmap=cm.jet)
   >>> plt.title('RBF interpolation - multiquadrics')
   >>> plt.xlim(-2, 2)
   >>> plt.ylim(-2, 2)
   >>> plt.colorbar()

File IO (:mod:`scipy.io`)
=========================

.. sectionauthor:: Matthew Brett

.. currentmodule:: scipy.io

.. seealso:: :ref:`numpy-reference.routines.io` (in numpy)

Matlab files
------------

.. autosummary::
   :toctree: generated/

   loadmat
   savemat

Getting started:

>>> import scipy.io as sio

If you are using IPython, try tab completing on ``sio``. You'll find::

   sio.loadmat
   sio.savemat

These are the high-level functions you will most likely use. You'll also find::

   sio.matlab

This is the package from which ``loadmat`` and ``savemat`` are imported.
Within ``sio.matlab``, you will find the ``mio`` module, containing
the machinery that ``loadmat`` and ``savemat`` use. From time to time
you may find yourself re-using this machinery.

How do I start?
```````````````

You may have a ``.mat`` file that you want to read into Scipy. Or, you
want to pass some variables from Scipy / Numpy into Matlab.

To save us using a Matlab license, let's start in Octave_. Octave has
Matlab-compatible save / load functions. Start Octave (``octave`` at
the command line for me):

.. sourcecode:: octave

   octave:1> a = 1:12
   a =

      1   2   3   4   5   6   7   8   9  10  11  12

   octave:2> a = reshape(a, [1 3 4])
   a =

   ans(:,:,1) =

      1   2   3

   ans(:,:,2) =

      4   5   6

   ans(:,:,3) =

      7   8   9

   ans(:,:,4) =

      10   11   12

   octave:3> save -6 octave_a.mat a % Matlab 6 compatible
   octave:4> ls octave_a.mat
   octave_a.mat

Now, to Python:

>>> mat_contents = sio.loadmat('octave_a.mat')
>>> print mat_contents
{'a': array([[[ 1., 4., 7., 10.],
        [ 2., 5., 8., 11.],
        [ 3., 6., 9., 12.]]]), '__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, written by Octave 3.2.3, 2010-05-30 02:13:40 UTC', '__globals__': []}
>>> oct_a = mat_contents['a']
>>> print oct_a
[[[ 1. 4. 7. 10.]
  [ 2. 5. 8. 11.]
  [ 3. 6. 9. 12.]]]
>>> print oct_a.shape
(1, 3, 4)

Now let's try the other way round:

>>> import numpy as np
>>> vect = np.arange(10)
>>> print vect.shape
(10,)
>>> sio.savemat('np_vector.mat', {'vect':vect})
/Users/mb312/usr/local/lib/python2.6/site-packages/scipy/io/matlab/mio.py:196: FutureWarning: Using oned_as default value ('column') This will change to 'row' in future versions
  oned_as=oned_as)

Then back to Octave:

.. sourcecode:: octave

   octave:5> load np_vector.mat
   octave:6> vect
   vect =

      0
      1
      2
      3
      4
      5
      6
      7
      8
      9

   octave:7> size(vect)
   ans =

      10   1

Note the deprecation warning. The ``oned_as`` keyword determines the way in
which one-dimensional vectors are stored. In the future, this will default
to ``row`` instead of ``column``:

>>> sio.savemat('np_vector.mat', {'vect':vect}, oned_as='row')

We can load this in Octave or Matlab:

.. sourcecode:: octave

   octave:8> load np_vector.mat
   octave:9> vect
   vect =

      0   1   2   3   4   5   6   7   8   9

   octave:10> size(vect)
   ans =

      1   10

Matlab structs
``````````````

Matlab structs are a little bit like Python dicts, except the field
names must be strings. Any Matlab object can be a value of a field. As
for all objects in Matlab, structs are in fact arrays of structs, where
a single struct is an array of shape (1, 1).

.. sourcecode:: octave

   octave:11> my_struct = struct('field1', 1, 'field2', 2)
   my_struct =
   {
     field1 = 1
     field2 = 2
   }

   octave:12> save -6 octave_struct.mat my_struct

We can load this in Python:

>>> mat_contents = sio.loadmat('octave_struct.mat')
>>> print mat_contents
{'my_struct': array([[([[1.0]], [[2.0]])]],
      dtype=[('field1', '|O8'), ('field2', '|O8')]), '__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, written by Octave 3.2.3, 2010-05-30 02:00:26 UTC', '__globals__': []}
>>> oct_struct = mat_contents['my_struct']
>>> print oct_struct.shape
(1, 1)
>>> val = oct_struct[0,0]
>>> print val
([[1.0]], [[2.0]])
>>> print val['field1']
[[ 1.]]
>>> print val['field2']
[[ 2.]]
>>> print val.dtype
[('field1', '|O8'), ('field2', '|O8')]

In this version of Scipy (0.8.0), Matlab structs come back as numpy
structured arrays, with fields named for the struct fields. You can see
the field names in the ``dtype`` output above. Note also:

>>> val = oct_struct[0,0]

and:

.. sourcecode:: octave

   octave:13> size(my_struct)
   ans =

      1   1

So, in Matlab, the struct array must be at least 2D, and we replicate
that when we read into Scipy. If you want all length 1 dimensions
squeezed out, try this:

>>> mat_contents = sio.loadmat('octave_struct.mat', squeeze_me=True)
>>> oct_struct = mat_contents['my_struct']
>>> oct_struct.shape
()

Sometimes, it's more convenient to load the Matlab structs as Python
objects rather than numpy structured arrays - it can make the access
syntax in Python a bit more similar to that in Matlab. In order to do
this, use the ``struct_as_record=False`` parameter to ``loadmat``.

>>> mat_contents = sio.loadmat('octave_struct.mat', struct_as_record=False)
>>> oct_struct = mat_contents['my_struct']
>>> oct_struct[0,0].field1
array([[ 1.]])

``struct_as_record=False`` works nicely with ``squeeze_me``:

>>> mat_contents = sio.loadmat('octave_struct.mat', struct_as_record=False, squeeze_me=True)
>>> oct_struct = mat_contents['my_struct']
>>> oct_struct.shape # but no - it's a scalar
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'mat_struct' object has no attribute 'shape'
>>> print type(oct_struct)
<class 'scipy.io.matlab.mio5_params.mat_struct'>
>>> print oct_struct.field1
1.0

Saving struct arrays can be done in various ways. One simple method is
to use dicts:

>>> a_dict = {'field1': 0.5, 'field2': 'a string'}
>>> sio.savemat('saved_struct.mat', {'a_dict': a_dict})

loaded as:

.. sourcecode:: octave

   octave:21> load saved_struct
   octave:22> a_dict
   a_dict =
   {
     field2 = a string
     field1 = 0.50000
   }

You can also save structs back again to Matlab (or Octave in our case)
like this:

>>> dt = [('f1', 'f8'), ('f2', 'S10')]
>>> arr = np.zeros((2,), dtype=dt)
>>> print arr
[(0.0, '') (0.0, '')]
>>> arr[0]['f1'] = 0.5
>>> arr[0]['f2'] = 'python'
>>> arr[1]['f1'] = 99
>>> arr[1]['f2'] = 'not perl'
>>> sio.savemat('np_struct_arr.mat', {'arr': arr})

Matlab cell arrays
``````````````````

Cell arrays in Matlab are rather like Python lists, in the sense that
the elements in the arrays can contain any type of Matlab object. In
fact they are most similar to numpy object arrays, and that is how we
load them into numpy.

.. sourcecode:: octave

   octave:14> my_cells = {1, [2, 3]}
   my_cells =

   {
     [1,1] = 1
     [1,2] =

        2   3

   }

   octave:15> save -6 octave_cells.mat my_cells

Back to Python:

>>> mat_contents = sio.loadmat('octave_cells.mat')
>>> oct_cells = mat_contents['my_cells']
>>> print oct_cells.dtype
object
>>> val = oct_cells[0,0]
>>> print val
[[ 1.]]
>>> print val.dtype
float64

Saving to a Matlab cell array just involves making a numpy object array:

>>> obj_arr = np.zeros((2,), dtype=np.object)
>>> obj_arr[0] = 1
>>> obj_arr[1] = 'a string'
>>> print obj_arr
[1 a string]
>>> sio.savemat('np_cells.mat', {'obj_arr':obj_arr})

.. sourcecode:: octave

   octave:16> load np_cells.mat
   octave:17> obj_arr
   obj_arr =

   {
     [1,1] = 1
     [2,1] = a string
   }

Matrix Market files
-------------------

.. autosummary::
   :toctree: generated/

   mminfo
   mmread
   mmwrite

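A minimal round-trip sketch (not part of the original text; the temporary file path is arbitrary):

```python
import os
import tempfile
import numpy as np
from scipy.io import mmwrite, mmread
from scipy.sparse import csr_matrix

a = csr_matrix(np.array([[1.0, 0.0],
                         [0.0, 2.0]]))
path = os.path.join(tempfile.mkdtemp(), 'example.mtx')
mmwrite(path, a)        # write in Matrix Market exchange format
b = mmread(path)        # read back (returned as a sparse COO matrix)
ok = np.allclose(a.toarray(), b.toarray())
```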
Other
-----

.. autosummary::
   :toctree: generated/

   save_as_module

Wav sound files (:mod:`scipy.io.wavfile`)
-----------------------------------------

.. module:: scipy.io.wavfile

.. autosummary::
   :toctree: generated/

   read
   write

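A short write/read sketch (not part of the original text): generate one second of a 440 Hz tone as 16-bit PCM, save it, and read it back losslessly.

```python
import os
import tempfile
import numpy as np
from scipy.io import wavfile

rate = 8000                                    # samples per second
t = np.linspace(0.0, 1.0, rate, endpoint=False)
tone = (0.5 * np.sin(2 * np.pi * 440.0 * t) * 32767).astype(np.int16)

path = os.path.join(tempfile.mkdtemp(), 'tone.wav')
wavfile.write(path, rate, tone)                # 16-bit PCM mono
rate_back, data_back = wavfile.read(path)      # integer data round-trips exactly
```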
Arff files (:mod:`scipy.io.arff`)
---------------------------------

.. automodule:: scipy.io.arff

.. autosummary::
   :toctree: generated/

   loadarff

Netcdf (:mod:`scipy.io.netcdf`)
-------------------------------

.. module:: scipy.io.netcdf

.. autosummary::
   :toctree: generated/

   netcdf_file

Allows reading of NetCDF files (version of pupynere_ package)

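A minimal write/read sketch (not part of the original text; later SciPy versions expose the class as ``scipy.io.netcdf_file``, so the import is hedged to cover both spellings):

```python
import os
import tempfile
import numpy as np
try:
    from scipy.io import netcdf_file          # newer spelling
except ImportError:
    from scipy.io.netcdf import netcdf_file   # spelling in this version

path = os.path.join(tempfile.mkdtemp(), 'example.nc')
f = netcdf_file(path, 'w')
f.createDimension('time', 3)
v = f.createVariable('temperature', 'd', ('time',))
v[:] = [280.0, 281.5, 283.0]
f.close()

g = netcdf_file(path, 'r')
vals = g.variables['temperature'][:].copy()   # copy: data is mmap-backed
g.close()
```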
.. _pupynere: http://pypi.python.org/pypi/pupynere/
.. _octave: http://www.gnu.org/software/octave
.. _matlab: http://www.mathworks.com/

Linear Algebra
==============

.. sectionauthor:: Travis E. Oliphant

.. currentmodule:: scipy

When SciPy is built using the optimized ATLAS LAPACK and BLAS
libraries, it has very fast linear algebra capabilities. If you dig
deep enough, all of the raw lapack and blas libraries are available
for your use for even more speed. In this section, some easier-to-use
interfaces to these routines are described.

All of these linear algebra routines expect an object that can be
converted into a 2-dimensional array. The output of these routines is
also a two-dimensional array. There is a matrix class defined in
Numpy, which you can initialize with an appropriate Numpy array in
order to get objects for which multiplication is matrix-multiplication
instead of the default, element-by-element multiplication.


Matrix Class
------------

The matrix class is initialized with the SciPy command :obj:`mat`
which is just convenient short-hand for :class:`matrix
<numpy.matrix>`. If you are going to be doing a lot of matrix math, it
is convenient to convert arrays into matrices using this command. One
advantage of using the :func:`mat` command is that you can enter
two-dimensional matrices using MATLAB-like syntax with commas or
spaces separating columns and semicolons separating rows, as long as
the matrix is placed in a string passed to :obj:`mat`.

|
||||
|
||||
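As a minimal sketch of this string syntax (using :class:`numpy.matrix` directly, which :obj:`mat` abbreviates; plain arrays with ``@`` are generally preferred in modern code):

```python
import numpy as np

# Build a matrix from a MATLAB-like string: spaces separate columns,
# semicolons separate rows.
A = np.matrix('1 3 5; 2 5 1; 2 3 8')

# With matrix objects, * is matrix multiplication and .I is the inverse.
product = A * A.I

print(A.shape)                          # (3, 3)
print(np.allclose(product, np.eye(3)))  # True
```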
Basic routines
--------------


Finding Inverse
^^^^^^^^^^^^^^^

The inverse of a matrix :math:`\mathbf{A}` is the matrix
:math:`\mathbf{B}` such that :math:`\mathbf{AB}=\mathbf{I}` where
:math:`\mathbf{I}` is the identity matrix consisting of ones down the
main diagonal. Usually :math:`\mathbf{B}` is denoted
:math:`\mathbf{B}=\mathbf{A}^{-1}`. In SciPy, the matrix inverse of
the NumPy array, A, is obtained using :obj:`linalg.inv` ``(A)``, or
using ``A.I`` if ``A`` is a Matrix. For example, let

.. math::
   :nowrap:

   \[ \mathbf{A=}\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]\]

then

.. math::
   :nowrap:

   \[ \mathbf{A^{-1}=\frac{1}{25}\left[\begin{array}{ccc} -37 & 9 & 22\\ 14 & 2 & -9\\ 4 & -3 & 1\end{array}\right]=\left[\begin{array}{ccc} -1.48 & 0.36 & 0.88\\ 0.56 & 0.08 & -0.36\\ 0.16 & -0.12 & 0.04\end{array}\right].}\]

The following example demonstrates this computation in SciPy

>>> A = mat('[1 3 5; 2 5 1; 2 3 8]')
>>> A
matrix([[1, 3, 5],
        [2, 5, 1],
        [2, 3, 8]])
>>> A.I
matrix([[-1.48,  0.36,  0.88],
        [ 0.56,  0.08, -0.36],
        [ 0.16, -0.12,  0.04]])
>>> from scipy import linalg
>>> linalg.inv(A)
array([[-1.48,  0.36,  0.88],
       [ 0.56,  0.08, -0.36],
       [ 0.16, -0.12,  0.04]])

Solving linear system
^^^^^^^^^^^^^^^^^^^^^

Solving linear systems of equations is straightforward using the scipy
command :obj:`linalg.solve`. This command expects an input matrix and
a right-hand-side vector. The solution vector is then computed. An
option for entering a symmetric matrix is offered, which can speed up
the processing when applicable. As an example, suppose it is desired
to solve the following simultaneous equations:

.. math::
   :nowrap:

   \begin{eqnarray*} x+3y+5z & = & 10\\ 2x+5y+z & = & 8\\ 2x+3y+8z & = & 3\end{eqnarray*}

We could find the solution vector using a matrix inverse:

.. math::
   :nowrap:

   \[ \left[\begin{array}{c} x\\ y\\ z\end{array}\right]=\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]^{-1}\left[\begin{array}{c} 10\\ 8\\ 3\end{array}\right]=\frac{1}{25}\left[\begin{array}{c} -232\\ 129\\ 19\end{array}\right]=\left[\begin{array}{c} -9.28\\ 5.16\\ 0.76\end{array}\right].\]

However, it is better to use the linalg.solve command, which can be
faster and more numerically stable. In this case it gives the
same answer, as shown in the following example:

>>> A = mat('[1 3 5; 2 5 1; 2 3 8]')
>>> b = mat('[10;8;3]')
>>> A.I*b
matrix([[-9.28],
        [ 5.16],
        [ 0.76]])
>>> linalg.solve(A,b)
array([[-9.28],
       [ 5.16],
       [ 0.76]])

Finding Determinant
^^^^^^^^^^^^^^^^^^^

The determinant of a square matrix :math:`\mathbf{A}` is often denoted
:math:`\left|\mathbf{A}\right|` and is a quantity often used in linear
algebra. Suppose :math:`a_{ij}` are the elements of the matrix
:math:`\mathbf{A}` and let :math:`M_{ij}=\left|\mathbf{A}_{ij}\right|`
be the determinant of the matrix obtained by removing the
:math:`i^{\textrm{th}}` row and :math:`j^{\textrm{th}}` column from
:math:`\mathbf{A}`. Then for any row :math:`i,`

.. math::
   :nowrap:

   \[ \left|\mathbf{A}\right|=\sum_{j}\left(-1\right)^{i+j}a_{ij}M_{ij}.\]

This is a recursive way to define the determinant, where the base case
is the determinant of a :math:`1\times1` matrix, which is just the
single matrix element. In SciPy the determinant can be
calculated with :obj:`linalg.det`. For example, the determinant of

.. math::
   :nowrap:

   \[ \mathbf{A=}\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]\]

is

.. math::
   :nowrap:

   \begin{eqnarray*} \left|\mathbf{A}\right| & = & 1\left|\begin{array}{cc} 5 & 1\\ 3 & 8\end{array}\right|-3\left|\begin{array}{cc} 2 & 1\\ 2 & 8\end{array}\right|+5\left|\begin{array}{cc} 2 & 5\\ 2 & 3\end{array}\right|\\ & = & 1\left(5\cdot8-3\cdot1\right)-3\left(2\cdot8-2\cdot1\right)+5\left(2\cdot3-2\cdot5\right)=-25.\end{eqnarray*}

In SciPy this is computed as shown in this example:

>>> A = mat('[1 3 5; 2 5 1; 2 3 8]')
>>> linalg.det(A)
-25.000000000000004

Computing norms
^^^^^^^^^^^^^^^

Matrix and vector norms can also be computed with SciPy. A wide range
of norm definitions are available using different parameters to the
order argument of :obj:`linalg.norm`. This function takes a rank-1
(vectors) or a rank-2 (matrices) array and an optional order argument
(default is 2). Based on these inputs, a vector or matrix norm of the
requested order is computed.

For vector *x*, the order parameter can be any real number including
``inf`` or ``-inf``. The computed norm is

.. math::
   :nowrap:

   \[ \left\Vert \mathbf{x}\right\Vert =\left\{ \begin{array}{cc} \max\left|x_{i}\right| & \textrm{ord}=\textrm{inf}\\ \min\left|x_{i}\right| & \textrm{ord}=-\textrm{inf}\\ \left(\sum_{i}\left|x_{i}\right|^{\textrm{ord}}\right)^{1/\textrm{ord}} & \left|\textrm{ord}\right|<\infty.\end{array}\right.\]

For matrix :math:`\mathbf{A}` the only valid values for norm are :math:`\pm2,\pm1,` :math:`\pm` inf, and 'fro' (or 'f'). Thus,

.. math::
   :nowrap:

   \[ \left\Vert \mathbf{A}\right\Vert =\left\{ \begin{array}{cc} \max_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=\textrm{inf}\\ \min_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=-\textrm{inf}\\ \max_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=1\\ \min_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=-1\\ \max\sigma_{i} & \textrm{ord}=2\\ \min\sigma_{i} & \textrm{ord}=-2\\ \sqrt{\textrm{trace}\left(\mathbf{A}^{H}\mathbf{A}\right)} & \textrm{ord}=\textrm{'fro'}\end{array}\right.\]

where :math:`\sigma_{i}` are the singular values of :math:`\mathbf{A}`.

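A minimal sketch of a few of these norm definitions, written with modern array syntax rather than the matrix class:

```python
import numpy as np
from scipy import linalg

x = np.array([3.0, -4.0])
# Vector norms for several values of the order argument.
print(linalg.norm(x))          # 5.0  (default ord=2, Euclidean length)
print(linalg.norm(x, 1))       # 7.0  (sum of absolute values)
print(linalg.norm(x, np.inf))  # 4.0  (largest absolute value)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# Matrix norms: ord=1 is the maximum column sum, ord=inf the maximum row sum.
print(linalg.norm(A, 1))       # 6.0
print(linalg.norm(A, np.inf))  # 7.0
```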
Solving linear least-squares problems and pseudo-inverses
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Linear least-squares problems occur in many branches of applied
mathematics. In this problem a set of linear scaling coefficients is
sought that allows a model to fit data. In particular, it is assumed
that data :math:`y_{i}` is related to data :math:`\mathbf{x}_{i}`
through a set of coefficients :math:`c_{j}` and model functions
:math:`f_{j}\left(\mathbf{x}_{i}\right)` via the model

.. math::
   :nowrap:

   \[ y_{i}=\sum_{j}c_{j}f_{j}\left(\mathbf{x}_{i}\right)+\epsilon_{i}\]

where :math:`\epsilon_{i}` represents uncertainty in the data. The
strategy of least squares is to pick the coefficients :math:`c_{j}` to
minimize

.. math::
   :nowrap:

   \[ J\left(\mathbf{c}\right)=\sum_{i}\left|y_{i}-\sum_{j}c_{j}f_{j}\left(x_{i}\right)\right|^{2}.\]

Theoretically, a global minimum will occur when

.. math::
   :nowrap:

   \[ \frac{\partial J}{\partial c_{n}^{*}}=0=\sum_{i}\left(y_{i}-\sum_{j}c_{j}f_{j}\left(x_{i}\right)\right)\left(-f_{n}^{*}\left(x_{i}\right)\right)\]

or

.. math::
   :nowrap:

   \begin{eqnarray*} \sum_{j}c_{j}\sum_{i}f_{j}\left(x_{i}\right)f_{n}^{*}\left(x_{i}\right) & = & \sum_{i}y_{i}f_{n}^{*}\left(x_{i}\right)\\ \mathbf{A}^{H}\mathbf{Ac} & = & \mathbf{A}^{H}\mathbf{y}\end{eqnarray*}

where

.. math::
   :nowrap:

   \[ \left\{ \mathbf{A}\right\} _{ij}=f_{j}\left(x_{i}\right).\]

When :math:`\mathbf{A^{H}A}` is invertible, then

.. math::
   :nowrap:

   \[ \mathbf{c}=\left(\mathbf{A}^{H}\mathbf{A}\right)^{-1}\mathbf{A}^{H}\mathbf{y}=\mathbf{A}^{\dagger}\mathbf{y}\]

where :math:`\mathbf{A}^{\dagger}` is called the pseudo-inverse of
:math:`\mathbf{A}.` Notice that using this definition of
:math:`\mathbf{A}` the model can be written

.. math::
   :nowrap:

   \[ \mathbf{y}=\mathbf{Ac}+\boldsymbol{\epsilon}.\]

The command :obj:`linalg.lstsq` will solve the linear least-squares
problem for :math:`\mathbf{c}` given :math:`\mathbf{A}` and
:math:`\mathbf{y}`. In addition, :obj:`linalg.pinv` or
:obj:`linalg.pinv2` (uses a different method based on singular-value
decomposition) will find :math:`\mathbf{A}^{\dagger}` given
:math:`\mathbf{A}.`

The following example and figure demonstrate the use of
:obj:`linalg.lstsq` and :obj:`linalg.pinv` for solving a data-fitting
problem. The data shown below were generated using the model:

.. math::
   :nowrap:

   \[ y_{i}=c_{1}e^{-x_{i}}+c_{2}x_{i}\]

where :math:`x_{i}=0.1i` for :math:`i=1\ldots10`, :math:`c_{1}=5`,
and :math:`c_{2}=2.` Noise is added to :math:`y_{i}` and the
coefficients :math:`c_{1}` and :math:`c_{2}` are estimated using
linear least squares.

.. plot::

   >>> from numpy import *
   >>> from scipy import linalg
   >>> import matplotlib.pyplot as plt

   >>> c1,c2 = 5.0,2.0
   >>> i = r_[1:11]
   >>> xi = 0.1*i
   >>> yi = c1*exp(-xi)+c2*xi
   >>> zi = yi + 0.05*max(yi)*random.randn(len(yi))

   >>> A = c_[exp(-xi)[:,newaxis],xi[:,newaxis]]
   >>> c,resid,rank,sigma = linalg.lstsq(A,zi)

   >>> xi2 = r_[0.1:1.0:100j]
   >>> yi2 = c[0]*exp(-xi2) + c[1]*xi2

   >>> plt.plot(xi,zi,'x',xi2,yi2)
   >>> plt.axis([0,1.1,3.0,5.5])
   >>> plt.xlabel('$x_i$')
   >>> plt.title('Data fitting with linalg.lstsq')
   >>> plt.show()

.. :caption: Example of linear least-squares fit

Generalized inverse
^^^^^^^^^^^^^^^^^^^

The generalized inverse is calculated using the command
:obj:`linalg.pinv` or :obj:`linalg.pinv2`. These two commands differ
in how they compute the generalized inverse. The first uses the
linalg.lstsq algorithm, while the second uses singular-value
decomposition. Let :math:`\mathbf{A}` be an :math:`M\times N` matrix;
then if :math:`M>N` the generalized inverse is

.. math::
   :nowrap:

   \[ \mathbf{A}^{\dagger}=\left(\mathbf{A}^{H}\mathbf{A}\right)^{-1}\mathbf{A}^{H}\]

while if :math:`M<N` the generalized inverse is

.. math::
   :nowrap:

   \[ \mathbf{A}^{\#}=\mathbf{A}^{H}\left(\mathbf{A}\mathbf{A}^{H}\right)^{-1}.\]

In both cases, for :math:`M=N`,

.. math::
   :nowrap:

   \[ \mathbf{A}^{\dagger}=\mathbf{A}^{\#}=\mathbf{A}^{-1}\]

as long as :math:`\mathbf{A}` is invertible.

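A short sketch of the connection between :obj:`linalg.lstsq` and the pseudo-inverse, using an assumed small overdetermined system (the data here are illustrative only):

```python
import numpy as np
from scipy import linalg

# Overdetermined system: more equations (rows) than unknowns (columns).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

# Least-squares solution via lstsq ...
c_lstsq = linalg.lstsq(A, y)[0]

# ... and via the pseudo-inverse, c = pinv(A) @ y.
c_pinv = linalg.pinv(A) @ y

print(np.allclose(c_lstsq, c_pinv))  # True
```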
Decompositions
--------------

In many applications it is useful to decompose a matrix using other
representations. There are several decompositions supported by SciPy.

Eigenvalues and eigenvectors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The eigenvalue-eigenvector problem is one of the most commonly
employed linear algebra operations. In one popular form, the
eigenvalue-eigenvector problem is to find for some square matrix
:math:`\mathbf{A}` scalars :math:`\lambda` and corresponding vectors
:math:`\mathbf{v}` such that

.. math::
   :nowrap:

   \[ \mathbf{Av}=\lambda\mathbf{v}.\]

For an :math:`N\times N` matrix, there are :math:`N` (not necessarily
distinct) eigenvalues --- roots of the (characteristic) polynomial

.. math::
   :nowrap:

   \[ \left|\mathbf{A}-\lambda\mathbf{I}\right|=0.\]

The eigenvectors, :math:`\mathbf{v}`, are also sometimes called right
eigenvectors to distinguish them from another set of left eigenvectors
that satisfy

.. math::
   :nowrap:

   \[ \mathbf{v}_{L}^{H}\mathbf{A}=\lambda\mathbf{v}_{L}^{H}\]

or

.. math::
   :nowrap:

   \[ \mathbf{A}^{H}\mathbf{v}_{L}=\lambda^{*}\mathbf{v}_{L}.\]

With its default optional arguments, the command :obj:`linalg.eig`
returns :math:`\lambda` and :math:`\mathbf{v}.` However, it can also
return :math:`\mathbf{v}_{L}` and just :math:`\lambda` by itself
(:obj:`linalg.eigvals` returns just :math:`\lambda` as well).

In addition, :obj:`linalg.eig` can also solve the more general eigenvalue problem

.. math::
   :nowrap:

   \begin{eqnarray*} \mathbf{Av} & = & \lambda\mathbf{Bv}\\ \mathbf{A}^{H}\mathbf{v}_{L} & = & \lambda^{*}\mathbf{B}^{H}\mathbf{v}_{L}\end{eqnarray*}

for square matrices :math:`\mathbf{A}` and :math:`\mathbf{B}.` The
standard eigenvalue problem is an example of the general eigenvalue
problem for :math:`\mathbf{B}=\mathbf{I}.` When a generalized
eigenvalue problem can be solved, it provides a decomposition of
:math:`\mathbf{A}` as

.. math::
   :nowrap:

   \[ \mathbf{A}=\mathbf{BV}\boldsymbol{\Lambda}\mathbf{V}^{-1}\]

where :math:`\mathbf{V}` is the collection of eigenvectors into
columns and :math:`\boldsymbol{\Lambda}` is a diagonal matrix of
eigenvalues.

By definition, eigenvectors are only defined up to a constant scale
factor. In SciPy, the scaling factor for the eigenvectors is chosen so
that :math:`\left\Vert \mathbf{v}\right\Vert ^{2}=\sum_{i}v_{i}^{2}=1.`

As an example, consider finding the eigenvalues and eigenvectors of
the matrix

.. math::
   :nowrap:

   \[ \mathbf{A}=\left[\begin{array}{ccc} 1 & 5 & 2\\ 2 & 4 & 1\\ 3 & 6 & 2\end{array}\right].\]

The characteristic polynomial is

.. math::
   :nowrap:

   \begin{eqnarray*} \left|\mathbf{A}-\lambda\mathbf{I}\right| & = & \left(1-\lambda\right)\left[\left(4-\lambda\right)\left(2-\lambda\right)-6\right]-\\  &  & 5\left[2\left(2-\lambda\right)-3\right]+2\left[12-3\left(4-\lambda\right)\right]\\  & = & -\lambda^{3}+7\lambda^{2}+8\lambda-3.\end{eqnarray*}

The roots of this polynomial are the eigenvalues of :math:`\mathbf{A}`:

.. math::
   :nowrap:

   \begin{eqnarray*} \lambda_{1} & = & 7.9579\\ \lambda_{2} & = & -1.2577\\ \lambda_{3} & = & 0.2997.\end{eqnarray*}

The eigenvectors corresponding to each eigenvalue can then be found
using the original equation:

>>> from scipy import linalg
>>> A = mat('[1 5 2; 2 4 1; 3 6 2]')
>>> la,v = linalg.eig(A)
>>> l1,l2,l3 = la
>>> print l1, l2, l3
(7.95791620491+0j) (-1.25766470568+0j) (0.299748500767+0j)

>>> print v[:,0]
[-0.5297175  -0.44941741 -0.71932146]
>>> print v[:,1]
[-0.90730751  0.28662547  0.30763439]
>>> print v[:,2]
[ 0.28380519 -0.39012063  0.87593408]
>>> print sum(abs(v**2),axis=0)
[ 1.  1.  1.]

>>> v1 = mat(v[:,0]).T
>>> print max(ravel(abs(A*v1-l1*v1)))
8.881784197e-16

Singular value decomposition
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Singular value decomposition (SVD) can be thought of as an extension of
the eigenvalue problem to matrices that are not square. Let
:math:`\mathbf{A}` be an :math:`M\times N` matrix with :math:`M` and
:math:`N` arbitrary. The matrices :math:`\mathbf{A}^{H}\mathbf{A}` and
:math:`\mathbf{A}\mathbf{A}^{H}` are square hermitian matrices [#]_ of
size :math:`N\times N` and :math:`M\times M` respectively. It is known
that the eigenvalues of square hermitian matrices are real and
non-negative. In addition, there are at most
:math:`\min\left(M,N\right)` identical non-zero eigenvalues of
:math:`\mathbf{A}^{H}\mathbf{A}` and :math:`\mathbf{A}\mathbf{A}^{H}.`
Define these positive eigenvalues as :math:`\sigma_{i}^{2}.` The
square-roots of these are called the singular values of :math:`\mathbf{A}.`
The eigenvectors of :math:`\mathbf{A}^{H}\mathbf{A}` are collected by
columns into an :math:`N\times N` unitary [#]_ matrix
:math:`\mathbf{V}`, while the eigenvectors of
:math:`\mathbf{A}\mathbf{A}^{H}` are collected by columns in the
unitary matrix :math:`\mathbf{U}`; the singular values are collected
in an :math:`M\times N` zero matrix
:math:`\mathbf{\boldsymbol{\Sigma}}` with main diagonal entries set to
the singular values. Then

.. math::
   :nowrap:

   \[ \mathbf{A=U}\boldsymbol{\Sigma}\mathbf{V}^{H}\]

is the singular-value decomposition of :math:`\mathbf{A}.` Every
matrix has a singular value decomposition. Sometimes, the singular
values are called the spectrum of :math:`\mathbf{A}.` The command
:obj:`linalg.svd` will return :math:`\mathbf{U}`,
:math:`\mathbf{V}^{H}`, and :math:`\sigma_{i}` as an array of the
singular values. To obtain the matrix :math:`\mathbf{\Sigma}` use
:obj:`linalg.diagsvd`. The following example illustrates the use of
:obj:`linalg.svd`.

>>> A = mat('[1 3 2; 1 2 3]')
>>> M,N = A.shape
>>> U,s,Vh = linalg.svd(A)
>>> Sig = mat(linalg.diagsvd(s,M,N))
>>> U, Vh = mat(U), mat(Vh)
>>> print U
[[-0.70710678 -0.70710678]
 [-0.70710678  0.70710678]]
>>> print Sig
[[ 5.19615242  0.          0.        ]
 [ 0.          1.          0.        ]]
>>> print Vh
[[ -2.72165527e-01  -6.80413817e-01  -6.80413817e-01]
 [ -6.18652536e-16  -7.07106781e-01   7.07106781e-01]
 [ -9.62250449e-01   1.92450090e-01   1.92450090e-01]]

>>> print A
[[1 3 2]
 [1 2 3]]
>>> print U*Sig*Vh
[[ 1.  3.  2.]
 [ 1.  2.  3.]]

.. [#] A hermitian matrix :math:`\mathbf{D}` satisfies :math:`\mathbf{D}^{H}=\mathbf{D}.`

.. [#] A unitary matrix :math:`\mathbf{D}` satisfies :math:`\mathbf{D}^{H}\mathbf{D}=\mathbf{I}=\mathbf{D}\mathbf{D}^{H}` so that :math:`\mathbf{D}^{-1}=\mathbf{D}^{H}.`

LU decomposition
^^^^^^^^^^^^^^^^

The LU decomposition finds a representation for the :math:`M\times N` matrix :math:`\mathbf{A}` as

.. math::
   :nowrap:

   \[ \mathbf{A}=\mathbf{PLU}\]

where :math:`\mathbf{P}` is an :math:`M\times M` permutation matrix (a
permutation of the rows of the identity matrix), :math:`\mathbf{L}` is
an :math:`M\times K` lower triangular or trapezoidal matrix
(:math:`K=\min\left(M,N\right)`) with unit diagonal, and
:math:`\mathbf{U}` is an upper triangular or trapezoidal matrix. The
SciPy command for this decomposition is :obj:`linalg.lu`.

Such a decomposition is often useful for solving many simultaneous
equations where the left-hand side does not change but the right-hand
side does. For example, suppose we are going to solve

.. math::
   :nowrap:

   \[ \mathbf{A}\mathbf{x}_{i}=\mathbf{b}_{i}\]

for many different :math:`\mathbf{b}_{i}`. The LU decomposition allows this to be written as

.. math::
   :nowrap:

   \[ \mathbf{PLUx}_{i}=\mathbf{b}_{i}.\]

Because :math:`\mathbf{L}` is lower-triangular, the equation can be
solved for :math:`\mathbf{U}\mathbf{x}_{i}` and finally
:math:`\mathbf{x}_{i}` very rapidly using forward- and
back-substitution. An initial time spent factoring :math:`\mathbf{A}`
allows for very rapid solution of similar systems of equations in the
future. If the intent for performing LU decomposition is for solving
linear systems, then the command :obj:`linalg.lu_factor` should be used,
followed by repeated applications of the command
:obj:`linalg.lu_solve` to solve the system for each new
right-hand side.

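A minimal sketch of the factor-once, solve-many pattern with :obj:`linalg.lu_factor` and :obj:`linalg.lu_solve` (the right-hand sides here are illustrative):

```python
import numpy as np
from scipy import linalg

A = np.array([[1.0, 3.0, 5.0],
              [2.0, 5.0, 1.0],
              [2.0, 3.0, 8.0]])

# Pay the factorization cost once ...
lu, piv = linalg.lu_factor(A)

# ... then solve cheaply for several right-hand sides.
for b in (np.array([10.0, 8.0, 3.0]), np.array([1.0, 0.0, 0.0])):
    x = linalg.lu_solve((lu, piv), b)
    print(np.allclose(A @ x, b))  # True
```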
Cholesky decomposition
^^^^^^^^^^^^^^^^^^^^^^

Cholesky decomposition is a special case of LU decomposition
applicable to Hermitian positive definite matrices. When
:math:`\mathbf{A}=\mathbf{A}^{H}` and
:math:`\mathbf{x}^{H}\mathbf{Ax}\geq0` for all :math:`\mathbf{x}`,
then decompositions of :math:`\mathbf{A}` can be found so that

.. math::
   :nowrap:

   \begin{eqnarray*} \mathbf{A} & = & \mathbf{U}^{H}\mathbf{U}\\ \mathbf{A} & = & \mathbf{L}\mathbf{L}^{H}\end{eqnarray*}

where :math:`\mathbf{L}` is lower-triangular and :math:`\mathbf{U}` is
upper triangular. Notice that :math:`\mathbf{L}=\mathbf{U}^{H}.` The
command :obj:`linalg.cholesky` computes the Cholesky
factorization. For using Cholesky factorization to solve systems of
equations, there are also :obj:`linalg.cho_factor` and
:obj:`linalg.cho_solve` routines that work similarly to their LU
decomposition counterparts.

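As a brief sketch, for an assumed small symmetric positive definite matrix:

```python
import numpy as np
from scipy import linalg

# A symmetric positive definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# Upper-triangular factor U with A = U^H U (lower=False is the default).
U = linalg.cholesky(A)
print(np.allclose(U.T @ U, A))  # True

# cho_factor/cho_solve mirror lu_factor/lu_solve for repeated solves.
c = linalg.cho_factor(A)
b = np.array([1.0, 2.0])
x = linalg.cho_solve(c, b)
print(np.allclose(A @ x, b))    # True
```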
QR decomposition
^^^^^^^^^^^^^^^^

The QR decomposition works
for any :math:`M\times N` array and finds an :math:`M\times M` unitary
matrix :math:`\mathbf{Q}` and an :math:`M\times N` upper-trapezoidal
matrix :math:`\mathbf{R}` such that

.. math::
   :nowrap:

   \[ \mathbf{A=QR}.\]

Notice that if the SVD of :math:`\mathbf{A}` is known, then the QR decomposition can be found:

.. math::
   :nowrap:

   \[ \mathbf{A}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{H}=\mathbf{QR}\]

implies that :math:`\mathbf{Q}=\mathbf{U}` and
:math:`\mathbf{R}=\boldsymbol{\Sigma}\mathbf{V}^{H}.` Note, however,
that in SciPy independent algorithms are used to find QR and SVD
decompositions. The command for QR decomposition is :obj:`linalg.qr`.

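A quick sketch of :obj:`linalg.qr` and the properties of its factors, on an assumed small example matrix:

```python
import numpy as np
from scipy import linalg

A = np.array([[1.0, 3.0, 2.0],
              [1.0, 2.0, 3.0],
              [2.0, 1.0, 1.0]])

Q, R = linalg.qr(A)

print(np.allclose(Q @ R, A))            # A = QR
print(np.allclose(Q.T @ Q, np.eye(3)))  # Q is orthogonal (unitary for real A)
print(np.allclose(R, np.triu(R)))       # R is upper triangular
```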
Schur decomposition
^^^^^^^^^^^^^^^^^^^

For a square :math:`N\times N` matrix, :math:`\mathbf{A}`, the Schur
decomposition finds (not-necessarily unique) matrices
:math:`\mathbf{T}` and :math:`\mathbf{Z}` such that

.. math::
   :nowrap:

   \[ \mathbf{A}=\mathbf{ZT}\mathbf{Z}^{H}\]

where :math:`\mathbf{Z}` is a unitary matrix and :math:`\mathbf{T}` is
either upper-triangular or quasi-upper triangular, depending on whether
a real Schur form or complex Schur form is requested. For a
real Schur form, both :math:`\mathbf{T}` and :math:`\mathbf{Z}` are
real-valued when :math:`\mathbf{A}` is real-valued. When
:math:`\mathbf{A}` is a real-valued matrix, the real Schur form is only
quasi-upper triangular because :math:`2\times2` blocks extrude from
the main diagonal corresponding to any complex-valued
eigenvalues. The command :obj:`linalg.schur` finds the Schur
decomposition, while the command :obj:`linalg.rsf2csf` converts
:math:`\mathbf{T}` and :math:`\mathbf{Z}` from a real Schur form to a
complex Schur form. The Schur form is especially useful in calculating
functions of matrices.

The following example illustrates the Schur decomposition:

>>> from scipy import linalg
>>> A = mat('[1 3 2; 1 4 5; 2 3 6]')
>>> T,Z = linalg.schur(A)
>>> T1,Z1 = linalg.schur(A,'complex')
>>> T2,Z2 = linalg.rsf2csf(T,Z)
>>> print T
[[ 9.90012467  1.78947961 -0.65498528]
 [ 0.          0.54993766 -1.57754789]
 [ 0.          0.51260928  0.54993766]]
>>> print T2
[[ 9.90012467 +0.00000000e+00j -0.32436598 +1.55463542e+00j
  -0.88619748 +5.69027615e-01j]
 [ 0.00000000 +0.00000000e+00j  0.54993766 +8.99258408e-01j
   1.06493862 +1.37016050e-17j]
 [ 0.00000000 +0.00000000e+00j  0.00000000 +0.00000000e+00j
   0.54993766 -8.99258408e-01j]]
>>> print abs(T1-T2) # different
[[ 1.24357637e-14  2.09205364e+00  6.56028192e-01]
 [ 0.00000000e+00  4.00296604e-16  1.83223097e+00]
 [ 0.00000000e+00  0.00000000e+00  4.57756680e-16]]
>>> print abs(Z1-Z2) # different
[[ 0.06833781  1.10591375  0.23662249]
 [ 0.11857169  0.5585604   0.29617525]
 [ 0.12624999  0.75656818  0.22975038]]
>>> T,Z,T1,Z1,T2,Z2 = map(mat,(T,Z,T1,Z1,T2,Z2))
>>> print abs(A-Z*T*Z.H) # same
[[ 1.11022302e-16  4.44089210e-16  4.44089210e-16]
 [ 4.44089210e-16  1.33226763e-15  8.88178420e-16]
 [ 8.88178420e-16  4.44089210e-16  2.66453526e-15]]
>>> print abs(A-Z1*T1*Z1.H) # same
[[ 1.00043248e-15  2.22301403e-15  5.55749485e-15]
 [ 2.88899660e-15  8.44927041e-15  9.77322008e-15]
 [ 3.11291538e-15  1.15463228e-14  1.15464861e-14]]
>>> print abs(A-Z2*T2*Z2.H) # same
[[ 3.34058710e-16  8.88611201e-16  4.18773089e-18]
 [ 1.48694940e-16  8.95109973e-16  8.92966151e-16]
 [ 1.33228956e-15  1.33582317e-15  3.55373104e-15]]

Matrix Functions
----------------

Consider the function :math:`f\left(x\right)` with Taylor series expansion

.. math::
   :nowrap:

   \[ f\left(x\right)=\sum_{k=0}^{\infty}\frac{f^{\left(k\right)}\left(0\right)}{k!}x^{k}.\]

A matrix function can be defined using this Taylor series for the
square matrix :math:`\mathbf{A}` as

.. math::
   :nowrap:

   \[ f\left(\mathbf{A}\right)=\sum_{k=0}^{\infty}\frac{f^{\left(k\right)}\left(0\right)}{k!}\mathbf{A}^{k}.\]

While this serves as a useful representation of a matrix function, it
is rarely the best way to calculate a matrix function.

Exponential and logarithm functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The matrix exponential is one of the more common matrix functions. It
can be defined for square matrices as

.. math::
   :nowrap:

   \[ e^{\mathbf{A}}=\sum_{k=0}^{\infty}\frac{1}{k!}\mathbf{A}^{k}.\]

The command :obj:`linalg.expm3` uses this Taylor series definition to compute the matrix exponential.
Due to poor convergence properties it is not often used.

Another method to compute the matrix exponential is to find an
eigenvalue decomposition of :math:`\mathbf{A}`:

.. math::
   :nowrap:

   \[ \mathbf{A}=\mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^{-1}\]

and note that

.. math::
   :nowrap:

   \[ e^{\mathbf{A}}=\mathbf{V}e^{\boldsymbol{\Lambda}}\mathbf{V}^{-1}\]

where the matrix exponential of the diagonal matrix :math:`\boldsymbol{\Lambda}` is just the exponential of its elements. This method is implemented in :obj:`linalg.expm2`.

The preferred method for implementing the matrix exponential is to use
scaling and a Padé approximation for :math:`e^{x}`. This algorithm is
implemented as :obj:`linalg.expm`.

The matrix logarithm is defined as the inverse of the matrix
exponential:

.. math::
   :nowrap:

   \[ \mathbf{A}\equiv\exp\left(\log\left(\mathbf{A}\right)\right).\]

The matrix logarithm can be obtained with :obj:`linalg.logm`.

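A small sketch of :obj:`linalg.expm` and :obj:`linalg.logm`, including the diagonal case where the matrix exponential reduces to an elementwise exponential (the example matrices are assumptions for illustration):

```python
import numpy as np
from scipy import linalg

A = np.array([[1.0, 2.0],
              [0.5, 1.0]])

# Padé-based matrix exponential; logm is its inverse, so the
# round trip expm(logm(E)) should recover E.
E = linalg.expm(A)
print(np.allclose(linalg.expm(linalg.logm(E)), E))  # True

# For a diagonal matrix the result is just exp applied to the diagonal.
D = np.diag([1.0, 2.0])
print(np.allclose(linalg.expm(D), np.diag(np.exp([1.0, 2.0]))))  # True
```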
Trigonometric functions
^^^^^^^^^^^^^^^^^^^^^^^

The trigonometric functions :math:`\sin`, :math:`\cos`, and
:math:`\tan` are implemented for matrices in :func:`linalg.sinm`,
:func:`linalg.cosm`, and :obj:`linalg.tanm` respectively. The matrix
sine and cosine can be defined using Euler's identity as

.. math::
   :nowrap:

   \begin{eqnarray*} \sin\left(\mathbf{A}\right) & = & \frac{e^{j\mathbf{A}}-e^{-j\mathbf{A}}}{2j}\\ \cos\left(\mathbf{A}\right) & = & \frac{e^{j\mathbf{A}}+e^{-j\mathbf{A}}}{2}.\end{eqnarray*}

The tangent is

.. math::
   :nowrap:

   \[ \tan\left(x\right)=\frac{\sin\left(x\right)}{\cos\left(x\right)}=\left[\cos\left(x\right)\right]^{-1}\sin\left(x\right)\]

and so the matrix tangent is defined as

.. math::
   :nowrap:

   \[ \left[\cos\left(\mathbf{A}\right)\right]^{-1}\sin\left(\mathbf{A}\right).\]

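A brief sketch: because :obj:`linalg.sinm` and :obj:`linalg.cosm` are both power series in the same matrix, they commute, and the scalar identity :math:`\sin^{2}x+\cos^{2}x=1` carries over to matrices (the example matrix is an assumption for illustration):

```python
import numpy as np
from scipy import linalg

A = np.array([[0.1, 0.3],
              [0.2, 0.4]])

S = linalg.sinm(A)
C = linalg.cosm(A)

# sinm(A) and cosm(A) commute, so sin^2(A) + cos^2(A) = I.
print(np.allclose(S @ S + C @ C, np.eye(2)))  # True
```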
Hyperbolic trigonometric functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The hyperbolic trigonometric functions :math:`\sinh`, :math:`\cosh`,
and :math:`\tanh` can also be defined for matrices using the familiar
definitions:

.. math::
   :nowrap:

   \begin{eqnarray*} \sinh\left(\mathbf{A}\right) & = & \frac{e^{\mathbf{A}}-e^{-\mathbf{A}}}{2}\\ \cosh\left(\mathbf{A}\right) & = & \frac{e^{\mathbf{A}}+e^{-\mathbf{A}}}{2}\\ \tanh\left(\mathbf{A}\right) & = & \left[\cosh\left(\mathbf{A}\right)\right]^{-1}\sinh\left(\mathbf{A}\right).\end{eqnarray*}

These matrix functions can be found using :obj:`linalg.sinhm`,
:obj:`linalg.coshm`, and :obj:`linalg.tanhm`.

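A minimal sketch checking the defining relation :math:`\tanh(\mathbf{A})=\left[\cosh(\mathbf{A})\right]^{-1}\sinh(\mathbf{A})` numerically (the example matrix is an assumption for illustration):

```python
import numpy as np
from scipy import linalg

A = np.array([[0.1, 0.3],
              [0.2, 0.4]])

# Compute cosh(A)^{-1} sinh(A) by solving cosh(A) T = sinh(A) ...
T = linalg.solve(linalg.coshm(A), linalg.sinhm(A))

# ... and compare against tanhm, which implements the same definition.
print(np.allclose(T, linalg.tanhm(A)))  # True
```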
Arbitrary function
^^^^^^^^^^^^^^^^^^

Finally, any arbitrary function that takes one complex number and
returns a complex number can be called as a matrix function using the
command :obj:`linalg.funm`. This command takes the matrix and an
arbitrary Python function. It then implements an algorithm from Golub
and Van Loan's book "Matrix Computations" to compute the function applied
to the matrix using a Schur decomposition. Note that *the function
needs to accept complex numbers* as input in order to work with this
algorithm. For example, the following code computes the zeroth-order
Bessel function applied to a matrix.

>>> from scipy import special, random, linalg
>>> A = random.rand(3,3)
>>> B = linalg.funm(A,lambda x: special.jv(0,x))
>>> print A
[[ 0.72578091  0.34105276  0.79570345]
 [ 0.65767207  0.73855618  0.541453  ]
 [ 0.78397086  0.68043507  0.4837898 ]]
>>> print B
[[ 0.72599893 -0.20545711 -0.22721101]
 [-0.27426769  0.77255139 -0.23422637]
 [-0.27612103 -0.21754832  0.7556849 ]]
>>> print linalg.eigvals(A)
[ 1.91262611+0.j  0.21846476+0.j -0.18296399+0.j]
>>> print special.jv(0, linalg.eigvals(A))
[ 0.27448286+0.j  0.98810383+0.j  0.99164854+0.j]
>>> print linalg.eigvals(B)
[ 0.27448286+0.j  0.98810383+0.j  0.99164854+0.j]

Note how, by virtue of how matrix analytic functions are defined,
the Bessel function has acted on the matrix eigenvalues.
|
Optimization (optimize)
=======================

.. sectionauthor:: Travis E. Oliphant

.. currentmodule:: scipy.optimize

There are several classical optimization algorithms provided by SciPy
in the :mod:`scipy.optimize` package. An overview of the module is
available using :func:`help` (or :func:`pydoc.help`):

.. literalinclude:: examples/5-1

The first four algorithms are unconstrained minimization algorithms
(:func:`fmin`: Nelder-Mead simplex, :func:`fmin_bfgs`: BFGS,
:func:`fmin_ncg`: Newton Conjugate Gradient, and :func:`leastsq`:
Levenberg-Marquardt). The last algorithm actually finds the roots of a
general function of possibly many variables. It is included in the
optimization package because, at the (non-boundary) extreme points of
a function, the gradient is equal to zero.
Nelder-Mead Simplex algorithm (:func:`fmin`)
--------------------------------------------

The simplex algorithm is probably the simplest way to minimize a
fairly well-behaved function. It requires only function evaluations
and is a good choice for simple minimization problems. However,
because it does not use any gradient evaluations, it may take longer
to find the minimum. To demonstrate the minimization function,
consider the problem of minimizing the Rosenbrock function of
:math:`N` variables:

.. math::
   :nowrap:

   \[ f\left(\mathbf{x}\right)=\sum_{i=1}^{N-1}100\left(x_{i}-x_{i-1}^{2}\right)^{2}+\left(1-x_{i-1}\right)^{2}.\]

The minimum value of this function is 0, which is achieved when
:math:`x_{i}=1.` This minimum can be found using the :obj:`fmin`
routine as shown in the example below:

>>> from scipy.optimize import fmin
>>> def rosen(x):
...     """The Rosenbrock function"""
...     return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin(rosen, x0, xtol=1e-8)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 339
         Function evaluations: 571

>>> print xopt
[ 1.  1.  1.  1.  1.]

Another optimization algorithm that needs only function calls to find
the minimum is Powell's method, available as :func:`fmin_powell`.
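Powell's method can be sketched on the same problem (a minimal example; the exact iteration counts and tolerances shown are illustrative):

```python
import numpy as np
from scipy.optimize import fmin_powell

def rosen(x):
    """The Rosenbrock function."""
    x = np.asarray(x)
    return np.sum(100.0*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
xopt = fmin_powell(rosen, x0, xtol=1e-8, disp=False)
print(xopt)  # near [1. 1. 1. 1. 1.]
```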
Broyden-Fletcher-Goldfarb-Shanno algorithm (:func:`fmin_bfgs`)
--------------------------------------------------------------

In order to converge more quickly to the solution, this routine uses
the gradient of the objective function. If the gradient is not given
by the user, then it is estimated using first-differences. The
Broyden-Fletcher-Goldfarb-Shanno (BFGS) method typically requires
fewer function calls than the simplex algorithm even when the gradient
must be estimated.

To demonstrate this algorithm, the Rosenbrock function is again used.
The gradient of the Rosenbrock function is the vector:

.. math::
   :nowrap:

   \begin{eqnarray*} \frac{\partial f}{\partial x_{j}} & = & \sum_{i=1}^{N-1}200\left(x_{i}-x_{i-1}^{2}\right)\left(\delta_{i,j}-2x_{i-1}\delta_{i-1,j}\right)-2\left(1-x_{i-1}\right)\delta_{i-1,j}\\ & = & 200\left(x_{j}-x_{j-1}^{2}\right)-400x_{j}\left(x_{j+1}-x_{j}^{2}\right)-2\left(1-x_{j}\right).\end{eqnarray*}

This expression is valid for the interior derivatives. Special cases
are

.. math::
   :nowrap:

   \begin{eqnarray*} \frac{\partial f}{\partial x_{0}} & = & -400x_{0}\left(x_{1}-x_{0}^{2}\right)-2\left(1-x_{0}\right),\\ \frac{\partial f}{\partial x_{N-1}} & = & 200\left(x_{N-1}-x_{N-2}^{2}\right).\end{eqnarray*}

A Python function which computes this gradient is constructed by the
code-segment:

>>> def rosen_der(x):
...     xm = x[1:-1]
...     xm_m1 = x[:-2]
...     xm_p1 = x[2:]
...     der = zeros_like(x)
...     der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
...     der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
...     der[-1] = 200*(x[-1]-x[-2]**2)
...     return der

The calling signature for the BFGS minimization algorithm is similar
to :obj:`fmin` with the addition of the *fprime* argument. An example
usage of :obj:`fmin_bfgs` is shown in the following example which
minimizes the Rosenbrock function.

>>> from scipy.optimize import fmin_bfgs

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_bfgs(rosen, x0, fprime=rosen_der)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 53
         Function evaluations: 65
         Gradient evaluations: 65
>>> print xopt
[ 1.  1.  1.  1.  1.]
Newton-Conjugate-Gradient (:func:`fmin_ncg`)
--------------------------------------------

The method which requires the fewest function calls and is therefore
often the fastest method to minimize functions of many variables is
:obj:`fmin_ncg`. This method is a modified Newton's method and uses a
conjugate gradient algorithm to (approximately) invert the local
Hessian. Newton's method is based on fitting the function locally to
a quadratic form:

.. math::
   :nowrap:

   \[ f\left(\mathbf{x}\right)\approx f\left(\mathbf{x}_{0}\right)+\nabla f\left(\mathbf{x}_{0}\right)\cdot\left(\mathbf{x}-\mathbf{x}_{0}\right)+\frac{1}{2}\left(\mathbf{x}-\mathbf{x}_{0}\right)^{T}\mathbf{H}\left(\mathbf{x}_{0}\right)\left(\mathbf{x}-\mathbf{x}_{0}\right),\]

where :math:`\mathbf{H}\left(\mathbf{x}_{0}\right)` is a matrix of
second derivatives (the Hessian). If the Hessian is positive definite,
then the local minimum of this function can be found by setting the
gradient of the quadratic form to zero, resulting in

.. math::
   :nowrap:

   \[ \mathbf{x}_{\textrm{opt}}=\mathbf{x}_{0}-\mathbf{H}^{-1}\nabla f.\]

The inverse of the Hessian is evaluated using the conjugate-gradient
method. An example of employing this method to minimize the
Rosenbrock function is given below. To take full advantage of the
Newton-CG method, a function which computes the Hessian must be
provided. The Hessian matrix itself does not need to be constructed;
only a vector which is the product of the Hessian with an arbitrary
vector needs to be available to the minimization routine. As a result,
the user can provide either a function to compute the Hessian matrix,
or a function to compute the product of the Hessian with an arbitrary
vector.


Full Hessian example:
^^^^^^^^^^^^^^^^^^^^^

The Hessian of the Rosenbrock function is

.. math::
   :nowrap:

   \begin{eqnarray*} H_{ij}=\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}} & = & 200\left(\delta_{i,j}-2x_{i-1}\delta_{i-1,j}\right)-400x_{i}\left(\delta_{i+1,j}-2x_{i}\delta_{i,j}\right)-400\delta_{i,j}\left(x_{i+1}-x_{i}^{2}\right)+2\delta_{i,j}\\ & = & \left(202+1200x_{i}^{2}-400x_{i+1}\right)\delta_{i,j}-400x_{i}\delta_{i+1,j}-400x_{i-1}\delta_{i-1,j},\end{eqnarray*}

if :math:`i,j\in\left[1,N-2\right]` with :math:`i,j\in\left[0,N-1\right]` defining the :math:`N\times N` matrix. Other non-zero entries of the matrix are

.. math::
   :nowrap:

   \begin{eqnarray*} \frac{\partial^{2}f}{\partial x_{0}^{2}} & = & 1200x_{0}^{2}-400x_{1}+2,\\ \frac{\partial^{2}f}{\partial x_{0}\partial x_{1}}=\frac{\partial^{2}f}{\partial x_{1}\partial x_{0}} & = & -400x_{0},\\ \frac{\partial^{2}f}{\partial x_{N-1}\partial x_{N-2}}=\frac{\partial^{2}f}{\partial x_{N-2}\partial x_{N-1}} & = & -400x_{N-2},\\ \frac{\partial^{2}f}{\partial x_{N-1}^{2}} & = & 200.\end{eqnarray*}

For example, the Hessian when :math:`N=5` is

.. math::
   :nowrap:

   \[ \mathbf{H}=\left[\begin{array}{ccccc} 1200x_{0}^{2}-400x_{1}+2 & -400x_{0} & 0 & 0 & 0\\ -400x_{0} & 202+1200x_{1}^{2}-400x_{2} & -400x_{1} & 0 & 0\\ 0 & -400x_{1} & 202+1200x_{2}^{2}-400x_{3} & -400x_{2} & 0\\ 0 & 0 & -400x_{2} & 202+1200x_{3}^{2}-400x_{4} & -400x_{3}\\ 0 & 0 & 0 & -400x_{3} & 200\end{array}\right].\]

The code which computes this Hessian along with the code to minimize
the function using :obj:`fmin_ncg` is shown in the following example:

>>> from scipy.optimize import fmin_ncg
>>> def rosen_hess(x):
...     x = asarray(x)
...     H = diag(-400*x[:-1],1) - diag(400*x[:-1],-1)
...     diagonal = zeros_like(x)
...     diagonal[0] = 1200*x[0]**2 - 400*x[1] + 2
...     diagonal[-1] = 200
...     diagonal[1:-1] = 202 + 1200*x[1:-1]**2 - 400*x[2:]
...     H = H + diag(diagonal)
...     return H

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_ncg(rosen, x0, rosen_der, fhess=rosen_hess, avextol=1e-8)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 23
         Function evaluations: 26
         Gradient evaluations: 23
         Hessian evaluations: 23
>>> print xopt
[ 1.  1.  1.  1.  1.]
Hessian product example:
^^^^^^^^^^^^^^^^^^^^^^^^

For larger minimization problems, storing the entire Hessian matrix
can consume considerable time and memory. The Newton-CG algorithm only
needs the product of the Hessian times an arbitrary vector. As a
result, the user can supply code to compute this product rather than
the full Hessian by setting the *fhess_p* keyword to the desired
function. The *fhess_p* function should take the minimization vector as
the first argument and the arbitrary vector as the second
argument. Any extra arguments passed to the function to be minimized
will also be passed to this function. If possible, using Newton-CG
with the Hessian product option is probably the fastest way to
minimize the function.

In this case, the product of the Rosenbrock Hessian with an arbitrary
vector is not difficult to compute. If :math:`\mathbf{p}` is the
arbitrary vector, then
:math:`\mathbf{H}\left(\mathbf{x}\right)\mathbf{p}` has elements:

.. math::
   :nowrap:

   \[ \mathbf{H}\left(\mathbf{x}\right)\mathbf{p}=\left[\begin{array}{c} \left(1200x_{0}^{2}-400x_{1}+2\right)p_{0}-400x_{0}p_{1}\\ \vdots\\ -400x_{i-1}p_{i-1}+\left(202+1200x_{i}^{2}-400x_{i+1}\right)p_{i}-400x_{i}p_{i+1}\\ \vdots\\ -400x_{N-2}p_{N-2}+200p_{N-1}\end{array}\right].\]

Code which makes use of the *fhess_p* keyword to minimize the
Rosenbrock function using :obj:`fmin_ncg` follows:

>>> from scipy.optimize import fmin_ncg
>>> def rosen_hess_p(x, p):
...     x = asarray(x)
...     Hp = zeros_like(x)
...     Hp[0] = (1200*x[0]**2 - 400*x[1] + 2)*p[0] - 400*x[0]*p[1]
...     Hp[1:-1] = -400*x[:-2]*p[:-2]+(202+1200*x[1:-1]**2-400*x[2:])*p[1:-1] \
...                -400*x[1:-1]*p[2:]
...     Hp[-1] = -400*x[-2]*p[-2] + 200*p[-1]
...     return Hp

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_ncg(rosen, x0, rosen_der, fhess_p=rosen_hess_p, avextol=1e-8)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 22
         Function evaluations: 25
         Gradient evaluations: 22
         Hessian evaluations: 54
>>> print xopt
[ 1.  1.  1.  1.  1.]
Least-squares fitting (:func:`leastsq`)
---------------------------------------

All of the previously-explained minimization procedures can be used to
solve a least-squares problem provided the appropriate objective
function is constructed. For example, suppose it is desired to fit a
set of data :math:`\left\{\mathbf{x}_{i}, \mathbf{y}_{i}\right\}`
to a known model,
:math:`\mathbf{y}=\mathbf{f}\left(\mathbf{x},\mathbf{p}\right)`
where :math:`\mathbf{p}` is a vector of parameters for the model that
need to be found. A common method for determining which parameter
vector gives the best fit to the data is to minimize the sum of squares
of the residuals. The residual is usually defined for each observed
data-point as

.. math::
   :nowrap:

   \[ e_{i}\left(\mathbf{p},\mathbf{y}_{i},\mathbf{x}_{i}\right)=\left\Vert \mathbf{y}_{i}-\mathbf{f}\left(\mathbf{x}_{i},\mathbf{p}\right)\right\Vert .\]

An objective function to pass to any of the previous minimization
algorithms to obtain a least-squares fit is

.. math::
   :nowrap:

   \[ J\left(\mathbf{p}\right)=\sum_{i=0}^{N-1}e_{i}^{2}\left(\mathbf{p}\right).\]

The :obj:`leastsq` algorithm performs this squaring and summing of the
residuals automatically. It takes as an input argument the vector
function :math:`\mathbf{e}\left(\mathbf{p}\right)` and returns the
value of :math:`\mathbf{p}` which minimizes
:math:`J\left(\mathbf{p}\right)=\mathbf{e}^{T}\mathbf{e}`
directly. The user is also encouraged to provide the Jacobian matrix
of the function (with derivatives down the columns or across the
rows). If the Jacobian is not provided, it is estimated.

An example should clarify the usage. Suppose it is believed some
measured data follow a sinusoidal pattern

.. math::
   :nowrap:

   \[ y_{i}=A\sin\left(2\pi kx_{i}+\theta\right)\]

where the parameters :math:`A,` :math:`k`, and :math:`\theta` are
unknown. The residual vector is

.. math::
   :nowrap:

   \[ e_{i}=\left|y_{i}-A\sin\left(2\pi kx_{i}+\theta\right)\right|.\]

By defining a function to compute the residuals and selecting an
appropriate starting position, the least-squares fit routine can be
used to find the best-fit parameters
:math:`\hat{A},\,\hat{k},\,\hat{\theta}`.
This is shown in the following example:
.. plot::

   >>> from numpy import *
   >>> x = arange(0, 6e-2, 6e-2/30)
   >>> A, k, theta = 10, 1.0/3e-2, pi/6
   >>> y_true = A*sin(2*pi*k*x+theta)
   >>> y_meas = y_true + 2*random.randn(len(x))

   >>> def residuals(p, y, x):
   ...     A, k, theta = p
   ...     err = y - A*sin(2*pi*k*x+theta)
   ...     return err

   >>> def peval(x, p):
   ...     return p[0]*sin(2*pi*p[1]*x+p[2])

   >>> p0 = [8, 1/2.3e-2, pi/3]
   >>> print array(p0)
   [  8.      43.4783   1.0472]

   >>> from scipy.optimize import leastsq
   >>> plsq = leastsq(residuals, p0, args=(y_meas, x))
   >>> print plsq[0]
   [ 10.9437  33.3605   0.5834]

   >>> print array([A, k, theta])
   [ 10.      33.3333   0.5236]

   >>> import matplotlib.pyplot as plt
   >>> plt.plot(x, peval(x, plsq[0]), x, y_meas, 'o', x, y_true)
   >>> plt.title('Least-squares fit to noisy data')
   >>> plt.legend(['Fit', 'Noisy', 'True'])
   >>> plt.show()

.. :caption: Least-square fitting to noisy data using
.. :obj:`scipy.optimize.leastsq`


.. _tutorial-sqlsp:

Sequential Least-square fitting with constraints (:func:`fmin_slsqp`)
---------------------------------------------------------------------

This module implements the Sequential Least SQuares Programming
optimization algorithm (SLSQP), which solves problems of the form

.. math::
   :nowrap:

   \begin{eqnarray*} \min F(x) \\ \text{subject to } & C_{j}(x) = 0, & j = 1,...,\text{MEQ}\\ & C_{j}(x) \geq 0, & j = \text{MEQ}+1,...,M\\ & XL_{i} \leq x_{i} \leq XU_{i}, & i = 1,...,N. \end{eqnarray*}

The following script shows examples of how constraints can be specified.

::

"""
|
||||
This script tests fmin_slsqp using Example 14.4 from Numerical Methods for
|
||||
Engineers by Steven Chapra and Raymond Canale. This example maximizes the
|
||||
function f(x) = 2*x*y + 2*x - x**2 - 2*y**2, which has a maximum at x=2,y=1.
|
||||
"""
|
||||
|
||||
from scipy.optimize import fmin_slsqp
|
||||
from numpy import array, asfarray, finfo,ones, sqrt, zeros
|
||||
|
||||
|
||||
def testfunc(d,*args):
|
||||
"""
|
||||
Arguments:
|
||||
d - A list of two elements, where d[0] represents x and
|
||||
d[1] represents y in the following equation.
|
||||
sign - A multiplier for f. Since we want to optimize it, and the scipy
|
||||
optimizers can only minimize functions, we need to multiply it by
|
||||
-1 to achieve the desired solution
|
||||
Returns:
|
||||
2*x*y + 2*x - x**2 - 2*y**2
|
||||
|
||||
"""
|
||||
try:
|
||||
sign = args[0]
|
||||
except:
|
||||
sign = 1.0
|
||||
x = d[0]
|
||||
y = d[1]
|
||||
return sign*(2*x*y + 2*x - x**2 - 2*y**2)
|
||||
|
||||
def testfunc_deriv(d,*args):
|
||||
""" This is the derivative of testfunc, returning a numpy array
|
||||
representing df/dx and df/dy
|
||||
|
||||
"""
|
||||
try:
|
||||
sign = args[0]
|
||||
except:
|
||||
sign = 1.0
|
||||
x = d[0]
|
||||
y = d[1]
|
||||
dfdx = sign*(-2*x + 2*y + 2)
|
||||
dfdy = sign*(2*x - 4*y)
|
||||
return array([ dfdx, dfdy ],float)
|
||||
|
||||
|
||||
from time import time
|
||||
|
||||
print '\n\n'
|
||||
|
||||
print "Unbounded optimization. Derivatives approximated."
|
||||
t0 = time()
|
||||
x = fmin_slsqp(testfunc, [-1.0,1.0], args=(-1.0,), iprint=2, full_output=1)
|
||||
print "Elapsed time:", 1000*(time()-t0), "ms"
|
||||
print "Results",x
|
||||
print "\n\n"
|
||||
|
||||
print "Unbounded optimization. Derivatives provided."
|
||||
t0 = time()
|
||||
x = fmin_slsqp(testfunc, [-1.0,1.0], args=(-1.0,), iprint=2, full_output=1)
|
||||
print "Elapsed time:", 1000*(time()-t0), "ms"
|
||||
print "Results",x
|
||||
print "\n\n"
|
||||
|
||||
print "Bound optimization. Derivatives approximated."
|
||||
t0 = time()
|
||||
x = fmin_slsqp(testfunc, [-1.0,1.0], args=(-1.0,),
|
||||
eqcons=[lambda x, y: x[0]-x[1] ], iprint=2, full_output=1)
|
||||
print "Elapsed time:", 1000*(time()-t0), "ms"
|
||||
print "Results",x
|
||||
print "\n\n"
|
||||
|
||||
print "Bound optimization (equality constraints). Derivatives provided."
|
||||
t0 = time()
|
||||
x = fmin_slsqp(testfunc, [-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,),
|
||||
eqcons=[lambda x, y: x[0]-x[1] ], iprint=2, full_output=1)
|
||||
print "Elapsed time:", 1000*(time()-t0), "ms"
|
||||
print "Results",x
|
||||
print "\n\n"
|
||||
|
||||
print "Bound optimization (equality and inequality constraints)."
|
||||
print "Derivatives provided."
|
||||
|
||||
t0 = time()
|
||||
x = fmin_slsqp(testfunc,[-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,),
|
||||
eqcons=[lambda x, y: x[0]-x[1] ],
|
||||
ieqcons=[lambda x, y: x[0]-.5], iprint=2, full_output=1)
|
||||
print "Elapsed time:", 1000*(time()-t0), "ms"
|
||||
print "Results",x
|
||||
print "\n\n"
|
||||
|
||||
|
||||
def test_eqcons(d,*args):
|
||||
try:
|
||||
sign = args[0]
|
||||
except:
|
||||
sign = 1.0
|
||||
x = d[0]
|
||||
y = d[1]
|
||||
return array([ x**3-y ])
|
||||
|
||||
|
||||
def test_ieqcons(d,*args):
|
||||
try:
|
||||
sign = args[0]
|
||||
except:
|
||||
sign = 1.0
|
||||
x = d[0]
|
||||
y = d[1]
|
||||
return array([ y-1 ])
|
||||
|
||||
print "Bound optimization (equality and inequality constraints)."
|
||||
print "Derivatives provided via functions."
|
||||
t0 = time()
|
||||
x = fmin_slsqp(testfunc, [-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,),
|
||||
f_eqcons=test_eqcons, f_ieqcons=test_ieqcons,
|
||||
iprint=2, full_output=1)
|
||||
print "Elapsed time:", 1000*(time()-t0), "ms"
|
||||
print "Results",x
|
||||
print "\n\n"
|
||||
|
||||
|
||||
def test_fprime_eqcons(d,*args):
|
||||
try:
|
||||
sign = args[0]
|
||||
except:
|
||||
sign = 1.0
|
||||
x = d[0]
|
||||
y = d[1]
|
||||
return array([ 3.0*(x**2.0), -1.0 ])
|
||||
|
||||
|
||||
def test_fprime_ieqcons(d,*args):
|
||||
try:
|
||||
sign = args[0]
|
||||
except:
|
||||
sign = 1.0
|
||||
x = d[0]
|
||||
y = d[1]
|
||||
return array([ 0.0, 1.0 ])
|
||||
|
||||
print "Bound optimization (equality and inequality constraints)."
|
||||
print "Derivatives provided via functions."
|
||||
print "Constraint jacobians provided via functions"
|
||||
t0 = time()
|
||||
x = fmin_slsqp(testfunc,[-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,),
|
||||
f_eqcons=test_eqcons, f_ieqcons=test_ieqcons,
|
||||
fprime_eqcons=test_fprime_eqcons,
|
||||
fprime_ieqcons=test_fprime_ieqcons, iprint=2, full_output=1)
|
||||
print "Elapsed time:", 1000*(time()-t0), "ms"
|
||||
print "Results",x
|
||||
print "\n\n"
|
||||
|
||||
|
||||
|
||||
|
||||
Scalar function minimizers
--------------------------

Often only the minimum of a scalar function is needed (a scalar
function is one that takes a scalar as input and returns a scalar
output). In these circumstances, other optimization techniques have
been developed that can work faster.


Unconstrained minimization (:func:`brent`)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are actually two methods that can be used to minimize a scalar
function (:obj:`brent` and :func:`golden`), but :obj:`golden` is
included only for academic purposes and should rarely be used. The
brent method uses Brent's algorithm for locating a minimum. Ideally,
a bracket containing the desired minimum should be given. A bracket
is a triple :math:`\left(a,b,c\right)` such that
:math:`f\left(a\right)>f\left(b\right)<f\left(c\right)` and
:math:`a<b<c`. If a bracket is not given, then alternatively two
starting points can be chosen, and a bracket will be found from these
points using a simple marching algorithm. If these two starting
points are not provided, 0 and 1 will be used (this may not be the
right choice for your function and can result in an unexpected
minimum being returned).
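Calling :obj:`brent` with an explicit bracket can be sketched as follows (a minimal example; the quadratic objective is an illustrative choice, not from the text):

```python
from scipy.optimize import brent

def f(x):
    return (x - 2.0)**2 + 1.0

# (0, 1, 4) is a valid bracket: f(0) > f(1) < f(4) and 0 < 1 < 4.
xmin = brent(f, brack=(0.0, 1.0, 4.0))
print(xmin)  # approximately 2.0
```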
Bounded minimization (:func:`fminbound`)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Thus far all of the minimization routines described have been
unconstrained minimization routines. Very often, however, there are
constraints that can be placed on the solution space before
minimization occurs. The :obj:`fminbound` function is an example of a
constrained minimization procedure that provides a rudimentary
interval constraint for scalar functions. The interval constraint
allows the minimization to occur only between two fixed endpoints.

For example, to find the minimum of :math:`J_{1}\left(x\right)` near
:math:`x=5`, :obj:`fminbound` can be called using the interval
:math:`\left[4,7\right]` as a constraint. The result is
:math:`x_{\textrm{min}}=5.3314` :

>>> from scipy.special import j1
>>> from scipy.optimize import fminbound
>>> xmin = fminbound(j1, 4, 7)
>>> print xmin
5.33144184241
Root finding
------------


Sets of equations
^^^^^^^^^^^^^^^^^

To find the roots of a polynomial, the command :obj:`roots
<scipy.roots>` is useful. To find a root of a set of non-linear
equations, the command :obj:`fsolve` is needed. For example, the
following code finds the root of the single-variable transcendental
equation

.. math::
   :nowrap:

   \[ x+2\cos\left(x\right)=0,\]

and the roots of the set of non-linear equations

.. math::
   :nowrap:

   \begin{eqnarray*} x_{0}\cos\left(x_{1}\right) & = & 4,\\ x_{0}x_{1}-x_{1} & = & 5.\end{eqnarray*}

The results are :math:`x=-1.0299` and :math:`x_{0}=6.5041,\, x_{1}=0.9084` .
>>> from numpy import cos
>>> def func(x):
...     return x + 2*cos(x)

>>> def func2(x):
...     out = [x[0]*cos(x[1]) - 4]
...     out.append(x[1]*x[0] - x[1] - 5)
...     return out

>>> from scipy.optimize import fsolve
>>> x0 = fsolve(func, 0.3)
>>> print x0
-1.02986652932

>>> x02 = fsolve(func2, [1, 1])
>>> print x02
[ 6.50409711  0.90841421]
Scalar function root finding
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If one has a single-variable equation, there are four different root
finder algorithms that can be tried. Each of these root finding
algorithms requires the endpoints of an interval in which a root is
suspected (because the function changes sign there). In general,
:obj:`brentq` is the best choice, but the other methods may be useful
in certain circumstances or for academic purposes.
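For instance, reusing the transcendental equation from the previous section (a minimal sketch), :obj:`brentq` needs only an interval on which the function changes sign:

```python
import numpy as np
from scipy.optimize import brentq

# x + 2*cos(x) changes sign on [-2, 2], so that interval brackets the root.
root = brentq(lambda x: x + 2*np.cos(x), -2, 2)
print(root)  # approximately -1.0299
```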
Fixed-point solving
^^^^^^^^^^^^^^^^^^^

A problem closely related to finding the zeros of a function is the
problem of finding a fixed point of a function. A fixed point of a
function is the point at which evaluation of the function returns the
point: :math:`g\left(x\right)=x.` Clearly the fixed point of :math:`g`
is the root of :math:`f\left(x\right)=g\left(x\right)-x.`
Equivalently, the root of :math:`f` is the fixed point of
:math:`g\left(x\right)=f\left(x\right)+x.` The routine
:obj:`fixed_point` provides a simple iterative method using Aitken's
sequence acceleration to estimate the fixed point of :math:`g`, given
a starting point.
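A minimal sketch (the cosine map is an illustrative choice, not from the text):

```python
import numpy as np
from scipy.optimize import fixed_point

# cos(x) = x has a single solution near x = 0.739.
x_star = fixed_point(np.cos, 0.5)
print(x_star)  # approximately 0.7390851
```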
Signal Processing (signal)
==========================

.. sectionauthor:: Travis E. Oliphant

The signal processing toolbox currently contains some filtering
functions, a limited set of filter design tools, and a few B-spline
interpolation algorithms for one- and two-dimensional data. While the
B-spline algorithms could technically be placed under the
interpolation category, they are included here because they only work
with equally-spaced data and make heavy use of filter-theory and
transfer-function formalism to provide a fast B-spline transform. To
understand this section you will need to understand that a signal in
SciPy is an array of real or complex numbers.
B-splines
---------

A B-spline is an approximation of a continuous function over a finite
domain in terms of B-spline coefficients and knot points. If the knot
points are equally spaced with spacing :math:`\Delta x`, then the
B-spline approximation to a 1-dimensional function is the
finite-basis expansion

.. math::
   :nowrap:

   \[ y\left(x\right)\approx\sum_{j}c_{j}\beta^{o}\left(\frac{x}{\Delta x}-j\right).\]

In two dimensions with knot-spacing :math:`\Delta x` and :math:`\Delta y`, the function representation is

.. math::
   :nowrap:

   \[ z\left(x,y\right)\approx\sum_{j}\sum_{k}c_{jk}\beta^{o}\left(\frac{x}{\Delta x}-j\right)\beta^{o}\left(\frac{y}{\Delta y}-k\right).\]

In these expressions, :math:`\beta^{o}\left(\cdot\right)` is the
space-limited B-spline basis function of order :math:`o`. The
requirement of equally-spaced knot points and equally-spaced data
points allows the development of fast (inverse-filtering) algorithms
for determining the coefficients, :math:`c_{j}`, from sample values,
:math:`y_{n}`. Unlike the general spline interpolation algorithms,
these algorithms can quickly find the spline coefficients for large
images.
The advantage of representing a set of samples via B-spline basis
functions is that continuous-domain operators (derivatives,
re-sampling, integral, etc.), which assume that the data samples are
drawn from an underlying continuous function, can be computed with
relative ease from the spline coefficients. For example, the second
derivative of a spline is

.. math::
   :nowrap:

   \[ y^{\prime\prime}\left(x\right)=\frac{1}{\Delta x^{2}}\sum_{j}c_{j}\beta^{o\prime\prime}\left(\frac{x}{\Delta x}-j\right).\]

Using the property of B-splines that

.. math::
   :nowrap:

   \[ \frac{d^{2}\beta^{o}\left(w\right)}{dw^{2}}=\beta^{o-2}\left(w+1\right)-2\beta^{o-2}\left(w\right)+\beta^{o-2}\left(w-1\right)\]

it can be seen that

.. math::
   :nowrap:

   \[ y^{\prime\prime}\left(x\right)=\frac{1}{\Delta x^{2}}\sum_{j}c_{j}\left[\beta^{o-2}\left(\frac{x}{\Delta x}-j+1\right)-2\beta^{o-2}\left(\frac{x}{\Delta x}-j\right)+\beta^{o-2}\left(\frac{x}{\Delta x}-j-1\right)\right].\]

If :math:`o=3`, then at the sample points,

.. math::
   :nowrap:

   \begin{eqnarray*} \Delta x^{2}\left.y^{\prime\prime}\left(x\right)\right|_{x=n\Delta x} & = & \sum_{j}c_{j}\delta_{n-j+1}-2c_{j}\delta_{n-j}+c_{j}\delta_{n-j-1},\\ & = & c_{n+1}-2c_{n}+c_{n-1}.\end{eqnarray*}

Thus, the second-derivative signal can be easily calculated from the
spline fit. If desired, smoothing splines can be found to make the
second derivative less sensitive to random errors.
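This identity can be checked numerically (a minimal sketch; the sine signal, grid spacing, and the trimming of boundary samples are illustrative assumptions, not from the text):

```python
import numpy as np
from scipy.signal import cspline1d

dx = 0.1
x = np.arange(0, 2*np.pi, dx)
y = np.sin(x)

# Cubic-spline coefficients; c[n+1] - 2*c[n] + c[n-1] approximates
# dx**2 * y''(x_n) at the sample points.
c = cspline1d(y)
d2 = (c[2:] - 2*c[1:-1] + c[:-2]) / dx**2

# Away from the boundaries (where the mirror-symmetric assumption
# matters), this is close to the true second derivative -sin(x).
inner = slice(10, -10)
print(np.allclose(d2[inner], -np.sin(x[1:-1])[inner], atol=5e-3))
```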
|
||||
The savvy reader will have already noticed that the data samples are
|
||||
related to the knot coefficients via a convolution operator, so that
|
||||
simple convolution with the sampled B-spline function recovers the
|
||||
original data from the spline coefficients. The output of convolutions
|
||||
can change depending on how boundaries are handled (this becomes
|
||||
increasingly more important as the number of dimensions in the data-
|
||||
set increases). The algorithms relating to B-splines in the signal-
|
||||
processing sub package assume mirror-symmetric boundary conditions.
|
||||
Thus, spline coefficients are computed based on that assumption, and
|
||||
data-samples can be recovered exactly from the spline coefficients by
|
||||
assuming them to be mirror-symmetric also.
|
||||
|
||||
Currently the package provides functions for determining quadratic
(second-order) and cubic (third-order) spline coefficients from
equally spaced samples in one and two dimensions
(:func:`signal.qspline1d`, :func:`signal.qspline2d`,
:func:`signal.cspline1d`, :func:`signal.cspline2d`). The package also
supplies a function (:obj:`signal.bspline`) for evaluating the
B-spline basis function, :math:`\beta^{o}\left(x\right)`, for
arbitrary order and :math:`x.` For large :math:`o`, the B-spline basis
function can be approximated well by a zero-mean Gaussian function
with variance equal to :math:`\sigma_{o}^{2}=\left(o+1\right)/12`:

.. math::
   :nowrap:

   \[ \beta^{o}\left(x\right)\approx\frac{1}{\sqrt{2\pi\sigma_{o}^{2}}}\exp\left(-\frac{x^{2}}{2\sigma_{o}^{2}}\right).\]

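The approximation above can be checked numerically. The sketch below (pure NumPy; the grid step and the order :math:`o=11` are arbitrary choices of this example) builds :math:`\beta^{o}` by convolving :math:`o+1` unit box functions, a standard construction of the B-spline basis, and compares it with the Gaussian:

```python
import numpy as np

def bspline_numeric(o, x, dx=0.01):
    # beta^0 is the unit box on [-1/2, 1/2]; beta^o is beta^0 convolved
    # with itself o times (support grows to width o + 1)
    half = (o + 3) / 2.0
    n = int(round(2 * half / dx)) + 1
    grid = np.linspace(-half, half, n)          # symmetric grid around 0
    box = ((grid >= -0.5) & (grid <= 0.5)).astype(float)
    box /= box.sum() * dx                       # normalize to unit area
    b = box
    for _ in range(o):
        b = np.convolve(b, box, mode="same") * dx
    return np.interp(x, grid, b)

o = 11
x = np.linspace(-3.0, 3.0, 201)
beta = bspline_numeric(o, x)
var = (o + 1) / 12.0                            # variance of beta^o
gauss = np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
err = np.max(np.abs(beta - gauss))              # small for large o
```

The maximum pointwise difference shrinks as the order grows, which is why the Gaussian is a useful stand-in for high-order B-splines.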

A function to compute this Gaussian for arbitrary :math:`x` and
:math:`o` is also available (:obj:`signal.gauss_spline`). The
following code and figure use spline filtering to compute an
edge-image (the second derivative of a smoothed spline) of Lena's
face, which is an array returned by the command :func:`lena`. The
command :obj:`signal.sepfir2d` was used to apply a separable
two-dimensional FIR filter with mirror-symmetric boundary conditions
to the spline coefficients. This function is ideally suited for
reconstructing samples from spline coefficients and is faster than
:obj:`signal.convolve2d`, which convolves arbitrary two-dimensional
filters and allows for choosing mirror-symmetric boundary conditions.

.. plot::

   >>> from numpy import *
   >>> from scipy import signal, misc
   >>> import matplotlib.pyplot as plt

   >>> image = misc.lena().astype(float32)
   >>> derfilt = array([1.0, -2, 1.0], float32)
   >>> ck = signal.cspline2d(image, 8.0)
   >>> deriv = signal.sepfir2d(ck, derfilt, [1]) + \
   ...         signal.sepfir2d(ck, [1], derfilt)

   Alternatively we could have done::

       laplacian = array([[0,1,0], [1,-4,1], [0,1,0]], float32)
       deriv2 = signal.convolve2d(ck, laplacian, mode='same', boundary='symm')

   >>> plt.figure()
   >>> plt.imshow(image)
   >>> plt.gray()
   >>> plt.title('Original image')
   >>> plt.show()

   >>> plt.figure()
   >>> plt.imshow(deriv)
   >>> plt.gray()
   >>> plt.title('Output of spline edge filter')
   >>> plt.show()

.. :caption: Example of using smoothing splines to filter images.

Filtering
---------

Filtering is a generic name for any system that modifies an input
signal in some way. In SciPy a signal can be thought of as a NumPy
array. There are different kinds of filters for different kinds of
operations. There are two broad kinds of filtering operations: linear
and non-linear. Linear filters can always be reduced to multiplication
of the flattened NumPy array by an appropriate matrix resulting in
another flattened NumPy array. Of course, this is not usually the best
way to compute the filter, as the matrices and vectors involved may be
huge. For example, filtering a :math:`512 \times 512` image with this
method would require multiplication of a :math:`512^2 \times 512^2`
matrix with a :math:`512^2` vector. Just trying to store the
:math:`512^2 \times 512^2` matrix using a standard NumPy array would
require :math:`68,719,476,736` elements. At 4 bytes per element this
would require :math:`256\textrm{GB}` of memory. In most applications,
most of the elements of this matrix are zero and a different method
for computing the output of the filter is employed.

Convolution/Correlation
^^^^^^^^^^^^^^^^^^^^^^^

Many linear filters also have the property of shift-invariance. This
means that the filtering operation is the same at different locations
in the signal, and it implies that the filtering matrix can be
constructed from knowledge of one row (or column) of the matrix alone.
In this case, the matrix multiplication can be accomplished using
Fourier transforms.

Let :math:`x\left[n\right]` define a one-dimensional signal indexed by
the integer :math:`n.` Full convolution of two one-dimensional signals
can be expressed as

.. math::
   :nowrap:

   \[ y\left[n\right]=\sum_{k=-\infty}^{\infty}x\left[k\right]h\left[n-k\right].\]

This equation can only be implemented directly if we limit the
sequences to finite-support sequences that can be stored in a
computer, choose :math:`n=0` to be the starting point of both
sequences, let :math:`K+1` be the number of nonzero values of
:math:`x` (so that :math:`x\left[n\right]=0` for all :math:`n>K`) and
:math:`M+1` be the number of nonzero values of :math:`h` (so that
:math:`h\left[n\right]=0` for all :math:`n>M`). Then the discrete
convolution expression is

.. math::
   :nowrap:

   \[ y\left[n\right]=\sum_{k=\max\left(n-M,0\right)}^{\min\left(n,K\right)}x\left[k\right]h\left[n-k\right].\]

For convenience assume :math:`K\geq M.` Then, more explicitly, the
output of this operation is

.. math::
   :nowrap:

   \begin{eqnarray*} y\left[0\right] & = & x\left[0\right]h\left[0\right]\\ y\left[1\right] & = & x\left[0\right]h\left[1\right]+x\left[1\right]h\left[0\right]\\ y\left[2\right] & = & x\left[0\right]h\left[2\right]+x\left[1\right]h\left[1\right]+x\left[2\right]h\left[0\right]\\ \vdots & \vdots & \vdots\\ y\left[M\right] & = & x\left[0\right]h\left[M\right]+x\left[1\right]h\left[M-1\right]+\cdots+x\left[M\right]h\left[0\right]\\ y\left[M+1\right] & = & x\left[1\right]h\left[M\right]+x\left[2\right]h\left[M-1\right]+\cdots+x\left[M+1\right]h\left[0\right]\\ \vdots & \vdots & \vdots\\ y\left[K\right] & = & x\left[K-M\right]h\left[M\right]+\cdots+x\left[K\right]h\left[0\right]\\ y\left[K+1\right] & = & x\left[K+1-M\right]h\left[M\right]+\cdots+x\left[K\right]h\left[1\right]\\ \vdots & \vdots & \vdots\\ y\left[K+M-1\right] & = & x\left[K-1\right]h\left[M\right]+x\left[K\right]h\left[M-1\right]\\ y\left[K+M\right] & = & x\left[K\right]h\left[M\right].\end{eqnarray*}

Thus, the full discrete convolution of two finite sequences of lengths
:math:`K+1` and :math:`M+1` respectively results in a finite sequence
of length :math:`K+M+1=\left(K+1\right)+\left(M+1\right)-1.`

One-dimensional convolution is implemented in SciPy with the function
``signal.convolve``. This function takes as inputs the signals
:math:`x,` :math:`h`, and an optional flag, and returns the signal
:math:`y.` The optional flag allows for specification of which part of
the output signal to return. The default value of 'full' returns the
entire signal. If the flag has a value of 'same' then only the middle
:math:`K+1` values are returned starting at :math:`y\left[\left\lfloor
\frac{M-1}{2}\right\rfloor \right]` so that the output has the same
length as the largest input. If the flag has a value of 'valid' then
only the middle :math:`K-M+1=\left(K+1\right)-\left(M+1\right)+1`
output values are returned, each of which depends on all of the
values of the smallest input from :math:`h\left[0\right]` to
:math:`h\left[M\right].` In other words, only the values
:math:`y\left[M\right]` to :math:`y\left[K\right]` inclusive are
returned.

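The output lengths produced by the three flags can be sketched with NumPy's ``np.convolve``, which uses the same mode names (this is an illustration with arbitrary signal values, not a claim about the SciPy call itself):

```python
import numpy as np

x = np.arange(6.0)                       # K + 1 = 6 samples
h = np.array([1.0, 2.0, 3.0])            # M + 1 = 3 samples

full = np.convolve(x, h, mode="full")    # length K + M + 1 = 8
same = np.convolve(x, h, mode="same")    # length K + 1 = 6
valid = np.convolve(x, h, mode="valid")  # length K - M + 1 = 4

# the 'valid' output is exactly y[M] .. y[K] of the full convolution
ok = np.allclose(valid, full[2:6])
```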
This same function ``signal.convolve`` can actually take
:math:`N`-dimensional arrays as inputs and will return the
:math:`N`-dimensional convolution of the two arrays. The same input
flags are available for that case as well.

Correlation is very similar to convolution except that the minus sign
becomes a plus sign. Thus

.. math::
   :nowrap:

   \[ w\left[n\right]=\sum_{k=-\infty}^{\infty}y\left[k\right]x\left[n+k\right]\]

is the (cross) correlation of the signals :math:`y` and :math:`x.` For
finite-length signals with :math:`y\left[n\right]=0` outside of the
range :math:`\left[0,K\right]` and :math:`x\left[n\right]=0` outside
of the range :math:`\left[0,M\right],` the summation can simplify to

.. math::
   :nowrap:

   \[ w\left[n\right]=\sum_{k=\max\left(0,-n\right)}^{\min\left(K,M-n\right)}y\left[k\right]x\left[n+k\right].\]

Assuming again that :math:`K\geq M`, this is

.. math::
   :nowrap:

   \begin{eqnarray*} w\left[-K\right] & = & y\left[K\right]x\left[0\right]\\ w\left[-K+1\right] & = & y\left[K-1\right]x\left[0\right]+y\left[K\right]x\left[1\right]\\ \vdots & \vdots & \vdots\\ w\left[M-K\right] & = & y\left[K-M\right]x\left[0\right]+y\left[K-M+1\right]x\left[1\right]+\cdots+y\left[K\right]x\left[M\right]\\ w\left[M-K+1\right] & = & y\left[K-M-1\right]x\left[0\right]+\cdots+y\left[K-1\right]x\left[M\right]\\ \vdots & \vdots & \vdots\\ w\left[-1\right] & = & y\left[1\right]x\left[0\right]+y\left[2\right]x\left[1\right]+\cdots+y\left[M+1\right]x\left[M\right]\\ w\left[0\right] & = & y\left[0\right]x\left[0\right]+y\left[1\right]x\left[1\right]+\cdots+y\left[M\right]x\left[M\right]\\ w\left[1\right] & = & y\left[0\right]x\left[1\right]+y\left[1\right]x\left[2\right]+\cdots+y\left[M-1\right]x\left[M\right]\\ w\left[2\right] & = & y\left[0\right]x\left[2\right]+y\left[1\right]x\left[3\right]+\cdots+y\left[M-2\right]x\left[M\right]\\ \vdots & \vdots & \vdots\\ w\left[M-1\right] & = & y\left[0\right]x\left[M-1\right]+y\left[1\right]x\left[M\right]\\ w\left[M\right] & = & y\left[0\right]x\left[M\right].\end{eqnarray*}

The SciPy function ``signal.correlate`` implements this
operation. Equivalent flags are available for this operation to return
the full :math:`K+M+1` length sequence ('full'), or a sequence with
the same size as the largest sequence starting at
:math:`w\left[-K+\left\lfloor \frac{M-1}{2}\right\rfloor \right]`
('same'), or a sequence where the values depend on all the values of
the smallest sequence ('valid'). This final option returns the
:math:`K-M+1` values :math:`w\left[M-K\right]` to
:math:`w\left[0\right]` inclusive.

The function :obj:`signal.correlate` can also take arbitrary
:math:`N`-dimensional arrays as input and return the
:math:`N`-dimensional correlation of the two arrays on output.

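Because only the sign of the index changes, the full cross-correlation equals the convolution of one signal with the time reverse of the other. A small pure-NumPy sketch (the signal values are arbitrary) checks this against the defining sum:

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0])   # nonzero on [0, K], K = 3
x = np.array([0.5, 1.0, 2.0])        # nonzero on [0, M], M = 2
K, M = len(y) - 1, len(x) - 1

# w[n] for n = -K ... M, computed as x convolved with the reversed y
w = np.convolve(x, y[::-1], mode="full")

# brute-force evaluation of w[n] = sum_k y[k] x[n+k] as a check
w_direct = [sum(y[k] * x[n + k] for k in range(K + 1) if 0 <= n + k <= M)
            for n in range(-K, M + 1)]
ok = np.allclose(w, w_direct)
```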
When :math:`N=2,` :obj:`signal.correlate` and/or
:obj:`signal.convolve` can be used to construct arbitrary image
filters to perform actions such as blurring, enhancing, and
edge-detection for an image.

Convolution is mainly used for filtering when one of the signals is
much smaller than the other (:math:`K\gg M`); otherwise linear
filtering is more easily accomplished in the frequency domain (see
Fourier Transforms).

Difference-equation filtering
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A general class of linear one-dimensional filters (that includes
convolution filters) consists of the filters described by the
difference equation

.. math::
   :nowrap:

   \[ \sum_{k=0}^{N}a_{k}y\left[n-k\right]=\sum_{k=0}^{M}b_{k}x\left[n-k\right]\]

where :math:`x\left[n\right]` is the input sequence and
:math:`y\left[n\right]` is the output sequence. If we assume initial
rest so that :math:`y\left[n\right]=0` for :math:`n<0`, then this
kind of filter can be implemented using convolution. However, the
convolution filter sequence :math:`h\left[n\right]` could be infinite
if :math:`a_{k}\neq0` for :math:`k\geq1.` In addition, this general
class of linear filter allows initial conditions to be placed on
:math:`y\left[n\right]` for :math:`n<0`, resulting in a filter that
cannot be expressed using convolution.

The difference equation filter can be thought of as finding
:math:`y\left[n\right]` recursively in terms of its previous values

.. math::
   :nowrap:

   \[ a_{0}y\left[n\right]=-a_{1}y\left[n-1\right]-\cdots-a_{N}y\left[n-N\right]+b_{0}x\left[n\right]+\cdots+b_{M}x\left[n-M\right].\]

Often :math:`a_{0}=1` is chosen for normalization. The implementation
in SciPy of this general difference equation filter is a little more
complicated than would be implied by the previous equation. It is
implemented so that only one signal needs to be delayed. The actual
implementation equations are (assuming :math:`a_{0}=1` )

.. math::
   :nowrap:

   \begin{eqnarray*} y\left[n\right] & = & b_{0}x\left[n\right]+z_{0}\left[n-1\right]\\ z_{0}\left[n\right] & = & b_{1}x\left[n\right]+z_{1}\left[n-1\right]-a_{1}y\left[n\right]\\ z_{1}\left[n\right] & = & b_{2}x\left[n\right]+z_{2}\left[n-1\right]-a_{2}y\left[n\right]\\ \vdots & \vdots & \vdots\\ z_{K-2}\left[n\right] & = & b_{K-1}x\left[n\right]+z_{K-1}\left[n-1\right]-a_{K-1}y\left[n\right]\\ z_{K-1}\left[n\right] & = & b_{K}x\left[n\right]-a_{K}y\left[n\right],\end{eqnarray*}

where :math:`K=\max\left(N,M\right).` Note that :math:`b_{K}=0` if
:math:`K>M` and :math:`a_{K}=0` if :math:`K>N.` In this way, the
output at time :math:`n` depends only on the input at time :math:`n`
and the value of :math:`z_{0}` at the previous time. This can always
be calculated as long as the :math:`K` values
:math:`z_{0}\left[n-1\right]\ldots z_{K-1}\left[n-1\right]` are
computed and stored at each time step.

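The recursion above can be written out directly. The sketch below (pure NumPy, assuming :math:`a_{0}=1`; the function name is an invention of this example) carries only the :math:`K` state values between steps, exactly as described:

```python
import numpy as np

def difference_filter(b, a, x):
    """Illustrative transposed direct-form recursion, assuming a[0] == 1."""
    K = max(len(a), len(b)) - 1
    b = np.concatenate([b, np.zeros(K + 1 - len(b))])   # b_k = 0 for k > M
    a = np.concatenate([a, np.zeros(K + 1 - len(a))])   # a_k = 0 for k > N
    z = np.zeros(K)                                     # state z_0 .. z_{K-1}
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        yn = b[0] * xn + z[0]
        # z_m[n] = b_{m+1} x[n] + z_{m+1}[n-1] - a_{m+1} y[n]
        z[:-1] = b[1:K] * xn + z[1:] - a[1:K] * yn
        z[-1] = b[K] * xn - a[K] * yn
        y[n] = yn
    return y

b = [0.2, 0.3]
a = [1.0, -0.5]
impulse = np.array([1.0, 0.0, 0.0, 0.0])
out = difference_filter(b, a, impulse)   # [0.2, 0.4, 0.2, 0.1]
```

The impulse response agrees with iterating the difference equation :math:`y[n] = 0.5\,y[n-1] + 0.2\,x[n] + 0.3\,x[n-1]` by hand; :obj:`signal.lfilter` performs the same recursion in compiled code.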
The difference-equation filter is called using the command
:obj:`signal.lfilter` in SciPy. This command takes as inputs the
vector :math:`b,` the vector :math:`a,` a signal :math:`x` and
returns the vector :math:`y` (the same length as :math:`x` ) computed
using the equation given above. If :math:`x` is :math:`N`-dimensional,
then the filter is computed along the axis provided. If desired,
initial conditions providing the values of
:math:`z_{0}\left[-1\right]` to :math:`z_{K-1}\left[-1\right]` can be
provided, or else it will be assumed that they are all zero. If
initial conditions are provided, then the final conditions on the
intermediate variables are also returned. These could be used, for
example, to restart the calculation in the same state.

Sometimes it is more convenient to express the initial conditions in
terms of the signals :math:`x\left[n\right]` and
:math:`y\left[n\right].` In other words, perhaps you have the values
of :math:`x\left[-M\right]` to :math:`x\left[-1\right]` and the values
of :math:`y\left[-N\right]` to :math:`y\left[-1\right]` and would like
to determine what values of :math:`z_{m}\left[-1\right]` should be
delivered as initial conditions to the difference-equation filter. It
is not difficult to show that for :math:`0\leq m<K,`

.. math::
   :nowrap:

   \[ z_{m}\left[n\right]=\sum_{p=0}^{K-m-1}\left(b_{m+p+1}x\left[n-p\right]-a_{m+p+1}y\left[n-p\right]\right).\]

Using this formula we can find the initial condition vector
:math:`z_{0}\left[-1\right]` to :math:`z_{K-1}\left[-1\right]` given
initial conditions on :math:`y` (and :math:`x` ). The command
:obj:`signal.lfiltic` performs this function.

Other filters
^^^^^^^^^^^^^

The signal processing package provides many more filters as well.

Median Filter
"""""""""""""

A median filter is commonly applied when noise is markedly
non-Gaussian or when it is desired to preserve edges. The median
filter works by sorting all of the array pixel values in a rectangular
region surrounding the point of interest. The sample median of this
list of neighborhood pixel values is used as the value for the output
array. The sample median is the middle array value in a sorted list of
neighborhood values. If there are an even number of elements in the
neighborhood, then the average of the middle two values is used as the
median. A general purpose median filter that works on N-dimensional
arrays is :obj:`signal.medfilt`. A specialized version that works
only for two-dimensional arrays is available as
:obj:`signal.medfilt2d`.

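A minimal one-dimensional sketch (the input values are arbitrary; note that ``medfilt`` zero-pads at the boundaries):

```python
import numpy as np
from scipy import signal

x = np.array([2.0, 80.0, 6.0, 3.0])         # one impulsive outlier at index 1
filtered = signal.medfilt(x, kernel_size=3)  # the outlier is suppressed
```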
Order Filter
""""""""""""

A median filter is a specific example of a more general class of
filters called order filters. To compute the output at a particular
pixel, all order filters use the array values in a region surrounding
that pixel. These array values are sorted and then one of them is
selected as the output value. For the median filter, the sample median
of the list of array values is used as the output. A general order
filter allows the user to select which of the sorted values will be
used as the output. So, for example, one could choose to pick the
maximum in the list or the minimum. The order filter takes an
additional argument besides the input array and the region mask; this
argument specifies which of the elements in the sorted list of
neighborhood array values should be used as the output. The command to
perform an order filter is :obj:`signal.order_filter`.

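A small sketch with arbitrary values, selecting the maximum of each 3-sample neighborhood:

```python
import numpy as np
from scipy import signal

x = np.array([2.0, 8.0, 6.0, 3.0, 5.0])
domain = np.ones(3)           # 3-sample neighborhood mask
# rank 0 selects the minimum of each sorted neighborhood, rank 2 the
# maximum; rank 1 would reproduce the 3-point median filter
maxfilt = signal.order_filter(x, domain, 2)
```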
Wiener filter
"""""""""""""

The Wiener filter is a simple deblurring filter for denoising images.
This is not the Wiener filter commonly described in image
reconstruction problems but instead it is a simple, local-mean filter.
Let :math:`x` be the input signal; then the output is

.. math::
   :nowrap:

   \[ y=\left\{ \begin{array}{cc} \frac{\sigma^{2}}{\sigma_{x}^{2}}m_{x}+\left(1-\frac{\sigma^{2}}{\sigma_{x}^{2}}\right)x & \sigma_{x}^{2}\geq\sigma^{2},\\ m_{x} & \sigma_{x}^{2}<\sigma^{2},\end{array}\right.\]

where :math:`m_{x}` is the local estimate of the mean and
:math:`\sigma_{x}^{2}` is the local estimate of the variance. The
window for these estimates is an optional input parameter (default is
:math:`3\times3` ). The parameter :math:`\sigma^{2}` is a threshold
noise parameter. If :math:`\sigma` is not given, then it is estimated
as the average of the local variances.

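The formula translates almost directly into code. Below is a one-dimensional pure-NumPy sketch (the window handling and the helper name are choices of this example; :obj:`signal.wiener` provides the real implementation):

```python
import numpy as np

def wiener1d(x, size=3, noise=None):
    kernel = np.ones(size) / size
    m = np.convolve(x, kernel, mode="same")               # local mean m_x
    v = np.convolve(x * x, kernel, mode="same") - m * m   # local variance
    if noise is None:
        noise = v.mean()          # estimate sigma^2 from the local variances
    # factor is 0 wherever the local variance falls at or below the noise
    # power, so those samples are replaced by the local mean
    factor = 1.0 - noise / np.maximum(v, noise)
    return m + factor * (x - m)

x = np.array([1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0])
smoothed = wiener1d(x, size=3, noise=100.0)   # heavy noise: pure local mean
```

With a very large noise threshold every sample falls below it, so the output reduces to the running local mean, matching the second case of the formula.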

Hilbert filter
""""""""""""""

The Hilbert transform constructs the complex-valued analytic signal
from a real signal. For example, if :math:`x=\cos\omega n` then
:math:`y=\textrm{hilbert}\left(x\right)` would return (except near the
edges) :math:`y=\exp\left(j\omega n\right).` In the frequency domain,
the Hilbert transform performs

.. math::
   :nowrap:

   \[ Y=X\cdot H\]

where :math:`H` is :math:`2` for positive frequencies, :math:`0` for
negative frequencies and :math:`1` at zero frequency.

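This multiplier is easy to apply with the FFT. The sketch below is pure NumPy, using a frequency that falls on an exact DFT bin so there are no edge effects (:obj:`signal.hilbert` performs the same construction); it recovers :math:`\exp(j\omega n)` from :math:`\cos(\omega n)`:

```python
import numpy as np

N = 256
n = np.arange(N)
w = 2 * np.pi * 8 / N            # an exact DFT bin frequency
x = np.cos(w * n)

H = np.zeros(N)
H[0] = 1.0                       # zero frequency
H[1:N // 2] = 2.0                # positive frequencies
H[N // 2] = 1.0                  # Nyquist bin (even N); negatives stay 0
analytic = np.fft.ifft(np.fft.fft(x) * H)

ok = np.allclose(analytic, np.exp(1j * w * n))
```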
.. XXX: TODO
..
.. Detrend
.. """""""
..
.. Filter design
.. -------------
..
..
.. Finite-impulse response design
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Infinite-impulse response design
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Analog filter frequency response
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Digital filter frequency response
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Linear Time-Invariant Systems
.. -----------------------------
..
..
.. LTI Object
.. ^^^^^^^^^^
..
..
.. Continuous-Time Simulation
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Step response
.. ^^^^^^^^^^^^^
..
..
.. Impulse response
.. ^^^^^^^^^^^^^^^^
..
..
.. Input/Output
.. ============
..
..
.. Binary
.. ------
..
..
.. Arbitrary binary input and output (fopen)
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Read and write Matlab .mat files
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Saving workspace
.. ^^^^^^^^^^^^^^^^
..
..
.. Text-file
.. ---------
..
..
.. Read text-files (read_array)
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Write a text-file (write_array)
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Fourier Transforms
.. ==================
..
..
.. One-dimensional
.. ---------------
..
..
.. Two-dimensional
.. ---------------
..
..
.. N-dimensional
.. -------------
..
..
.. Shifting
.. --------
..
..
.. Sample frequencies
.. ------------------
..
..
.. Hilbert transform
.. -----------------
..
..
.. Tilbert transform
.. -----------------
Special functions (:mod:`scipy.special`)
========================================

.. sectionauthor:: Travis E. Oliphant

.. currentmodule:: scipy.special

The main feature of the :mod:`scipy.special` package is the definition
of numerous special functions of mathematical physics. Available
functions include airy, elliptic, bessel, gamma, beta, hypergeometric,
parabolic cylinder, mathieu, spheroidal wave, struve, and
kelvin. There are also some low-level stats functions that are not
intended for general use, as an easier interface to these functions is
provided by the ``stats`` module. Most of these functions can take
array arguments and return array results following the same
broadcasting rules as other math functions in Numerical Python. Many
of these functions also accept complex numbers as input. For a
complete list of the available functions with a one-line description,
type ``help(special)``. Each function also has its own
documentation accessible using help. If you don't see a function you
need, consider writing it and contributing it to the library. You can
write the function in either C, Fortran, or Python. Look in the source
code of the library for examples of each of these kinds of functions.
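As a quick illustration (the particular functions chosen here are arbitrary), the gamma function broadcasts over array input like other ufuncs:

```python
import numpy as np
from scipy import special

vals = special.gamma(np.array([1.0, 2.0, 5.0]))   # gamma(n) = (n-1)!
root_pi = special.gamma(0.5)                      # gamma(1/2) = sqrt(pi)
```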
Statistics
==========

.. sectionauthor:: Travis E. Oliphant

Introduction
------------

SciPy has a tremendous number of basic statistics routines, and more
can easily be added by the end user (if you create one, please
contribute it). All of the statistics functions are located in the
sub-package :mod:`scipy.stats` and a fairly complete listing of these
functions can be obtained using ``info(stats)``.

Random Variables
^^^^^^^^^^^^^^^^

There are two general distribution classes that have been implemented
for encapsulating
:ref:`continuous random variables <continuous-random-variables>`
and
:ref:`discrete random variables <discrete-random-variables>`.
Over 80 continuous random variables and 10 discrete random
variables have been implemented using these classes. The list of the
random variables available is in the docstring for the stats
subpackage.

Note: The following is work in progress

Distributions
-------------

First, some imports:

>>> import numpy as np
>>> from scipy import stats
>>> import warnings
>>> warnings.simplefilter('ignore', DeprecationWarning)

We can obtain the list of available distributions through introspection:

>>> dist_continu = [d for d in dir(stats) if
...                 isinstance(getattr(stats, d), stats.rv_continuous)]
>>> dist_discrete = [d for d in dir(stats) if
...                  isinstance(getattr(stats, d), stats.rv_discrete)]
>>> print 'number of continuous distributions:', len(dist_continu)
number of continuous distributions: 84
>>> print 'number of discrete distributions:  ', len(dist_discrete)
number of discrete distributions:   12

Distributions can be used in one of two ways, either by passing all
distribution parameters to each method call or by freezing the
parameters for the instance of the distribution. As an example, we can
get the median of the distribution by using the percent point
function, ppf, which is the inverse of the cdf:

>>> print stats.nct.ppf(0.5, 10, 2.5)
2.56880722561
>>> my_nct = stats.nct(10, 2.5)
>>> print my_nct.ppf(0.5)
2.56880722561

``help(stats.nct)`` prints the complete docstring of the
distribution. Instead we can print just some basic information::

    >>> print stats.nct.extradoc  # contains the distribution-specific docs
    Non-central Student T distribution

                              df**(df/2) * gamma(df+1)
    nct.pdf(x,df,nc) = --------------------------------------------------
                       2**df*exp(nc**2/2)*(df+x**2)**(df/2) * gamma(df/2)
    for df > 0, nc > 0.

>>> print 'number of arguments: %d, shape parameters: %s' % (stats.nct.numargs,
...                                                          stats.nct.shapes)
number of arguments: 2, shape parameters: df,nc
>>> print 'bounds of distribution lower: %s, upper: %s' % (stats.nct.a,
...                                                        stats.nct.b)
bounds of distribution lower: -inf, upper: inf

We can list all methods and properties of the distribution with
``dir(stats.nct)``. Some of the methods are effectively private
although they are not named as such (they have no leading
underscore); for example, veccdf, xa and xb are for internal
calculation only. The main methods can be seen when we list the
methods of the frozen distribution:

>>> print dir(my_nct)  # reformatted
['__class__', '__delattr__', '__dict__', '__doc__', '__getattribute__',
'__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__',
'__repr__', '__setattr__', '__str__', '__weakref__', 'args', 'cdf', 'dist',
'entropy', 'isf', 'kwds', 'moment', 'pdf', 'pmf', 'ppf', 'rvs', 'sf', 'stats']

The main public methods are:

* rvs: Random Variates
* pdf: Probability Density Function
* cdf: Cumulative Distribution Function
* sf: Survival Function (1 - CDF)
* ppf: Percent Point Function (inverse of CDF)
* isf: Inverse Survival Function (inverse of SF)
* stats: Return mean, variance, (Fisher's) skew, or (Fisher's) kurtosis
* moment: non-central moments of the distribution

The main additional methods of the non-frozen distribution are related
to the estimation of distribution parameters:

* fit: maximum likelihood estimation of distribution parameters,
  including location and scale
* fit_loc_scale: estimation of location and scale when shape
  parameters are given
* nnlf: negative log likelihood function
* expect: calculate the expectation of a function against the pdf or pmf

All continuous distributions take `loc` and `scale` as keyword
parameters to adjust the location and scale of the distribution,
e.g. for the standard normal distribution the location is the mean and
the scale is the standard deviation. The standardized distribution for
a random variable `x` is obtained through ``(x - loc) / scale``.

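For example (a sketch using the normal distribution; any continuous distribution would behave the same way), the cdf evaluated with `loc` and `scale` agrees with the standardized cdf:

```python
import numpy as np
from scipy import stats

x = 3.0
p_shifted = stats.norm.cdf(x, loc=2.0, scale=4.0)
p_standard = stats.norm.cdf((x - 2.0) / 4.0)
# both evaluate the same probability
same = np.allclose(p_shifted, p_standard)
```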
Discrete distributions have most of the same basic methods; however,
pdf is replaced by the probability mass function `pmf`, no estimation
methods, such as fit, are available, and scale is not a valid
keyword parameter. The location parameter, keyword `loc`, can be used
to shift the distribution.

The basic methods, pdf, cdf, sf, ppf, and isf, are vectorized with
``np.vectorize``, and the usual numpy broadcasting is applied. For
example, we can calculate the critical values for the upper tail of
the t distribution for different probabilities and degrees of freedom.

>>> stats.t.isf([0.1, 0.05, 0.01], [[10], [11]])
array([[ 1.37218364,  1.81246112,  2.76376946],
       [ 1.36343032,  1.79588482,  2.71807918]])

Here, the first row contains the critical values for 10 degrees of
freedom and the second row those for 11 degrees of freedom, i.e. this
is the same as

>>> stats.t.isf([0.1, 0.05, 0.01], 10)
array([ 1.37218364,  1.81246112,  2.76376946])
>>> stats.t.isf([0.1, 0.05, 0.01], 11)
array([ 1.36343032,  1.79588482,  2.71807918])

If both probabilities and degrees of freedom have the same array
shape, then element-wise matching is used. As an example, we can
obtain the 10% tail for 10 d.o.f., the 5% tail for 11 d.o.f. and the
1% tail for 12 d.o.f. by

>>> stats.t.isf([0.1, 0.05, 0.01], [10, 11, 12])
array([ 1.37218364,  1.79588482,  2.68099799])

Performance and Remaining Issues
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The performance of the individual methods, in terms of speed, varies
widely by distribution and method. The results of a method are
obtained in one of two ways, either by explicit calculation or by a
generic algorithm that is independent of the specific distribution.
Explicit calculation requires that the method is directly specified
for the given distribution, either through analytic formulas or
through special functions in scipy.special or numpy.random for
`rvs`. These are usually relatively fast calculations. The generic
methods are used if the distribution does not specify an explicit
calculation. To define a distribution, only one of pdf or cdf is
necessary; all other methods can be derived using numeric integration
and root finding. These indirect methods can be very slow. As an
example, ``rgh = stats.gausshyper.rvs(0.5, 2, 2, 2, size=100)`` creates
random variables in a very indirect way and takes about 19 seconds
for 100 random variables on my computer, while one million random
variables from the standard normal or from the t distribution take
just above one second.

The distributions in scipy.stats have recently been corrected and
improved and gained a considerable test suite; however, a few issues
remain:

* skew and kurtosis (the 3rd and 4th moments) and entropy are not
  thoroughly tested and some coarse testing indicates that there are
  still some incorrect results left.
* the distributions have been tested over some range of parameters;
  however, in some corner ranges, a few incorrect results may remain.
* the maximum likelihood estimation in `fit` does not work with
  default starting parameters for all distributions and the user
  needs to supply good starting parameters. Also, for some
  distributions maximum likelihood estimation may inherently not be
  the best choice.

The next example shows how to build our own discrete distribution,
and more examples for the usage of the distributions are shown below
together with the statistical tests.

Example: discrete distribution rv_discrete
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In the following we use stats.rv_discrete to generate a discrete distribution
that has the probabilities of the truncated normal for the intervals
centered around the integers.

>>> npoints = 20   # number of integer support points of the distribution minus 1
>>> npointsh = npoints / 2
>>> npointsf = float(npoints)
>>> nbound = 4   # bounds for the truncated normal
>>> normbound = (1+1/npointsf) * nbound   # actual bounds of truncated normal
>>> grid = np.arange(-npointsh, npointsh+2, 1)   # integer grid
>>> gridlimitsnorm = (grid-0.5) / npointsh * nbound   # bin limits for the truncnorm
>>> gridlimits = grid - 0.5
>>> grid = grid[:-1]
>>> probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound))
>>> gridint = grid
>>> normdiscrete = stats.rv_discrete(values = (gridint,
...              np.round(probs, decimals=7)), name='normdiscrete')

From the docstring of rv_discrete:
"You can construct an arbitrary discrete rv where P{X=xk} = pk by
passing to the rv_discrete initialization method (through the values=
keyword) a tuple of sequences (xk, pk) which describes only those
values of X (xk) that occur with nonzero probability (pk)."

There are some requirements for this distribution to work. The
keyword `name` is required. The support points of the distribution
xk have to be integers. Also, I needed to limit the number of
decimals. If the last two requirements are not satisfied, an
exception may be raised or the resulting numbers may be incorrect.

After defining the distribution, we obtain access to all methods of
discrete distributions.

>>> print 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f'% \
...       normdiscrete.stats(moments = 'mvsk')
mean = -0.0000, variance = 6.3302, skew = 0.0000, kurtosis = -0.0076

>>> nd_std = np.sqrt(normdiscrete.stats(moments = 'v'))

**Generate a random sample and compare observed frequencies with probabilities**

>>> n_sample = 500
>>> np.random.seed(87655678)   # fix the seed for replicability
>>> rvs = normdiscrete.rvs(size=n_sample)
>>> rvsnd = rvs
>>> f, l = np.histogram(rvs, bins=gridlimits)
>>> sfreq = np.vstack([gridint, f, probs*n_sample]).T
>>> print sfreq
[[ -1.00000000e+01   0.00000000e+00   2.95019349e-02]
 [ -9.00000000e+00   0.00000000e+00   1.32294142e-01]
 [ -8.00000000e+00   0.00000000e+00   5.06497902e-01]
 [ -7.00000000e+00   2.00000000e+00   1.65568919e+00]
 [ -6.00000000e+00   1.00000000e+00   4.62125309e+00]
 [ -5.00000000e+00   9.00000000e+00   1.10137298e+01]
 [ -4.00000000e+00   2.60000000e+01   2.24137683e+01]
 [ -3.00000000e+00   3.70000000e+01   3.89503370e+01]
 [ -2.00000000e+00   5.10000000e+01   5.78004747e+01]
 [ -1.00000000e+00   7.10000000e+01   7.32455414e+01]
 [  0.00000000e+00   7.40000000e+01   7.92618251e+01]
 [  1.00000000e+00   8.90000000e+01   7.32455414e+01]
 [  2.00000000e+00   5.50000000e+01   5.78004747e+01]
 [  3.00000000e+00   5.00000000e+01   3.89503370e+01]
 [  4.00000000e+00   1.70000000e+01   2.24137683e+01]
 [  5.00000000e+00   1.10000000e+01   1.10137298e+01]
 [  6.00000000e+00   4.00000000e+00   4.62125309e+00]
 [  7.00000000e+00   3.00000000e+00   1.65568919e+00]
 [  8.00000000e+00   0.00000000e+00   5.06497902e-01]
 [  9.00000000e+00   0.00000000e+00   1.32294142e-01]
 [  1.00000000e+01   0.00000000e+00   2.95019349e-02]]

.. plot:: examples/normdiscr_plot1.py
   :align: center
   :include-source: 0

.. plot:: examples/normdiscr_plot2.py
   :align: center
   :include-source: 0

Next, we can test whether our sample was generated by our normdiscrete
distribution. This also verifies whether the random numbers were generated
correctly.

The chisquare test requires that there are a minimum number of observations
in each bin. We combine the tail bins into larger bins so that they contain
enough observations.

>>> f2 = np.hstack([f[:5].sum(), f[5:-5], f[-5:].sum()])
>>> p2 = np.hstack([probs[:5].sum(), probs[5:-5], probs[-5:].sum()])
>>> ch2, pval = stats.chisquare(f2, p2*n_sample)

>>> print 'chisquare for normdiscrete: chi2 = %6.3f pvalue = %6.4f' % (ch2, pval)
chisquare for normdiscrete: chi2 = 12.466 pvalue = 0.4090

The pvalue in this case is high, so we can be quite confident that
our random sample was actually generated by the distribution.

Analysing One Sample
--------------------

First, we create some random variables. We set a seed so that in each run
we get identical results to look at. As an example we take a sample from
the Student t distribution:

>>> np.random.seed(282629734)
>>> x = stats.t.rvs(10, size=1000)

Here, we set the required shape parameter of the t distribution, which
in statistics corresponds to the degrees of freedom, to 10. Using size=1000 means
that our sample consists of 1000 independently drawn (pseudo) random numbers.
Since we did not specify the keyword arguments `loc` and `scale`, those are
set to their default values zero and one.

Descriptive Statistics
^^^^^^^^^^^^^^^^^^^^^^

`x` is a numpy array, and we have direct access to all array methods, e.g.

>>> print x.max(), x.min()   # equivalent to np.max(x), np.min(x)
5.26327732981 -3.78975572422
>>> print x.mean(), x.var()   # equivalent to np.mean(x), np.var(x)
0.0140610663985 1.28899386208

How do the sample properties compare to their theoretical counterparts?

>>> m, v, s, k = stats.t.stats(10, moments='mvsk')
>>> n, (smin, smax), sm, sv, ss, sk = stats.describe(x)

>>> print 'distribution:',
distribution:
>>> sstr = 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f'
>>> print sstr %(m, v, s, k)
mean = 0.0000, variance = 1.2500, skew = 0.0000, kurtosis = 1.0000
>>> print 'sample:      ',
sample:
>>> print sstr %(sm, sv, ss, sk)
mean = 0.0141, variance = 1.2903, skew = 0.2165, kurtosis = 1.0556

Note: stats.describe uses the unbiased estimator for the variance, while
np.var is the biased estimator.

For our sample, the sample statistics differ by a small amount from
their theoretical counterparts.

T-test and KS-test
^^^^^^^^^^^^^^^^^^

We can use the t-test to test whether the mean of our sample differs
in a statistically significant way from the theoretical expectation.

>>> print 't-statistic = %6.3f pvalue = %6.4f' % stats.ttest_1samp(x, m)
t-statistic =  0.391 pvalue = 0.6955

The pvalue is 0.7, which means that with an alpha error of, for
example, 10%, we cannot reject the hypothesis that the sample mean
is equal to zero, the expectation of the standard t-distribution.

As an exercise, we can calculate our ttest also directly without
using the provided function, which should give us the same answer,
and so it does:

>>> tt = (sm-m)/np.sqrt(sv/float(n))   # t-statistic for mean
>>> pval = stats.t.sf(np.abs(tt), n-1)*2   # two-sided pvalue = Prob(abs(t)>tt)
>>> print 't-statistic = %6.3f pvalue = %6.4f' % (tt, pval)
t-statistic =  0.391 pvalue = 0.6955

The Kolmogorov-Smirnov test can be used to test the hypothesis that
the sample comes from the standard t-distribution

>>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x, 't', (10,))
KS-statistic D =  0.016 pvalue = 0.9606

Again, the p-value is high enough that we cannot reject the
hypothesis that the random sample really is distributed according to the
t-distribution. In real applications, we don't know what the
underlying distribution is. If we perform the Kolmogorov-Smirnov
test of our sample against the standard normal distribution, then we
also cannot reject the hypothesis that our sample was generated by the
normal distribution, given that in this example the p-value is almost 40%.

>>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x, 'norm')
KS-statistic D =  0.028 pvalue = 0.3949

However, the standard normal distribution has a variance of 1, while our
sample has a variance of 1.29. If we standardize our sample and test it
against the normal distribution, then the p-value is again large enough
that we cannot reject the hypothesis that the sample came from the
normal distribution.

>>> d, pval = stats.kstest((x-x.mean())/x.std(), 'norm')
>>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % (d, pval)
KS-statistic D =  0.032 pvalue = 0.2402

Note: The Kolmogorov-Smirnov test assumes that we test against a
distribution with given parameters. Since in the last case we
estimated mean and variance, this assumption is violated, and the
distribution of the test statistic, on which the p-value is based, is
not correct.

Tails of the distribution
^^^^^^^^^^^^^^^^^^^^^^^^^

Finally, we can check the upper tail of the distribution. We can use
the percent point function ppf, which is the inverse of the cdf
function, to obtain the critical values, or, more directly, we can use
the inverse of the survival function

>>> crit01, crit05, crit10 = stats.t.ppf([1-0.01, 1-0.05, 1-0.10], 10)
>>> print 'critical values from ppf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% (crit01, crit05, crit10)
critical values from ppf at 1%, 5% and 10%   2.7638   1.8125   1.3722
>>> print 'critical values from isf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% tuple(stats.t.isf([0.01,0.05,0.10],10))
critical values from isf at 1%, 5% and 10%   2.7638   1.8125   1.3722

>>> freq01 = np.sum(x>crit01) / float(n) * 100
>>> freq05 = np.sum(x>crit05) / float(n) * 100
>>> freq10 = np.sum(x>crit10) / float(n) * 100
>>> print 'sample %%-frequency at 1%%, 5%% and 10%% tail %8.4f %8.4f %8.4f'% (freq01, freq05, freq10)
sample %-frequency at 1%, 5% and 10% tail   1.4000   5.8000  10.5000

In all three cases, our sample has more weight in the top tail than the
underlying distribution.
We can briefly check a larger sample to see if we get a closer match. In this
case the empirical frequency is quite close to the theoretical probability,
but if we repeat this several times, the fluctuations are still pretty large.

>>> freq05l = np.sum(stats.t.rvs(10, size=10000) > crit05) / 10000.0 * 100
>>> print 'larger sample %%-frequency at 5%% tail %8.4f'% freq05l
larger sample %-frequency at 5% tail   4.8000

We can also compare it with the tail of the normal distribution, which
has less weight in the tails:

>>> print 'tail prob. of normal at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% \
...       tuple(stats.norm.sf([crit01, crit05, crit10])*100)
tail prob. of normal at 1%, 5% and 10%   0.2857   3.4957   8.5003

The chisquare test can be used to test whether, for a finite number of bins,
the observed frequencies differ significantly from the probabilities of the
hypothesized distribution.

>>> quantiles = [0.0, 0.01, 0.05, 0.1, 1-0.10, 1-0.05, 1-0.01, 1.0]
>>> crit = stats.t.ppf(quantiles, 10)
>>> print crit
[       -Inf -2.76376946 -1.81246112 -1.37218364  1.37218364  1.81246112
  2.76376946         Inf]
>>> n_sample = x.size
>>> freqcount = np.histogram(x, bins=crit)[0]
>>> tprob = np.diff(quantiles)
>>> nprob = np.diff(stats.norm.cdf(crit))
>>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample)
>>> nch, npval = stats.chisquare(freqcount, nprob*n_sample)
>>> print 'chisquare for t: chi2 = %6.3f pvalue = %6.4f' % (tch, tpval)
chisquare for t: chi2 =  2.300 pvalue = 0.8901
>>> print 'chisquare for normal: chi2 = %6.3f pvalue = %6.4f' % (nch, npval)
chisquare for normal: chi2 = 64.605 pvalue = 0.0000

We see that the standard normal distribution is clearly rejected, while the
standard t-distribution cannot be rejected. Since the variance of our sample
differs from both standard distributions, we can again redo the test taking
the estimates for scale and location into account.

The fit method of the distributions can be used to estimate the parameters
of the distribution, and the test is repeated using the probabilities of the
estimated distribution.

>>> tdof, tloc, tscale = stats.t.fit(x)
>>> nloc, nscale = stats.norm.fit(x)
>>> tprob = np.diff(stats.t.cdf(crit, tdof, loc=tloc, scale=tscale))
>>> nprob = np.diff(stats.norm.cdf(crit, loc=nloc, scale=nscale))
>>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample)
>>> nch, npval = stats.chisquare(freqcount, nprob*n_sample)
>>> print 'chisquare for t: chi2 = %6.3f pvalue = %6.4f' % (tch, tpval)
chisquare for t: chi2 =  1.577 pvalue = 0.9542
>>> print 'chisquare for normal: chi2 = %6.3f pvalue = %6.4f' % (nch, npval)
chisquare for normal: chi2 = 11.084 pvalue = 0.0858

Taking account of the estimated parameters, we can still reject the
hypothesis that our sample came from a normal distribution (at the 10% level,
since the p-value is 0.0858), but again, with a p-value of 0.95, we cannot
reject the t distribution.

Special tests for normal distributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Since the normal distribution is the most common distribution in statistics,
there are several additional functions available to test whether a sample
could have been drawn from a normal distribution.

First, we can test whether the skew and kurtosis of our sample differ
significantly from those of a normal distribution:

>>> print 'normal skewtest teststat = %6.3f pvalue = %6.4f' % stats.skewtest(x)
normal skewtest teststat =  2.785 pvalue = 0.0054
>>> print 'normal kurtosistest teststat = %6.3f pvalue = %6.4f' % stats.kurtosistest(x)
normal kurtosistest teststat =  4.757 pvalue = 0.0000

These two tests are combined in the normality test

>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(x)
normaltest teststat = 30.379 pvalue = 0.0000

In all three tests the p-values are very low and we can reject the hypothesis
that our sample has the skew and kurtosis of the normal distribution.

Since skew and kurtosis of our sample are based on central moments, we get
exactly the same results if we test the standardized sample:

>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % \
...       stats.normaltest((x-x.mean())/x.std())
normaltest teststat = 30.379 pvalue = 0.0000

Because normality is rejected so strongly, we can check whether the
normaltest gives reasonable results for other cases:

>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(stats.t.rvs(10, size=100))
normaltest teststat =  4.698 pvalue = 0.0955
>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(stats.norm.rvs(size=1000))
normaltest teststat =  0.613 pvalue = 0.7361

When testing for normality of a small sample of t-distributed observations
and a large sample of normally distributed observations, in neither case
can we reject the null hypothesis that the sample comes from a normal
distribution. In the first case this is because the test is not powerful
enough to distinguish a t-distributed from a normally distributed random
variable in a small sample.

Comparing two samples
---------------------

In the following, we are given two samples, which can come either from the
same or from different distributions, and we want to test whether these
samples have the same statistical properties.

Comparing means
^^^^^^^^^^^^^^^

Test with samples with identical means:

>>> rvs1 = stats.norm.rvs(loc=5, scale=10, size=500)
>>> rvs2 = stats.norm.rvs(loc=5, scale=10, size=500)
>>> stats.ttest_ind(rvs1, rvs2)
(-0.54890361750888583, 0.5831943748663857)

Test with samples with different means:

>>> rvs3 = stats.norm.rvs(loc=8, scale=10, size=500)
>>> stats.ttest_ind(rvs1, rvs3)
(-4.5334142901750321, 6.507128186505895e-006)

Kolmogorov-Smirnov test for two samples ks_2samp
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For the example where both samples are drawn from the same distribution,
we cannot reject the null hypothesis, since the pvalue is high

>>> stats.ks_2samp(rvs1, rvs2)
(0.025999999999999995, 0.99541195173064878)

In the second example, with different location, i.e. means, we can
reject the null hypothesis, since the pvalue is below 1%

>>> stats.ks_2samp(rvs1, rvs3)
(0.11399999999999999, 0.0027132103661283141)
.. _discrete-random-variables:


==================================
Discrete Statistical Distributions
==================================

Discrete random variables take on only a countable number of values.
The commonly used distributions are included in SciPy and described in
this document. Each discrete distribution can take one extra integer
parameter: :math:`L.` The relationship between the general distribution
:math:`p` and the standard distribution :math:`p_{0}` is

.. math::
   :nowrap:

   \[ p\left(x\right)=p_{0}\left(x-L\right)\]

which allows for shifting of the input. When a distribution generator
is initialized, the discrete distribution can either specify the
beginning and ending (integer) values :math:`a` and :math:`b` which must be such that

.. math::
   :nowrap:

   \[ p_{0}\left(x\right)=0\quad x<a\textrm{ or }x>b\]

in which case, it is assumed that the pdf function is specified on the
integers :math:`a+mk\leq b` where :math:`k` is a non-negative integer ( :math:`0,1,2,\ldots` ) and :math:`m` is a positive integer multiplier. Alternatively, the two lists :math:`x_{k}` and :math:`p\left(x_{k}\right)` can be provided directly, in which case a dictionary is set up
internally to evaluate probabilities and generate random variates.

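The shift relation :math:`p\left(x\right)=p_{0}\left(x-L\right)` corresponds to
the ``loc`` keyword of the discrete distributions; a small sketch using the
Poisson distribution:

```python
from scipy import stats

# Shifting by an integer L gives p(x) = p0(x - L): the pmf of a Poisson
# distribution with loc=2, evaluated at k=5, equals the standard pmf at k-2=3.
shifted = stats.poisson.pmf(5, 3, loc=2)
standard = stats.poisson.pmf(3, 3)
print(abs(shifted - standard) < 1e-12)   # True
```
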
Probability Mass Function (PMF)
-------------------------------

The probability mass function of a random variable X is defined as the
probability that the random variable takes on a particular value.

.. math::
   :nowrap:

   \[ p\left(x_{k}\right)=P\left[X=x_{k}\right]\]

This is also sometimes called the probability density function,
although technically

.. math::
   :nowrap:

   \[ f\left(x\right)=\sum_{k}p\left(x_{k}\right)\delta\left(x-x_{k}\right)\]

is the probability density function for a discrete distribution [#]_ .

.. [#]
   Note that we will be using :math:`p` to represent the probability
   mass function and a parameter (a probability). The usage should be
   obvious from context.

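Summed over the whole support, the pmf values must add up to one; a quick
check using the binomial distribution as an example:

```python
import numpy as np
from scipy import stats

k = np.arange(0, 11)              # full support of a Binomial(10, 0.4)
p = stats.binom.pmf(k, 10, 0.4)   # p(x_k) = P[X = x_k]
print(np.isclose(p.sum(), 1.0))   # True: the pmf sums to one over the support
```
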
Cumulative Distribution Function (CDF)
--------------------------------------

The cumulative distribution function is

.. math::
   :nowrap:

   \[ F\left(x\right)=P\left[X\leq x\right]=\sum_{x_{k}\leq x}p\left(x_{k}\right)\]

and is also useful to be able to compute. Note that

.. math::
   :nowrap:

   \[ F\left(x_{k}\right)-F\left(x_{k-1}\right)=p\left(x_{k}\right)\]

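The relation :math:`F\left(x_{k}\right)-F\left(x_{k-1}\right)=p\left(x_{k}\right)`
can be verified numerically, again taking the binomial distribution as an
example:

```python
import numpy as np
from scipy import stats

k = np.arange(0, 11)
cdf = stats.binom.cdf(k, 10, 0.4)
# F(x_k) - F(x_{k-1}) = p(x_k); prepend F(-1) = 0 before differencing
pmf_from_cdf = np.diff(np.concatenate(([0.0], cdf)))
print(np.allclose(pmf_from_cdf, stats.binom.pmf(k, 10, 0.4)))   # True
```
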
Survival Function
-----------------

The survival function is just

.. math::
   :nowrap:

   \[ S\left(x\right)=1-F\left(x\right)=P\left[X>x\right]\]

the probability that the random variable is strictly larger than :math:`x` .

Percent Point Function (Inverse CDF)
------------------------------------

The percent point function is the inverse of the cumulative
distribution function and is

.. math::
   :nowrap:

   \[ G\left(q\right)=F^{-1}\left(q\right)\]

For discrete distributions, this must be modified for cases where
there is no :math:`x_{k}` such that :math:`F\left(x_{k}\right)=q.` In these cases we choose :math:`G\left(q\right)` to be the smallest value :math:`x_{k}=G\left(q\right)` for which :math:`F\left(x_{k}\right)\geq q` . If :math:`q=0` then we define :math:`G\left(0\right)=a-1` . This definition allows random variates to be defined in the same way
as with continuous rv's, using the inverse cdf on a uniform
distribution to generate random variates.

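The inverse-cdf construction mentioned above can be sketched directly: feeding
uniform variates through the ppf produces samples from the discrete
distribution (the seed and sample size below are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1234)
u = rng.uniform(size=10000)
# G maps each uniform draw q to the smallest k with F(k) >= q,
# which yields Poisson(3)-distributed variates
samples = stats.poisson.ppf(u, 3)
print(abs(samples.mean() - 3.0) < 0.15)   # sample mean is close to lambda = 3
```
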
Inverse survival function
-------------------------

The inverse survival function is the inverse of the survival function

.. math::
   :nowrap:

   \[ Z\left(\alpha\right)=S^{-1}\left(\alpha\right)=G\left(1-\alpha\right)\]

and is thus the smallest non-negative integer :math:`k` for which :math:`F\left(k\right)\geq1-\alpha` or the smallest non-negative integer :math:`k` for which :math:`S\left(k\right)\leq\alpha.`

Hazard functions
----------------

If desired, the hazard function and the cumulative hazard function
could be defined as

.. math::
   :nowrap:

   \[ h\left(x_{k}\right)=\frac{p\left(x_{k}\right)}{1-F\left(x_{k}\right)}\]

and

.. math::
   :nowrap:

   \[ H\left(x\right)=\sum_{x_{k}\leq x}h\left(x_{k}\right)=\sum_{x_{k}\leq x}\frac{F\left(x_{k}\right)-F\left(x_{k-1}\right)}{1-F\left(x_{k}\right)}.\]

Moments
-------

Non-central moments are defined using the PDF

.. math::
   :nowrap:

   \[ \mu_{m}^{\prime}=E\left[X^{m}\right]=\sum_{k}x_{k}^{m}p\left(x_{k}\right).\]

Central moments are computed similarly, with :math:`\mu=\mu_{1}^{\prime}` :

.. math::
   :nowrap:

   \begin{eqnarray*} \mu_{m}=E\left[\left(X-\mu\right)^{m}\right] & = & \sum_{k}\left(x_{k}-\mu\right)^{m}p\left(x_{k}\right)\\ & = & \sum_{k=0}^{m}\left(-1\right)^{m-k}\left(\begin{array}{c} m\\ k\end{array}\right)\mu^{m-k}\mu_{k}^{\prime}\end{eqnarray*}

The mean is the first moment

.. math::
   :nowrap:

   \[ \mu=\mu_{1}^{\prime}=E\left[X\right]=\sum_{k}x_{k}p\left(x_{k}\right)\]

the variance is the second central moment

.. math::
   :nowrap:

   \[ \mu_{2}=E\left[\left(X-\mu\right)^{2}\right]=\sum_{x_{k}}x_{k}^{2}p\left(x_{k}\right)-\mu^{2}.\]

Skewness is defined as

.. math::
   :nowrap:

   \[ \gamma_{1}=\frac{\mu_{3}}{\mu_{2}^{3/2}}\]

while (Fisher) kurtosis is

.. math::
   :nowrap:

   \[ \gamma_{2}=\frac{\mu_{4}}{\mu_{2}^{2}}-3,\]

so that a normal distribution has a kurtosis of zero.

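These moment sums can be evaluated directly from the pmf; a short check for
the binomial case, where the mean and variance have the known closed forms
:math:`np` and :math:`np\left(1-p\right)`:

```python
import numpy as np
from scipy import stats

n, p = 10, 0.4
k = np.arange(n + 1)
pmf = stats.binom.pmf(k, n, p)

mu = np.sum(k * pmf)                      # first non-central moment: the mean
mu2 = np.sum((k - mu)**2 * pmf)           # second central moment: the variance
print(np.isclose(mu, n * p))              # True: mean of a binomial is np
print(np.isclose(mu2, n * p * (1 - p)))   # True: variance is np(1-p)
```
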
Moment generating function
--------------------------

The moment generating function is defined as

.. math::
   :nowrap:

   \[ M_{X}\left(t\right)=E\left[e^{Xt}\right]=\sum_{x_{k}}e^{x_{k}t}p\left(x_{k}\right)\]

Moments are found as the derivatives of the moment generating function
evaluated at :math:`0.`

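For instance, taking the Bernoulli mgf :math:`M\left(t\right)=1-p\left(1-e^{t}\right)`
given later in this document, the first derivative at :math:`t=0` recovers the
mean; a sketch using a central finite difference as the derivative:

```python
import numpy as np

# Bernoulli mgf M(t) = 1 - p(1 - e^t); its derivative at t = 0 is the
# first moment, estimated here by a central difference of step h.
p = 0.3
M = lambda t: 1 - p * (1 - np.exp(t))
h = 1e-6
first_moment = (M(h) - M(-h)) / (2 * h)
print(abs(first_moment - p) < 1e-6)   # True: M'(0) equals the mean p
```
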
Fitting data
------------

To fit data to a distribution, maximizing the likelihood function is
common. Alternatively, some distributions have well-known minimum
variance unbiased estimators. These will be chosen by default, but the
likelihood function will always be available for minimizing.

If :math:`f_{i}\left(k;\boldsymbol{\theta}\right)` is the PDF of a random-variable where :math:`\boldsymbol{\theta}` is a vector of parameters ( *e.g.* :math:`L` and :math:`S` ), then for a collection of :math:`N` independent samples from this distribution, the joint distribution of the
random vector :math:`\mathbf{k}` is

.. math::
   :nowrap:

   \[ f\left(\mathbf{k};\boldsymbol{\theta}\right)=\prod_{i=1}^{N}f_{i}\left(k_{i};\boldsymbol{\theta}\right).\]

The maximum likelihood estimate of the parameters :math:`\boldsymbol{\theta}` are the parameters which maximize this function with :math:`\mathbf{k}` fixed and given by the data:

.. math::
   :nowrap:

   \begin{eqnarray*} \hat{\boldsymbol{\theta}} & = & \arg\max_{\boldsymbol{\theta}}f\left(\mathbf{k};\boldsymbol{\theta}\right)\\ & = & \arg\min_{\boldsymbol{\theta}}l_{\mathbf{k}}\left(\boldsymbol{\theta}\right).\end{eqnarray*}

where

.. math::
   :nowrap:

   \begin{eqnarray*} l_{\mathbf{k}}\left(\boldsymbol{\theta}\right) & = & -\sum_{i=1}^{N}\log f\left(k_{i};\boldsymbol{\theta}\right)\\ & = & -N\overline{\log f\left(k_{i};\boldsymbol{\theta}\right)}\end{eqnarray*}

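The minimization of :math:`l_{\mathbf{k}}\left(\boldsymbol{\theta}\right)` can
be sketched for the Poisson distribution, whose MLE also has the closed form
:math:`\hat{\lambda}=\overline{k}` (sample values and bounds below are
arbitrary choices for the illustration):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
data = rng.poisson(4.0, size=1000)

# l_k(theta) = -sum_i log f(k_i; theta), minimized over theta
def nll(lam):
    return -np.sum(stats.poisson.logpmf(data, lam))

res = optimize.minimize_scalar(nll, bounds=(0.1, 20.0), method='bounded')
# for the Poisson the MLE has the closed form lambda_hat = mean(k)
print(abs(res.x - data.mean()) < 1e-3)   # True
```
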
Standard notation for mean
--------------------------

We will use

.. math::
   :nowrap:

   \[ \overline{y\left(\mathbf{x}\right)}=\frac{1}{N}\sum_{i=1}^{N}y\left(x_{i}\right)\]

where :math:`N` should be clear from context.

Combinations
------------

Note that

.. math::
   :nowrap:

   \[ k!=k\cdot\left(k-1\right)\cdot\left(k-2\right)\cdot\cdots\cdot1=\Gamma\left(k+1\right)\]

and has special cases of

.. math::
   :nowrap:

   \begin{eqnarray*} 0! & \equiv & 1\\ k! & \equiv & 0\quad k<0\end{eqnarray*}

and

.. math::
   :nowrap:

   \[ \left(\begin{array}{c} n\\ k\end{array}\right)=\frac{n!}{\left(n-k\right)!k!}.\]

If :math:`n<0` or :math:`k<0` or :math:`k>n`, we define :math:`\left(\begin{array}{c} n\\ k\end{array}\right)=0` .

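Both identities can be checked with the standard library (``math.comb``
requires Python 3.8+):

```python
from math import comb, factorial, gamma

# k! = Gamma(k + 1)
print(factorial(5) == round(gamma(6)))   # True: 120 == 120
# the binomial coefficient n! / ((n-k)! k!)
print(comb(10, 3) == factorial(10) // (factorial(7) * factorial(3)))   # True
```
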
Bernoulli
=========

A Bernoulli random variable of parameter :math:`p` takes one of only two values, :math:`X=0` or :math:`X=1` . The probability of success ( :math:`X=1` ) is :math:`p` , and the probability of failure ( :math:`X=0` ) is :math:`1-p.` It can be thought of as a binomial random variable with :math:`n=1` . The PMF is :math:`p\left(k\right)=0` for :math:`k\neq0,1` and

.. math::
   :nowrap:

   \begin{eqnarray*} p\left(k;p\right) & = & \begin{cases} 1-p & k=0\\ p & k=1\end{cases}\\ F\left(x;p\right) & = & \begin{cases} 0 & x<0\\ 1-p & 0\le x<1\\ 1 & 1\leq x\end{cases}\\ G\left(q;p\right) & = & \begin{cases} 0 & 0\leq q<1-p\\ 1 & 1-p\leq q\leq1\end{cases}\\ \mu & = & p\\ \mu_{2} & = & p\left(1-p\right)\\ \gamma_{1} & = & \frac{1-2p}{\sqrt{p\left(1-p\right)}}\\ \gamma_{2} & = & \frac{1-6p\left(1-p\right)}{p\left(1-p\right)}\end{eqnarray*}

.. math::
   :nowrap:

   \[ M\left(t\right)=1-p\left(1-e^{t}\right)\]

.. math::
   :nowrap:

   \[ \mu_{m}^{\prime}=p\]

.. math::
   :nowrap:

   \[ h\left[X\right]=-p\log p-\left(1-p\right)\log\left(1-p\right)\]

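The mean and variance formulas can be checked against
``scipy.stats.bernoulli``:

```python
from scipy import stats

p = 0.3
mean, var, skew, kurt = stats.bernoulli.stats(p, moments='mvsk')
print(abs(mean - p) < 1e-12)            # True: mu = p
print(abs(var - p * (1 - p)) < 1e-12)   # True: mu_2 = p(1-p)
```
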
Binomial
========

A binomial random variable with parameters :math:`\left(n,p\right)` can be described as the sum of :math:`n` independent Bernoulli random variables of parameter :math:`p;`

.. math::
   :nowrap:

   \[ Y=\sum_{i=1}^{n}X_{i}.\]

Therefore, this random variable counts the number of successes in :math:`n` independent trials of a random experiment where the probability of
success is :math:`p.`

.. math::
   :nowrap:

   \begin{eqnarray*} p\left(k;n,p\right) & = & \left(\begin{array}{c} n\\ k\end{array}\right)p^{k}\left(1-p\right)^{n-k}\,\, k\in\left\{ 0,1,\ldots n\right\} ,\\ F\left(x;n,p\right) & = & \sum_{k\leq x}\left(\begin{array}{c} n\\ k\end{array}\right)p^{k}\left(1-p\right)^{n-k}=I_{1-p}\left(n-\left\lfloor x\right\rfloor ,\left\lfloor x\right\rfloor +1\right)\quad x\geq0\end{eqnarray*}

where the incomplete beta integral is

.. math::
   :nowrap:

   \[ I_{x}\left(a,b\right)=\frac{\Gamma\left(a+b\right)}{\Gamma\left(a\right)\Gamma\left(b\right)}\int_{0}^{x}t^{a-1}\left(1-t\right)^{b-1}dt.\]

Now

.. math::
   :nowrap:

   \begin{eqnarray*} \mu & = & np\\ \mu_{2} & = & np\left(1-p\right)\\ \gamma_{1} & = & \frac{1-2p}{\sqrt{np\left(1-p\right)}}\\ \gamma_{2} & = & \frac{1-6p\left(1-p\right)}{np\left(1-p\right)}.\end{eqnarray*}

.. math::
   :nowrap:

   \[ M\left(t\right)=\left[1-p\left(1-e^{t}\right)\right]^{n}\]

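The incomplete-beta expression for the cdf can be checked numerically with
``scipy.special.betainc``, which computes the regularized incomplete beta
function :math:`I_{x}\left(a,b\right)`:

```python
from scipy import stats, special

n, p, x = 10, 0.4, 6
# F(x; n, p) = I_{1-p}(n - floor(x), floor(x) + 1)
cdf = stats.binom.cdf(x, n, p)
via_beta = special.betainc(n - x, x + 1, 1 - p)
print(abs(cdf - via_beta) < 1e-10)   # True
```
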
Boltzmann (truncated Planck)
============================

.. math::
   :nowrap:

   \begin{eqnarray*} p\left(k;N,\lambda\right) & = & \frac{1-e^{-\lambda}}{1-e^{-\lambda N}}\exp\left(-\lambda k\right)\quad k\in\left\{ 0,1,\ldots,N-1\right\} \\ F\left(x;N,\lambda\right) & = & \left\{ \begin{array}{cc} 0 & x<0\\ \frac{1-\exp\left[-\lambda\left(\left\lfloor x\right\rfloor +1\right)\right]}{1-\exp\left(-\lambda N\right)} & 0\leq x\leq N-1\\ 1 & x\geq N-1\end{array}\right.\\ G\left(q,\lambda\right) & = & \left\lceil -\frac{1}{\lambda}\log\left[1-q\left(1-e^{-\lambda N}\right)\right]-1\right\rceil \end{eqnarray*}

Define :math:`z=e^{-\lambda}`

.. math::
   :nowrap:

   \begin{eqnarray*} \mu & = & \frac{z}{1-z}-\frac{Nz^{N}}{1-z^{N}}\\ \mu_{2} & = & \frac{z}{\left(1-z\right)^{2}}-\frac{N^{2}z^{N}}{\left(1-z^{N}\right)^{2}}\\ \gamma_{1} & = & \frac{z\left(1+z\right)\left(\frac{1-z^{N}}{1-z}\right)^{3}-N^{3}z^{N}\left(1+z^{N}\right)}{\left[z\left(\frac{1-z^{N}}{1-z}\right)^{2}-N^{2}z^{N}\right]^{3/2}}\\ \gamma_{2} & = & \frac{z\left(1+4z+z^{2}\right)\left(\frac{1-z^{N}}{1-z}\right)^{4}-N^{4}z^{N}\left(1+4z^{N}+z^{2N}\right)}{\left[z\left(\frac{1-z^{N}}{1-z}\right)^{2}-N^{2}z^{N}\right]^{2}}\end{eqnarray*}

.. math::
   :nowrap:

   \[ M\left(t\right)=\frac{1-e^{N\left(t-\lambda\right)}}{1-e^{t-\lambda}}\frac{1-e^{-\lambda}}{1-e^{-\lambda N}}\]

Planck (discrete exponential)
|
||||
=============================
|
||||
|
||||
Named Planck because of its relationship to the black-body problem he
|
||||
solved.
|
||||
|
||||
|
||||
|
||||
.. math::
|
||||
:nowrap:
|
||||
|
||||
\begin{eqnarray*} p\left(k;\lambda\right) & = & \left(1-e^{-\lambda}\right)e^{-\lambda k}\quad k\lambda\geq0\\ F\left(x;\lambda\right) & = & 1-e^{-\lambda\left(\left\lfloor x\right\rfloor +1\right)}\quad x\lambda\geq0\\ G\left(q;\lambda\right) & = & \left\lceil -\frac{1}{\lambda}\log\left[1-q\right]-1\right\rceil .\end{eqnarray*}
|
||||
|
||||
|
||||
|
||||
.. math::
|
||||
:nowrap:
|
||||
|
||||
\begin{eqnarray*} \mu & = & \frac{1}{e^{\lambda}-1}\\ \mu_{2} & = & \frac{e^{-\lambda}}{\left(1-e^{-\lambda}\right)^{2}}\\ \gamma_{1} & = & 2\cosh\left(\frac{\lambda}{2}\right)\\ \gamma_{2} & = & 4+2\cosh\left(\lambda\right)\end{eqnarray*}
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
.. math::
|
||||
:nowrap:
|
||||
|
||||
\[ M\left(t\right)=\frac{1-e^{-\lambda}}{1-e^{t-\lambda}}\]
|
||||
|
||||
|
||||
|
||||
.. math::
|
||||
:nowrap:
|
||||
|
||||
\[ h\left[X\right]=\frac{\lambda e^{-\lambda}}{1-e^{-\lambda}}-\log\left(1-e^{-\lambda}\right)\]
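The Planck mean can be verified directly from the pmf; a sketch with an arbitrary shape parameter:

```python
from math import exp

lam = 0.5
z = exp(-lam)
pmf = lambda k: (1 - z) * z**k   # p(k; lambda), support k >= 0

# sum far enough out that the geometric tail is negligible
partial = sum(k * pmf(k) for k in range(2000))
assert abs(partial - 1.0 / (exp(lam) - 1.0)) < 1e-10  # mu = 1/(e^lambda - 1)
```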


Poisson
=======

The Poisson random variable counts the number of successes in :math:`n`
independent Bernoulli trials in the limit as :math:`n\rightarrow\infty` and
:math:`p\rightarrow0`, where the probability of success in each trial is
:math:`p` and :math:`np=\lambda\geq0` is a constant. It can be used to
approximate the Binomial random variable or in its own right to count the
number of events that occur in the interval :math:`\left[0,t\right]` for a
process satisfying certain "sparsity" constraints. The functions are

.. math::
    :nowrap:

    \begin{eqnarray*} p\left(k;\lambda\right) & = & e^{-\lambda}\frac{\lambda^{k}}{k!}\quad k\geq0,\\ F\left(x;\lambda\right) & = & \sum_{n=0}^{\left\lfloor x\right\rfloor }e^{-\lambda}\frac{\lambda^{n}}{n!}=\frac{1}{\Gamma\left(\left\lfloor x\right\rfloor +1\right)}\int_{\lambda}^{\infty}t^{\left\lfloor x\right\rfloor }e^{-t}dt,\\ \mu & = & \lambda\\ \mu_{2} & = & \lambda\\ \gamma_{1} & = & \frac{1}{\sqrt{\lambda}}\\ \gamma_{2} & = & \frac{1}{\lambda}.\end{eqnarray*}

.. math::
    :nowrap:

    \[ M\left(t\right)=\exp\left[\lambda\left(e^{t}-1\right)\right].\]
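The Poisson mean and variance above can be checked numerically from the pmf; a small sketch with an arbitrary rate:

```python
from math import exp, factorial

lam = 4.0
pmf = lambda k: exp(-lam) * lam**k / factorial(k)

ks = range(120)                  # tail beyond 120 is negligible for lam = 4
mean = sum(k * pmf(k) for k in ks)
var = sum((k - lam)**2 * pmf(k) for k in ks)

assert abs(mean - lam) < 1e-9    # mu = lambda
assert abs(var - lam) < 1e-9     # mu_2 = lambda
```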


Geometric
=========

The geometric random variable with parameter :math:`p\in\left(0,1\right)` can
be defined as the number of trials required to obtain a success where the
probability of success on each trial is :math:`p`. Thus,

.. math::
    :nowrap:

    \begin{eqnarray*} p\left(k;p\right) & = & \left(1-p\right)^{k-1}p\quad k\geq1\\ F\left(x;p\right) & = & 1-\left(1-p\right)^{\left\lfloor x\right\rfloor }\quad x\geq1\\ G\left(q;p\right) & = & \left\lceil \frac{\log\left(1-q\right)}{\log\left(1-p\right)}\right\rceil \\ \mu & = & \frac{1}{p}\\ \mu_{2} & = & \frac{1-p}{p^{2}}\\ \gamma_{1} & = & \frac{2-p}{\sqrt{1-p}}\\ \gamma_{2} & = & \frac{p^{2}-6p+6}{1-p}.\end{eqnarray*}

.. math::
    :nowrap:

    \begin{eqnarray*} M\left(t\right) & = & \frac{p}{e^{-t}-\left(1-p\right)}\end{eqnarray*}
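A sketch checking the geometric mean and the CDF against a partial pmf sum (``p = 0.25`` is an arbitrary choice):

```python
p = 0.25
pmf = lambda k: (1 - p)**(k - 1) * p     # support k = 1, 2, ...
cdf = lambda x: 1 - (1 - p)**int(x)      # F(x; p)

ks = range(1, 400)                       # tail beyond 400 is negligible
mean = sum(k * pmf(k) for k in ks)
assert abs(mean - 1.0 / p) < 1e-9        # mu = 1/p
# partial sum of the pmf telescopes exactly to the CDF
assert abs(sum(pmf(k) for k in ks) - cdf(399)) < 1e-12
```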


Negative Binomial
=================

The negative binomial random variable with parameters :math:`n` and
:math:`p\in\left(0,1\right)` can be defined as the number of *extra*
independent trials (beyond :math:`n` ) required to accumulate a total of
:math:`n` successes where the probability of a success on each trial is
:math:`p.` Equivalently, this random variable is the number of failures
encountered while accumulating :math:`n` successes during independent trials
of an experiment that succeeds with probability :math:`p.` Thus,

.. math::
    :nowrap:

    \begin{eqnarray*} p\left(k;n,p\right) & = & \left(\begin{array}{c} k+n-1\\ n-1\end{array}\right)p^{n}\left(1-p\right)^{k}\quad k\geq0\\ F\left(x;n,p\right) & = & \sum_{i=0}^{\left\lfloor x\right\rfloor }\left(\begin{array}{c} i+n-1\\ i\end{array}\right)p^{n}\left(1-p\right)^{i}\quad x\geq0\\ & = & I_{p}\left(n,\left\lfloor x\right\rfloor +1\right)\quad x\geq0\\ \mu & = & n\frac{1-p}{p}\\ \mu_{2} & = & n\frac{1-p}{p^{2}}\\ \gamma_{1} & = & \frac{2-p}{\sqrt{n\left(1-p\right)}}\\ \gamma_{2} & = & \frac{p^{2}+6\left(1-p\right)}{n\left(1-p\right)}.\end{eqnarray*}

Recall that :math:`I_{p}\left(a,b\right)` is the regularized incomplete beta
function.
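A sketch verifying normalization and the mean of the negative binomial pmf (``n = 5, p = 0.4`` are arbitrary illustrative parameters):

```python
from math import comb

n, p = 5, 0.4
pmf = lambda k: comb(k + n - 1, n - 1) * p**n * (1 - p)**k

ks = range(400)                           # tail beyond 400 is negligible here
assert abs(sum(pmf(k) for k in ks) - 1.0) < 1e-12
mean = sum(k * pmf(k) for k in ks)
assert abs(mean - n * (1 - p) / p) < 1e-9  # mu = n(1-p)/p
```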


Hypergeometric
==============

The hypergeometric random variable with parameters :math:`\left(M,n,N\right)`
counts the number of "good" objects in a sample of size :math:`N` chosen
without replacement from a population of :math:`M` objects where :math:`n` is
the number of "good" objects in the total population.

.. math::
    :nowrap:

    \begin{eqnarray*} p\left(k;N,n,M\right) & = & \frac{\left(\begin{array}{c} n\\ k\end{array}\right)\left(\begin{array}{c} M-n\\ N-k\end{array}\right)}{\left(\begin{array}{c} M\\ N\end{array}\right)}\quad N-\left(M-n\right)\leq k\leq\min\left(n,N\right)\\ F\left(x;N,n,M\right) & = & \sum_{k=0}^{\left\lfloor x\right\rfloor }\frac{\left(\begin{array}{c} n\\ k\end{array}\right)\left(\begin{array}{c} M-n\\ N-k\end{array}\right)}{\left(\begin{array}{c} M\\ N\end{array}\right)},\\ \mu & = & \frac{nN}{M}\\ \mu_{2} & = & \frac{nN\left(M-n\right)\left(M-N\right)}{M^{2}\left(M-1\right)}\\ \gamma_{1} & = & \frac{\left(M-2n\right)\left(M-2N\right)}{M-2}\sqrt{\frac{M-1}{nN\left(M-N\right)\left(M-n\right)}}\\ \gamma_{2} & = & \frac{g\left(N,n,M\right)}{nN\left(M-n\right)\left(M-3\right)\left(M-2\right)\left(N-M\right)}\end{eqnarray*}

where (defining :math:`m=M-n` )

.. math::
    :nowrap:

    \begin{eqnarray*} g\left(N,n,M\right) & = & m^{3}-m^{5}+3m^{2}n-6m^{3}n+m^{4}n+3mn^{2}\\ & & -12m^{2}n^{2}+8m^{3}n^{2}+n^{3}-6mn^{3}+8m^{2}n^{3}\\ & & +mn^{4}-n^{5}-6m^{3}N+6m^{4}N+18m^{2}nN\\ & & -6m^{3}nN+18mn^{2}N-24m^{2}n^{2}N-6n^{3}N\\ & & -6mn^{3}N+6n^{4}N+6m^{2}N^{2}-6m^{3}N^{2}-24mnN^{2}\\ & & +12m^{2}nN^{2}+6n^{2}N^{2}+12mn^{2}N^{2}-6n^{3}N^{2}.\end{eqnarray*}
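A sketch checking the hypergeometric pmf (normalization follows from the Vandermonde identity) and the mean :math:`nN/M`, with arbitrary illustrative parameters:

```python
from math import comb

M, n, N = 20, 7, 12               # population, "good" objects, sample size
kmin, kmax = max(0, N - (M - n)), min(n, N)
pmf = {k: comb(n, k) * comb(M - n, N - k) / comb(M, N)
       for k in range(kmin, kmax + 1)}

assert abs(sum(pmf.values()) - 1.0) < 1e-12
mean = sum(k * q for k, q in pmf.items())
assert abs(mean - n * N / M) < 1e-12   # mu = nN/M
```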


Zipf (Zeta)
===========

A random variable has the zeta distribution (also called the Zipf
distribution) with parameter :math:`\alpha>1` if its probability mass function
is given by

.. math::
    :nowrap:

    \begin{eqnarray*} p\left(k;\alpha\right) & = & \frac{1}{\zeta\left(\alpha\right)k^{\alpha}}\quad k\geq1\end{eqnarray*}

where

.. math::
    :nowrap:

    \[ \zeta\left(\alpha\right)=\sum_{n=1}^{\infty}\frac{1}{n^{\alpha}}\]

is the Riemann zeta function. Other functions of this distribution are

.. math::
    :nowrap:

    \begin{eqnarray*} F\left(x;\alpha\right) & = & \frac{1}{\zeta\left(\alpha\right)}\sum_{k=1}^{\left\lfloor x\right\rfloor }\frac{1}{k^{\alpha}}\\ \mu & = & \frac{\zeta_{1}}{\zeta_{0}}\quad\alpha>2\\ \mu_{2} & = & \frac{\zeta_{2}\zeta_{0}-\zeta_{1}^{2}}{\zeta_{0}^{2}}\quad\alpha>3\\ \gamma_{1} & = & \frac{\zeta_{3}\zeta_{0}^{2}-3\zeta_{0}\zeta_{1}\zeta_{2}+2\zeta_{1}^{3}}{\left[\zeta_{2}\zeta_{0}-\zeta_{1}^{2}\right]^{3/2}}\quad\alpha>4\\ \gamma_{2} & = & \frac{\zeta_{4}\zeta_{0}^{3}-4\zeta_{3}\zeta_{1}\zeta_{0}^{2}+12\zeta_{2}\zeta_{1}^{2}\zeta_{0}-6\zeta_{1}^{4}-3\zeta_{2}^{2}\zeta_{0}^{2}}{\left(\zeta_{2}\zeta_{0}-\zeta_{1}^{2}\right)^{2}}.\end{eqnarray*}

.. math::
    :nowrap:

    \begin{eqnarray*} M\left(t\right) & = & \frac{\textrm{Li}_{\alpha}\left(e^{t}\right)}{\zeta\left(\alpha\right)}\end{eqnarray*}

where :math:`\zeta_{i}=\zeta\left(\alpha-i\right)` and
:math:`\textrm{Li}_{n}\left(z\right)` is the :math:`n^{\textrm{th}}`
polylogarithm function of :math:`z` defined as

.. math::
    :nowrap:

    \[ \textrm{Li}_{n}\left(z\right)\equiv\sum_{k=1}^{\infty}\frac{z^{k}}{k^{n}}\]

.. math::
    :nowrap:

    \[ \mu_{n}^{\prime}=\left.M^{\left(n\right)}\left(t\right)\right|_{t=0}=\left.\frac{\textrm{Li}_{\alpha-n}\left(e^{t}\right)}{\zeta\left(\alpha\right)}\right|_{t=0}=\frac{\zeta\left(\alpha-n\right)}{\zeta\left(\alpha\right)}\]
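A sketch checking the Zipf mean :math:`\mu=\zeta(\alpha-1)/\zeta(\alpha)` numerically for :math:`\alpha=3` (truncated zeta sums; the truncation point and tolerance are pragmatic choices):

```python
alpha = 3.0
K = 200000                        # truncation point for the zeta sums

def zeta_trunc(s):
    # Truncated Riemann zeta; the ignored tail is about K**(1-s)/(s-1).
    return sum(k**-s for k in range(1, K))

norm = zeta_trunc(alpha)
mean = sum(k**(1 - alpha) for k in range(1, K)) / norm

# zeta(2)/zeta(3) ~= 1.36843 for alpha = 3
assert abs(mean - 1.36843) < 1e-3
```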


Logarithmic (Log-Series, Series)
================================

The logarithmic distribution with parameter :math:`p` has a probability mass
function with terms proportional to the Taylor series expansion of
:math:`\log\left(1-p\right)`

.. math::
    :nowrap:

    \begin{eqnarray*} p\left(k;p\right) & = & -\frac{p^{k}}{k\log\left(1-p\right)}\quad k\geq1\\ F\left(x;p\right) & = & -\frac{1}{\log\left(1-p\right)}\sum_{k=1}^{\left\lfloor x\right\rfloor }\frac{p^{k}}{k}=1+\frac{p^{1+\left\lfloor x\right\rfloor }\Phi\left(p,1,1+\left\lfloor x\right\rfloor \right)}{\log\left(1-p\right)}\end{eqnarray*}

where

.. math::
    :nowrap:

    \[ \Phi\left(z,s,a\right)=\sum_{k=0}^{\infty}\frac{z^{k}}{\left(a+k\right)^{s}}\]

is the Lerch Transcendent. Also define :math:`r=\log\left(1-p\right)`

.. math::
    :nowrap:

    \begin{eqnarray*} \mu & = & -\frac{p}{\left(1-p\right)r}\\ \mu_{2} & = & -\frac{p\left[p+r\right]}{\left(1-p\right)^{2}r^{2}}\\ \gamma_{1} & = & -\frac{2p^{2}+3pr+\left(1+p\right)r^{2}}{r\left(p+r\right)\sqrt{-p\left(p+r\right)}}\\ \gamma_{2} & = & -\frac{6p^{3}+12p^{2}r+p\left(4p+7\right)r^{2}+\left(p^{2}+4p+1\right)r^{3}}{p\left(p+r\right)^{2}}.\end{eqnarray*}

.. math::
    :nowrap:

    \begin{eqnarray*} M\left(t\right) & = & -\frac{1}{\log\left(1-p\right)}\sum_{k=1}^{\infty}\frac{e^{tk}p^{k}}{k}\\ & = & \frac{\log\left(1-pe^{t}\right)}{\log\left(1-p\right)}\end{eqnarray*}

Thus,

.. math::
    :nowrap:

    \[ \mu_{n}^{\prime}=\left.M^{\left(n\right)}\left(t\right)\right|_{t=0}=\left.\frac{\textrm{Li}_{1-n}\left(pe^{t}\right)}{\log\left(1-p\right)}\right|_{t=0}=-\frac{\textrm{Li}_{1-n}\left(p\right)}{\log\left(1-p\right)}.\]
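A sketch checking that the log-series pmf is normalized and matches the documented mean (``p = 0.5`` is an arbitrary choice):

```python
from math import log

p = 0.5
r = log(1 - p)
pmf = lambda k: -p**k / (k * r)   # support k = 1, 2, ...

ks = range(1, 200)                # tail beyond 200 is negligible for p = 0.5
assert abs(sum(pmf(k) for k in ks) - 1.0) < 1e-12
mean = sum(k * pmf(k) for k in ks)
assert abs(mean - (-p / ((1 - p) * r))) < 1e-12   # mu = -p/((1-p) r)
```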


Discrete Uniform (randint)
==========================

The discrete uniform distribution with parameters :math:`\left(a,b\right)`
constructs a random variable that has an equal probability of being any one of
the integers in the half-open range :math:`[a,b).` If :math:`a` is not given
it is assumed to be zero and the only parameter is :math:`b.` Therefore,

.. math::
    :nowrap:

    \begin{eqnarray*} p\left(k;a,b\right) & = & \frac{1}{b-a}\quad a\leq k<b\\ F\left(x;a,b\right) & = & \frac{\left\lfloor x\right\rfloor -a}{b-a}\quad a\leq x\leq b\\ G\left(q;a,b\right) & = & \left\lceil q\left(b-a\right)+a\right\rceil \\ \mu & = & \frac{b+a-1}{2}\\ \mu_{2} & = & \frac{\left(b-a-1\right)\left(b-a+1\right)}{12}\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & -\frac{6}{5}\frac{\left(b-a\right)^{2}+1}{\left(b-a-1\right)\left(b-a+1\right)}.\end{eqnarray*}

.. math::
    :nowrap:

    \begin{eqnarray*} M\left(t\right) & = & \frac{1}{b-a}\sum_{k=a}^{b-1}e^{tk}\\ & = & \frac{e^{bt}-e^{at}}{\left(b-a\right)\left(e^{t}-1\right)}\end{eqnarray*}
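The discrete-uniform moments are easy to confirm directly; a sketch with an arbitrary support:

```python
a, b = 3, 11                      # support {3, 4, ..., 10}
ks = range(a, b)
q = 1.0 / (b - a)                 # constant pmf value

mean = sum(k * q for k in ks)
var = sum((k - mean)**2 * q for k in ks)

assert abs(mean - (b + a - 1) / 2.0) < 1e-12
assert abs(var - (b - a - 1) * (b - a + 1) / 12.0) < 1e-12
```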


Discrete Laplacian
==================

Defined over all integers for :math:`a>0`

.. math::
    :nowrap:

    \begin{eqnarray*} p\left(k\right) & = & \tanh\left(\frac{a}{2}\right)e^{-a\left|k\right|},\\ F\left(x\right) & = & \left\{ \begin{array}{cc} \frac{e^{a\left(\left\lfloor x\right\rfloor +1\right)}}{e^{a}+1} & \left\lfloor x\right\rfloor <0,\\ 1-\frac{e^{-a\left\lfloor x\right\rfloor }}{e^{a}+1} & \left\lfloor x\right\rfloor \geq0.\end{array}\right.\\ G\left(q\right) & = & \left\{ \begin{array}{cc} \left\lceil \frac{1}{a}\log\left[q\left(e^{a}+1\right)\right]-1\right\rceil & q<\frac{1}{1+e^{-a}},\\ \left\lceil -\frac{1}{a}\log\left[\left(1-q\right)\left(1+e^{a}\right)\right]\right\rceil & q\geq\frac{1}{1+e^{-a}}.\end{array}\right.\end{eqnarray*}

.. math::
    :nowrap:

    \begin{eqnarray*} M\left(t\right) & = & \tanh\left(\frac{a}{2}\right)\sum_{k=-\infty}^{\infty}e^{tk}e^{-a\left|k\right|}\\ & = & C\left(1+\sum_{k=1}^{\infty}e^{-\left(t+a\right)k}+\sum_{1}^{\infty}e^{\left(t-a\right)k}\right)\\ & = & \tanh\left(\frac{a}{2}\right)\left(1+\frac{e^{-\left(t+a\right)}}{1-e^{-\left(t+a\right)}}+\frac{e^{t-a}}{1-e^{t-a}}\right)\\ & = & \frac{\tanh\left(\frac{a}{2}\right)\sinh a}{\cosh a-\cosh t}.\end{eqnarray*}

Thus,

.. math::
    :nowrap:

    \[ \mu_{n}^{\prime}=M^{\left(n\right)}\left(0\right)=\left[1+\left(-1\right)^{n}\right]\tanh\left(\frac{a}{2}\right)\textrm{Li}_{-n}\left(e^{-a}\right)\]

where :math:`\textrm{Li}_{-n}\left(z\right)` is the polylogarithm function of
order :math:`-n` evaluated at :math:`z.`

.. math::
    :nowrap:

    \[ h\left[X\right]=-\log\left(\tanh\left(\frac{a}{2}\right)\right)+\frac{a}{\sinh a}\]
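A sketch checking normalization and the second moment of the discrete Laplacian against :math:`2\tanh(a/2)\,\textrm{Li}_{-2}(e^{-a})`, using the closed form :math:`\textrm{Li}_{-2}(z)=z(1+z)/(1-z)^{3}` (``a = 1.0`` is an arbitrary choice):

```python
from math import exp, tanh

a = 1.0
C = tanh(a / 2.0)
pmf = lambda k: C * exp(-a * abs(k))

ks = range(-300, 301)             # tail beyond |k| = 300 is negligible
assert abs(sum(pmf(k) for k in ks) - 1.0) < 1e-12

z = exp(-a)
m2 = sum(k * k * pmf(k) for k in ks)
# mu'_2 = 2 tanh(a/2) Li_{-2}(e^-a), with Li_{-2}(z) = z(1+z)/(1-z)^3
assert abs(m2 - 2 * C * z * (1 + z) / (1 - z)**3) < 1e-10
```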


Discrete Gaussian*
==================

Defined for all :math:`\mu` and :math:`\lambda>0` and all integers :math:`k`

.. math::
    :nowrap:

    \[ p\left(k;\mu,\lambda\right)=\frac{1}{Z\left(\lambda\right)}\exp\left[-\lambda\left(k-\mu\right)^{2}\right]\]

where

.. math::
    :nowrap:

    \[ Z\left(\lambda\right)=\sum_{k=-\infty}^{\infty}\exp\left[-\lambda k^{2}\right]\]

.. math::
    :nowrap:

    \begin{eqnarray*} \mu & = & \mu\\ \mu_{2} & = & -\frac{\partial}{\partial\lambda}\log Z\left(\lambda\right)\\ & = & G\left(\lambda\right)e^{-\lambda}\end{eqnarray*}

where :math:`G\left(0\right)\rightarrow\infty` and
:math:`G\left(\infty\right)\rightarrow2` with a minimum less than 2 near
:math:`\lambda=1`

.. math::
    :nowrap:

    \[ G\left(\lambda\right)=\frac{1}{Z\left(\lambda\right)}\sum_{k=-\infty}^{\infty}k^{2}\exp\left[-\lambda\left(k+1\right)\left(k-1\right)\right]\]
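A sketch checking the discrete Gaussian numerically: the mean equals :math:`\mu` (an integer :math:`\mu` keeps the pmf symmetric), and :math:`\mu_{2}=-\partial\log Z/\partial\lambda` is verified with a central finite difference. Parameter values are arbitrary illustrative choices:

```python
from math import exp, log

lam, mu = 0.5, 2                  # integer mu keeps symmetry about mu
ks = range(-200, 201)             # truncated support; tail is negligible
Z = sum(exp(-lam * k * k) for k in ks)
p = {k: exp(-lam * (k - mu)**2) / Z for k in range(mu - 200, mu + 201)}

assert abs(sum(p.values()) - 1.0) < 1e-12
mean = sum(k * q for k, q in p.items())
assert abs(mean - mu) < 1e-10

# mu_2 = -d(log Z)/d(lambda), checked with a central difference
h = 1e-5
dlogZ = (log(sum(exp(-(lam + h) * k * k) for k in ks))
         - log(sum(exp(-(lam - h) * k * k) for k in ks))) / (2 * h)
var = sum((k - mu)**2 * q for k, q in p.items())
assert abs(var + dlogZ) < 1e-6
```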
File diff suppressed because it is too large
@@ -1,19 +0,0 @@

======================================
C/C++ integration (:mod:`scipy.weave`)
======================================

.. warning::

   This documentation is work-in-progress and unorganized.

.. automodule:: scipy.weave
   :members:


.. autosummary::
   :toctree: generated/

   inline
   blitz
   ext_tools
   accelerate

@@ -1,97 +0,0 @@

-------------------------------------------------------------------------------
The files
- numpydoc.py
- autosummary.py
- autosummary_generate.py
- docscrape.py
- docscrape_sphinx.py
- phantom_import.py
have the following license:

Copyright (C) 2008 Stefan van der Walt <stefan@mentat.za.net>, Pauli Virtanen <pav@iki.fi>

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in
   the documentation and/or other materials provided with the
   distribution.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

-------------------------------------------------------------------------------
The files
- compiler_unparse.py
- comment_eater.py
- traitsdoc.py
have the following license:

This software is OSI Certified Open Source Software.
OSI Certified is a certification mark of the Open Source Initiative.

Copyright (c) 2006, Enthought, Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.
* Neither the name of Enthought, Inc. nor the names of its contributors may
  be used to endorse or promote products derived from this software without
  specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


-------------------------------------------------------------------------------
The files
- only_directives.py
- plot_directive.py
originate from Matplotlib (http://matplotlib.sf.net/) which has
the following license:

Copyright (c) 2002-2008 John D. Hunter; All Rights Reserved.

1. This LICENSE AGREEMENT is between John D. Hunter ("JDH"), and the Individual or Organization ("Licensee") accessing and otherwise using matplotlib software in source or binary form and its associated documentation.

2. Subject to the terms and conditions of this License Agreement, JDH hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use matplotlib 0.98.3 alone or in any derivative version, provided, however, that JDH's License Agreement and JDH's notice of copyright, i.e., "Copyright (c) 2002-2008 John D. Hunter; All Rights Reserved" are retained in matplotlib 0.98.3 alone or in any derivative version prepared by Licensee.

3. In the event Licensee prepares a derivative work that is based on or incorporates matplotlib 0.98.3 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to matplotlib 0.98.3.

4. JDH is making matplotlib 0.98.3 available to Licensee on an "AS IS" basis. JDH MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, JDH MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB 0.98.3 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.

5. JDH SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB 0.98.3 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING MATPLOTLIB 0.98.3, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material breach of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between JDH and Licensee. This License Agreement does not grant permission to use JDH trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party.

8. By copying, installing or otherwise using matplotlib 0.98.3, Licensee agrees to be bound by the terms and conditions of this License Agreement.

@@ -1,2 +0,0 @@

recursive-include tests *.py
include *.txt

@@ -1,52 +0,0 @@

=====================================
numpydoc -- Numpy's Sphinx extensions
=====================================

Numpy's documentation uses several custom extensions to Sphinx. These
are shipped in this ``numpydoc`` package, in case you want to make use
of them in third-party projects.

The following extensions are available:

- ``numpydoc``: support for the Numpy docstring format in Sphinx; adds
  the code description directives ``np-function``, ``np-cfunction``, etc.
  that support the Numpy docstring syntax.

- ``numpydoc.traitsdoc``: For gathering documentation about Traits attributes.

- ``numpydoc.plot_directives``: Adaptation of Matplotlib's ``plot::``
  directive. Note that this implementation may still undergo severe
  changes or eventually be deprecated.

- ``numpydoc.only_directives``: (DEPRECATED)

- ``numpydoc.autosummary``: (DEPRECATED) An ``autosummary::`` directive.
  Available in Sphinx 0.6.2 and (to-be) 1.0 as ``sphinx.ext.autosummary``,
  and the Sphinx 1.0 version is recommended over the one included in
  Numpydoc.


numpydoc
========

Numpydoc inserts a hook into Sphinx's autodoc that converts docstrings
following the Numpy/Scipy format to a form palatable to Sphinx.

Options
-------

The following options can be set in conf.py:

- numpydoc_use_plots: bool

  Whether to produce ``plot::`` directives for Examples sections that
  contain ``import matplotlib``.

- numpydoc_show_class_members: bool

  Whether to show all members of a class in the Methods and Attributes
  sections automatically.

- numpydoc_edit_link: bool (DEPRECATED -- edit your HTML template instead)

  Whether to insert an edit link after docstrings.
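For reference, a minimal ``conf.py`` fragment enabling the extension with the options above might look like this (a sketch; the option values are arbitrary choices for illustration):

```python
# Sphinx conf.py fragment (hypothetical project)
extensions = ['numpydoc']

numpydoc_use_plots = True           # emit plot:: for Examples with matplotlib
numpydoc_show_class_members = True  # auto-list Methods/Attributes members
```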

@@ -1 +0,0 @@

from numpydoc import setup

@@ -1,349 +0,0 @@

"""
===========
autosummary
===========

Sphinx extension that adds an autosummary:: directive, which can be
used to generate function/method/attribute/etc. summary lists, similar
to those output eg. by Epydoc and other API doc generation tools.

An :autolink: role is also provided.

autosummary directive
---------------------

The autosummary directive has the form::

    .. autosummary::
       :nosignatures:
       :toctree: generated/

       module.function_1
       module.function_2
       ...

and it generates an output table (containing signatures, optionally)

    ========================  =============================================
    module.function_1(args)   Summary line from the docstring of function_1
    module.function_2(args)   Summary line from the docstring
    ...
    ========================  =============================================

If the :toctree: option is specified, files matching the function names
are inserted to the toctree with the given prefix:

    generated/module.function_1
    generated/module.function_2
    ...

Note: The file names contain the module:: or currentmodule:: prefixes.

.. seealso:: autosummary_generate.py


autolink role
-------------

The autolink role functions as ``:obj:`` when the name referred can be
resolved to a Python object, and otherwise it becomes simple emphasis.
This can be used as the default role to make links 'smart'.

"""
import sys, os, posixpath, re

from docutils.parsers.rst import directives
from docutils.statemachine import ViewList
from docutils import nodes

import sphinx.addnodes, sphinx.roles
from sphinx.util import patfilter

from docscrape_sphinx import get_doc_object

import warnings
warnings.warn(
    "The numpydoc.autosummary extension can also be found as "
    "sphinx.ext.autosummary in Sphinx >= 0.6, and the version in "
    "Sphinx >= 0.7 is superior to the one in numpydoc. This numpydoc "
    "version of autosummary is no longer maintained.",
    DeprecationWarning, stacklevel=2)

def setup(app):
    app.add_directive('autosummary', autosummary_directive, True, (0, 0, False),
                      toctree=directives.unchanged,
                      nosignatures=directives.flag)
    app.add_role('autolink', autolink_role)

    app.add_node(autosummary_toc,
                 html=(autosummary_toc_visit_html, autosummary_toc_depart_noop),
                 latex=(autosummary_toc_visit_latex, autosummary_toc_depart_noop))
    app.connect('doctree-read', process_autosummary_toc)

#------------------------------------------------------------------------------
# autosummary_toc node
#------------------------------------------------------------------------------

class autosummary_toc(nodes.comment):
    pass

def process_autosummary_toc(app, doctree):
    """
    Insert items described in autosummary:: to the TOC tree, but do
    not generate the toctree:: list.

    """
    env = app.builder.env
    crawled = {}
    def crawl_toc(node, depth=1):
        crawled[node] = True
        for j, subnode in enumerate(node):
            try:
                if (isinstance(subnode, autosummary_toc)
                    and isinstance(subnode[0], sphinx.addnodes.toctree)):
                    env.note_toctree(env.docname, subnode[0])
                    continue
            except IndexError:
                continue
            if not isinstance(subnode, nodes.section):
                continue
            if subnode not in crawled:
                crawl_toc(subnode, depth+1)
    crawl_toc(doctree)

def autosummary_toc_visit_html(self, node):
    """Hide autosummary toctree list in HTML output"""
    raise nodes.SkipNode

def autosummary_toc_visit_latex(self, node):
    """Show autosummary toctree (= put the referenced pages here) in Latex"""
    pass

def autosummary_toc_depart_noop(self, node):
    pass

#------------------------------------------------------------------------------
# .. autosummary::
#------------------------------------------------------------------------------

def autosummary_directive(dirname, arguments, options, content, lineno,
                          content_offset, block_text, state, state_machine):
    """
    Pretty table containing short signatures and summaries of functions etc.

    autosummary also generates a (hidden) toctree:: node.

    """

    names = []
    names += [x.strip().split()[0] for x in content
              if x.strip() and re.search(r'^[a-zA-Z_]', x.strip()[0])]

    table, warnings, real_names = get_autosummary(names, state,
                                                  'nosignatures' in options)
    node = table

    env = state.document.settings.env
    suffix = env.config.source_suffix
    all_docnames = env.found_docs.copy()
    dirname = posixpath.dirname(env.docname)

    if 'toctree' in options:
        tree_prefix = options['toctree'].strip()
        docnames = []
        for name in names:
            name = real_names.get(name, name)

            docname = tree_prefix + name
            if docname.endswith(suffix):
                docname = docname[:-len(suffix)]
            docname = posixpath.normpath(posixpath.join(dirname, docname))
            if docname not in env.found_docs:
                warnings.append(state.document.reporter.warning(
                    'toctree references unknown document %r' % docname,
                    line=lineno))
            docnames.append(docname)

        tocnode = sphinx.addnodes.toctree()
        tocnode['includefiles'] = docnames
        tocnode['maxdepth'] = -1
        tocnode['glob'] = None
        tocnode['entries'] = [(None, docname) for docname in docnames]

        tocnode = autosummary_toc('', '', tocnode)
        return warnings + [node] + [tocnode]
    else:
        return warnings + [node]

def get_autosummary(names, state, no_signatures=False):
    """
    Generate a proper table node for autosummary:: directive.

    Parameters
    ----------
    names : list of str
        Names of Python objects to be imported and added to the table.
    document : document
        Docutils document object

    """
    document = state.document

    real_names = {}
    warnings = []

    prefixes = ['']
    prefixes.insert(0, document.settings.env.currmodule)

    table = nodes.table('')
    group = nodes.tgroup('', cols=2)
    table.append(group)
    group.append(nodes.colspec('', colwidth=10))
    group.append(nodes.colspec('', colwidth=90))
    body = nodes.tbody('')
    group.append(body)

    def append_row(*column_texts):
        row = nodes.row('')
        for text in column_texts:
            node = nodes.paragraph('')
            vl = ViewList()
            vl.append(text, '<autosummary>')
            state.nested_parse(vl, 0, node)
            try:
                if isinstance(node[0], nodes.paragraph):
                    node = node[0]
            except IndexError:
                pass
            row.append(nodes.entry('', node))
        body.append(row)

    for name in names:
        try:
            obj, real_name = import_by_name(name, prefixes=prefixes)
        except ImportError:
            warnings.append(document.reporter.warning(
                'failed to import %s' % name))
            append_row(":obj:`%s`" % name, "")
            continue

        real_names[name] = real_name

        doc = get_doc_object(obj)

        if doc['Summary']:
            title = " ".join(doc['Summary'])
        else:
            title = ""

        col1 = u":obj:`%s <%s>`" % (name, real_name)
        if doc['Signature']:
            sig = re.sub('^[^(\[]*', '', doc['Signature'].strip())
            if '=' in sig:
                # abbreviate optional arguments
                sig = re.sub(r', ([a-zA-Z0-9_]+)=', r'[, \1=', sig, count=1)
                sig = re.sub(r'\(([a-zA-Z0-9_]+)=', r'([\1=', sig, count=1)
                sig = re.sub(r'=[^,)]+,', ',', sig)
                sig = re.sub(r'=[^,)]+\)$', '])', sig)
                # shorten long strings
                sig = re.sub(r'(\[.{16,16}[^,]*?),.*?\]\)', r'\1, ...])', sig)
            else:
                sig = re.sub(r'(\(.{16,16}[^,]*?),.*?\)', r'\1, ...)', sig)
            # make signature contain non-breaking spaces
            col1 += u"\\ \u00a0" + unicode(sig).replace(u" ", u"\u00a0")
        col2 = title
        append_row(col1, col2)

    return table, warnings, real_names

def import_by_name(name, prefixes=[None]):
    """
    Import a Python object that has the given name, under one of the prefixes.

    Parameters
    ----------
    name : str
        Name of a Python object, eg. 'numpy.ndarray.view'
    prefixes : list of (str or None), optional
        Prefixes to prepend to the name (None implies no prefix).
        The first prefixed name that results to successful import is used.

    Returns
    -------
    obj
        The imported object
    name
        Name of the imported object (useful if `prefixes` was used)

    """
    for prefix in prefixes:
        try:
            if prefix:
                prefixed_name = '.'.join([prefix, name])
            else:
                prefixed_name = name
            return _import_by_name(prefixed_name), prefixed_name
        except ImportError:
            pass
    raise ImportError

def _import_by_name(name):
    """Import a Python object given its full name"""
    try:
        # try first interpret `name` as MODNAME.OBJ
        name_parts = name.split('.')
        try:
            modname = '.'.join(name_parts[:-1])
            __import__(modname)
            return getattr(sys.modules[modname], name_parts[-1])
        except (ImportError, IndexError, AttributeError):
            pass

        # ... then as MODNAME, MODNAME.OBJ1, MODNAME.OBJ1.OBJ2, ...
        last_j = 0
        modname = None
        for j in reversed(range(1, len(name_parts)+1)):
            last_j = j
            modname = '.'.join(name_parts[:j])
            try:
                __import__(modname)
            except ImportError:
                continue
            if modname in sys.modules:
                break

        if last_j < len(name_parts):
|
||||
obj = sys.modules[modname]
|
||||
for obj_name in name_parts[last_j:]:
|
||||
obj = getattr(obj, obj_name)
|
||||
return obj
|
||||
else:
|
||||
return sys.modules[modname]
|
||||
except (ValueError, ImportError, AttributeError, KeyError), e:
|
||||
raise ImportError(e)
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# :autolink: (smart default role)
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
def autolink_role(typ, rawtext, etext, lineno, inliner,
|
||||
options={}, content=[]):
|
||||
"""
|
||||
Smart linking role.
|
||||
|
||||
Expands to ":obj:`text`" if `text` is an object that can be imported;
|
||||
otherwise expands to "*text*".
|
||||
"""
|
||||
r = sphinx.roles.xfileref_role('obj', rawtext, etext, lineno, inliner,
|
||||
options, content)
|
||||
pnode = r[0][0]
|
||||
|
||||
prefixes = [None]
|
||||
#prefixes.insert(0, inliner.document.settings.env.currmodule)
|
||||
try:
|
||||
obj, name = import_by_name(pnode['reftarget'], prefixes)
|
||||
except ImportError:
|
||||
content = pnode[0]
|
||||
r[0][0] = nodes.emphasis(rawtext, content[0].astext(),
|
||||
classes=content['classes'])
|
||||
return r
|
|
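For context, the progressive name-resolution strategy used by the removed `_import_by_name` (try ever-shorter module prefixes, then walk the remaining dotted parts with `getattr`) can be sketched in modern Python 3. The function name `resolve` is illustrative, not part of the deleted file:

```python
# Sketch of the removed _import_by_name fallback logic, in Python 3:
# import the longest prefix of the dotted name that is a module, then
# resolve the leftover parts as attributes.
import importlib
import sys

def resolve(name):
    parts = name.split('.')
    for j in reversed(range(1, len(parts) + 1)):
        modname = '.'.join(parts[:j])
        try:
            importlib.import_module(modname)
        except ImportError:
            continue
        if modname in sys.modules:
            break
    else:
        # no prefix was importable
        raise ImportError(name)
    obj = sys.modules[modname]
    for attr in parts[j:]:
        obj = getattr(obj, attr)
    return obj
```

For example, `resolve('os.path.join')` imports `os.path` and then looks up the `join` attribute on it.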
@@ -1,219 +0,0 @@
#!/usr/bin/env python
r"""
autosummary_generate.py OPTIONS FILES

Generate automatic RST source files for items referred to in
autosummary:: directives.

Each generated RST file contains a single auto*:: directive which
extracts the docstring of the referred item.

Example Makefile rule::

    generate:
            ./ext/autosummary_generate.py -o source/generated source/*.rst

"""
import glob, re, inspect, os, optparse, pydoc
from autosummary import import_by_name

try:
    from phantom_import import import_phantom_module
except ImportError:
    import_phantom_module = lambda x: x

def main():
    p = optparse.OptionParser(__doc__.strip())
    p.add_option("-p", "--phantom", action="store", type="string",
                 dest="phantom", default=None,
                 help="Phantom import modules from a file")
    p.add_option("-o", "--output-dir", action="store", type="string",
                 dest="output_dir", default=None,
                 help=("Write all output files to the given directory (instead "
                       "of writing them as specified in the autosummary:: "
                       "directives)"))
    options, args = p.parse_args()

    if len(args) == 0:
        p.error("wrong number of arguments")

    if options.phantom and os.path.isfile(options.phantom):
        import_phantom_module(options.phantom)

    # read
    names = {}
    for name, loc in get_documented(args).items():
        for (filename, sec_title, keyword, toctree) in loc:
            if toctree is not None:
                path = os.path.join(os.path.dirname(filename), toctree)
                names[name] = os.path.abspath(path)

    # write
    for name, path in sorted(names.items()):
        if options.output_dir is not None:
            path = options.output_dir

        if not os.path.isdir(path):
            os.makedirs(path)

        try:
            obj, name = import_by_name(name)
        except ImportError, e:
            print "Failed to import '%s': %s" % (name, e)
            continue

        fn = os.path.join(path, '%s.rst' % name)

        if os.path.exists(fn):
            # skip
            continue

        f = open(fn, 'w')

        try:
            f.write('%s\n%s\n\n' % (name, '='*len(name)))

            if inspect.isclass(obj):
                if issubclass(obj, Exception):
                    f.write(format_modulemember(name, 'autoexception'))
                else:
                    f.write(format_modulemember(name, 'autoclass'))
            elif inspect.ismodule(obj):
                f.write(format_modulemember(name, 'automodule'))
            elif inspect.ismethod(obj) or inspect.ismethoddescriptor(obj):
                f.write(format_classmember(name, 'automethod'))
            elif callable(obj):
                f.write(format_modulemember(name, 'autofunction'))
            elif hasattr(obj, '__get__'):
                f.write(format_classmember(name, 'autoattribute'))
            else:
                f.write(format_modulemember(name, 'autofunction'))
        finally:
            f.close()

def format_modulemember(name, directive):
    parts = name.split('.')
    mod, name = '.'.join(parts[:-1]), parts[-1]
    return ".. currentmodule:: %s\n\n.. %s:: %s\n" % (mod, directive, name)

def format_classmember(name, directive):
    parts = name.split('.')
    mod, name = '.'.join(parts[:-2]), '.'.join(parts[-2:])
    return ".. currentmodule:: %s\n\n.. %s:: %s\n" % (mod, directive, name)

def get_documented(filenames):
    """
    Find out what items are documented in source/*.rst
    See `get_documented_in_lines`.

    """
    documented = {}
    for filename in filenames:
        f = open(filename, 'r')
        lines = f.read().splitlines()
        documented.update(get_documented_in_lines(lines, filename=filename))
        f.close()
    return documented

def get_documented_in_docstring(name, module=None, filename=None):
    """
    Find out what items are documented in the given object's docstring.
    See `get_documented_in_lines`.

    """
    try:
        obj, real_name = import_by_name(name)
        lines = pydoc.getdoc(obj).splitlines()
        return get_documented_in_lines(lines, module=name, filename=filename)
    except AttributeError:
        pass
    except ImportError, e:
        print "Failed to import '%s': %s" % (name, e)
    return {}

def get_documented_in_lines(lines, module=None, filename=None):
    """
    Find out what items are documented in the given lines

    Returns
    -------
    documented : dict of list of (filename, title, keyword, toctree)
        Dictionary whose keys are documented names of objects.
        The value is a list of locations where the object was documented.
        Each location is a tuple of filename, the current section title,
        the name of the directive, and the value of the :toctree: argument
        (if present) of the directive.

    """
    title_underline_re = re.compile("^[-=*_^#]{3,}\s*$")
    autodoc_re = re.compile(".. auto(function|method|attribute|class|exception|module)::\s*([A-Za-z0-9_.]+)\s*$")
    autosummary_re = re.compile(r'^\.\.\s+autosummary::\s*')
    module_re = re.compile(r'^\.\.\s+(current)?module::\s*([a-zA-Z0-9_.]+)\s*$')
    autosummary_item_re = re.compile(r'^\s+([_a-zA-Z][a-zA-Z0-9_.]*)\s*.*?')
    toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$')

    documented = {}

    current_title = []
    last_line = None
    toctree = None
    current_module = module
    in_autosummary = False

    for line in lines:
        try:
            if in_autosummary:
                m = toctree_arg_re.match(line)
                if m:
                    toctree = m.group(1)
                    continue

                if line.strip().startswith(':'):
                    continue  # skip options

                m = autosummary_item_re.match(line)
                if m:
                    name = m.group(1).strip()
                    if current_module and not name.startswith(current_module + '.'):
                        name = "%s.%s" % (current_module, name)
                    documented.setdefault(name, []).append(
                        (filename, current_title, 'autosummary', toctree))
                    continue
                if line.strip() == '':
                    continue
                in_autosummary = False

            m = autosummary_re.match(line)
            if m:
                in_autosummary = True
                continue

            m = autodoc_re.search(line)
            if m:
                name = m.group(2).strip()
                if m.group(1) == "module":
                    current_module = name
                    documented.update(get_documented_in_docstring(
                        name, filename=filename))
                elif current_module and not name.startswith(current_module+'.'):
                    name = "%s.%s" % (current_module, name)
                documented.setdefault(name, []).append(
                    (filename, current_title, "auto" + m.group(1), None))
                continue

            m = title_underline_re.match(line)
            if m and last_line:
                current_title = last_line.strip()
                continue

            m = module_re.match(line)
            if m:
                current_module = m.group(2)
                continue
        finally:
            last_line = line

    return documented

if __name__ == "__main__":
    main()
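To illustrate what the scanner above was matching, the two key regexes from `get_documented_in_lines` can be exercised on sample reST lines (the sample lines here are made up for the demonstration):

```python
# The ':toctree:' option line of an autosummary:: block, and an item
# entry inside the block, as classified by the removed scanner's regexes.
import re

toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$')
autosummary_item_re = re.compile(r'^\s+([_a-zA-Z][a-zA-Z0-9_.]*)\s*.*?')

m = toctree_arg_re.match('   :toctree: generated/')
print(m.group(1))   # generated/
m = autosummary_item_re.match('   scipy.optimize.fmin')
print(m.group(1))   # scipy.optimize.fmin
```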
@@ -1,158 +0,0 @@
from cStringIO import StringIO
import compiler
import inspect
import textwrap
import tokenize

from compiler_unparse import unparse


class Comment(object):
    """ A comment block.
    """
    is_comment = True
    def __init__(self, start_lineno, end_lineno, text):
        # int : The first line number in the block. 1-indexed.
        self.start_lineno = start_lineno
        # int : The last line number. Inclusive!
        self.end_lineno = end_lineno
        # str : The text block including '#' character but not any leading spaces.
        self.text = text

    def add(self, string, start, end, line):
        """ Add a new comment line.
        """
        self.start_lineno = min(self.start_lineno, start[0])
        self.end_lineno = max(self.end_lineno, end[0])
        self.text += string

    def __repr__(self):
        return '%s(%r, %r, %r)' % (self.__class__.__name__, self.start_lineno,
            self.end_lineno, self.text)


class NonComment(object):
    """ A non-comment block of code.
    """
    is_comment = False
    def __init__(self, start_lineno, end_lineno):
        self.start_lineno = start_lineno
        self.end_lineno = end_lineno

    def add(self, string, start, end, line):
        """ Add lines to the block.
        """
        if string.strip():
            # Only add if not entirely whitespace.
            self.start_lineno = min(self.start_lineno, start[0])
            self.end_lineno = max(self.end_lineno, end[0])

    def __repr__(self):
        return '%s(%r, %r)' % (self.__class__.__name__, self.start_lineno,
            self.end_lineno)


class CommentBlocker(object):
    """ Pull out contiguous comment blocks.
    """
    def __init__(self):
        # Start with a dummy.
        self.current_block = NonComment(0, 0)

        # All of the blocks seen so far.
        self.blocks = []

        # The index mapping lines of code to their associated comment blocks.
        self.index = {}

    def process_file(self, file):
        """ Process a file object.
        """
        for token in tokenize.generate_tokens(file.next):
            self.process_token(*token)
        self.make_index()

    def process_token(self, kind, string, start, end, line):
        """ Process a single token.
        """
        if self.current_block.is_comment:
            if kind == tokenize.COMMENT:
                self.current_block.add(string, start, end, line)
            else:
                self.new_noncomment(start[0], end[0])
        else:
            if kind == tokenize.COMMENT:
                self.new_comment(string, start, end, line)
            else:
                self.current_block.add(string, start, end, line)

    def new_noncomment(self, start_lineno, end_lineno):
        """ We are transitioning from a noncomment to a comment.
        """
        block = NonComment(start_lineno, end_lineno)
        self.blocks.append(block)
        self.current_block = block

    def new_comment(self, string, start, end, line):
        """ Possibly add a new comment.

        Only adds a new comment if this comment is the only thing on the line.
        Otherwise, it extends the noncomment block.
        """
        prefix = line[:start[1]]
        if prefix.strip():
            # Oops! Trailing comment, not a comment block.
            self.current_block.add(string, start, end, line)
        else:
            # A comment block.
            block = Comment(start[0], end[0], string)
            self.blocks.append(block)
            self.current_block = block

    def make_index(self):
        """ Make the index mapping lines of actual code to their associated
        prefix comments.
        """
        for prev, block in zip(self.blocks[:-1], self.blocks[1:]):
            if not block.is_comment:
                self.index[block.start_lineno] = prev

    def search_for_comment(self, lineno, default=None):
        """ Find the comment block just before the given line number.

        Returns None (or the specified default) if there is no such block.
        """
        if not self.index:
            self.make_index()
        block = self.index.get(lineno, None)
        text = getattr(block, 'text', default)
        return text


def strip_comment_marker(text):
    """ Strip # markers at the front of a block of comment text.
    """
    lines = []
    for line in text.splitlines():
        lines.append(line.lstrip('#'))
    text = textwrap.dedent('\n'.join(lines))
    return text


def get_class_traits(klass):
    """ Yield all of the documentation for trait definitions on a class object.
    """
    # FIXME: gracefully handle errors here or in the caller?
    source = inspect.getsource(klass)
    cb = CommentBlocker()
    cb.process_file(StringIO(source))
    mod_ast = compiler.parse(source)
    class_ast = mod_ast.node.nodes[0]
    for node in class_ast.code.nodes:
        # FIXME: handle other kinds of assignments?
        if isinstance(node, compiler.ast.Assign):
            name = node.nodes[0].name
            rhs = unparse(node.expr).strip()
            doc = strip_comment_marker(cb.search_for_comment(node.lineno, default=''))
            yield name, rhs, doc
|
|||
""" Turn compiler.ast structures back into executable python code.
|
||||
|
||||
The unparse method takes a compiler.ast tree and transforms it back into
|
||||
valid python code. It is incomplete and currently only works for
|
||||
import statements, function calls, function definitions, assignments, and
|
||||
basic expressions.
|
||||
|
||||
Inspired by python-2.5-svn/Demo/parser/unparse.py
|
||||
|
||||
fixme: We may want to move to using _ast trees because the compiler for
|
||||
them is about 6 times faster than compiler.compile.
|
||||
"""
|
||||
|
||||
import sys
|
||||
import cStringIO
|
||||
from compiler.ast import Const, Name, Tuple, Div, Mul, Sub, Add
|
||||
|
||||
def unparse(ast, single_line_functions=False):
|
||||
s = cStringIO.StringIO()
|
||||
UnparseCompilerAst(ast, s, single_line_functions)
|
||||
return s.getvalue().lstrip()
|
||||
|
||||
op_precedence = { 'compiler.ast.Power':3, 'compiler.ast.Mul':2, 'compiler.ast.Div':2,
|
||||
'compiler.ast.Add':1, 'compiler.ast.Sub':1 }
|
||||
|
||||
class UnparseCompilerAst:
|
||||
""" Methods in this class recursively traverse an AST and
|
||||
output source code for the abstract syntax; original formatting
|
||||
is disregarged.
|
||||
"""
|
||||
|
||||
#########################################################################
|
||||
# object interface.
|
||||
#########################################################################
|
||||
|
||||
def __init__(self, tree, file = sys.stdout, single_line_functions=False):
|
||||
""" Unparser(tree, file=sys.stdout) -> None.
|
||||
|
||||
Print the source for tree to file.
|
||||
"""
|
||||
self.f = file
|
||||
self._single_func = single_line_functions
|
||||
self._do_indent = True
|
||||
self._indent = 0
|
||||
self._dispatch(tree)
|
||||
self._write("\n")
|
||||
self.f.flush()
|
||||
|
||||
#########################################################################
|
||||
# Unparser private interface.
|
||||
#########################################################################
|
||||
|
||||
### format, output, and dispatch methods ################################
|
||||
|
||||
def _fill(self, text = ""):
|
||||
"Indent a piece of text, according to the current indentation level"
|
||||
if self._do_indent:
|
||||
self._write("\n"+" "*self._indent + text)
|
||||
else:
|
||||
self._write(text)
|
||||
|
||||
def _write(self, text):
|
||||
"Append a piece of text to the current line."
|
||||
self.f.write(text)
|
||||
|
||||
def _enter(self):
|
||||
"Print ':', and increase the indentation."
|
||||
self._write(": ")
|
||||
self._indent += 1
|
||||
|
||||
def _leave(self):
|
||||
"Decrease the indentation level."
|
||||
self._indent -= 1
|
||||
|
||||
def _dispatch(self, tree):
|
||||
"_dispatcher function, _dispatching tree type T to method _T."
|
||||
if isinstance(tree, list):
|
||||
for t in tree:
|
||||
self._dispatch(t)
|
||||
return
|
||||
meth = getattr(self, "_"+tree.__class__.__name__)
|
||||
if tree.__class__.__name__ == 'NoneType' and not self._do_indent:
|
||||
return
|
||||
meth(tree)
|
||||
|
||||
|
||||
#########################################################################
|
||||
# compiler.ast unparsing methods.
|
||||
#
|
||||
# There should be one method per concrete grammar type. They are
|
||||
# organized in alphabetical order.
|
||||
#########################################################################
|
||||
|
||||
def _Add(self, t):
|
||||
self.__binary_op(t, '+')
|
||||
|
||||
def _And(self, t):
|
||||
self._write(" (")
|
||||
for i, node in enumerate(t.nodes):
|
||||
self._dispatch(node)
|
||||
if i != len(t.nodes)-1:
|
||||
self._write(") and (")
|
||||
self._write(")")
|
||||
|
||||
def _AssAttr(self, t):
|
||||
""" Handle assigning an attribute of an object
|
||||
"""
|
||||
self._dispatch(t.expr)
|
||||
self._write('.'+t.attrname)
|
||||
|
||||
def _Assign(self, t):
|
||||
""" Expression Assignment such as "a = 1".
|
||||
|
||||
This only handles assignment in expressions. Keyword assignment
|
||||
is handled separately.
|
||||
"""
|
||||
self._fill()
|
||||
for target in t.nodes:
|
||||
self._dispatch(target)
|
||||
self._write(" = ")
|
||||
self._dispatch(t.expr)
|
||||
if not self._do_indent:
|
||||
self._write('; ')
|
||||
|
||||
def _AssName(self, t):
|
||||
""" Name on left hand side of expression.
|
||||
|
||||
Treat just like a name on the right side of an expression.
|
||||
"""
|
||||
self._Name(t)
|
||||
|
||||
def _AssTuple(self, t):
|
||||
""" Tuple on left hand side of an expression.
|
||||
"""
|
||||
|
||||
# _write each elements, separated by a comma.
|
||||
for element in t.nodes[:-1]:
|
||||
self._dispatch(element)
|
||||
self._write(", ")
|
||||
|
||||
# Handle the last one without writing comma
|
||||
last_element = t.nodes[-1]
|
||||
self._dispatch(last_element)
|
||||
|
||||
def _AugAssign(self, t):
|
||||
""" +=,-=,*=,/=,**=, etc. operations
|
||||
"""
|
||||
|
||||
self._fill()
|
||||
self._dispatch(t.node)
|
||||
self._write(' '+t.op+' ')
|
||||
self._dispatch(t.expr)
|
||||
if not self._do_indent:
|
||||
self._write(';')
|
||||
|
||||
def _Bitand(self, t):
|
||||
""" Bit and operation.
|
||||
"""
|
||||
|
||||
for i, node in enumerate(t.nodes):
|
||||
self._write("(")
|
||||
self._dispatch(node)
|
||||
self._write(")")
|
||||
if i != len(t.nodes)-1:
|
||||
self._write(" & ")
|
||||
|
||||
def _Bitor(self, t):
|
||||
""" Bit or operation
|
||||
"""
|
||||
|
||||
for i, node in enumerate(t.nodes):
|
||||
self._write("(")
|
||||
self._dispatch(node)
|
||||
self._write(")")
|
||||
if i != len(t.nodes)-1:
|
||||
self._write(" | ")
|
||||
|
||||
def _CallFunc(self, t):
|
||||
""" Function call.
|
||||
"""
|
||||
self._dispatch(t.node)
|
||||
self._write("(")
|
||||
comma = False
|
||||
for e in t.args:
|
||||
if comma: self._write(", ")
|
||||
else: comma = True
|
||||
self._dispatch(e)
|
||||
if t.star_args:
|
||||
if comma: self._write(", ")
|
||||
else: comma = True
|
||||
self._write("*")
|
||||
self._dispatch(t.star_args)
|
||||
if t.dstar_args:
|
||||
if comma: self._write(", ")
|
||||
else: comma = True
|
||||
self._write("**")
|
||||
self._dispatch(t.dstar_args)
|
||||
self._write(")")
|
||||
|
||||
def _Compare(self, t):
|
||||
self._dispatch(t.expr)
|
||||
for op, expr in t.ops:
|
||||
self._write(" " + op + " ")
|
||||
self._dispatch(expr)
|
||||
|
||||
def _Const(self, t):
|
||||
""" A constant value such as an integer value, 3, or a string, "hello".
|
||||
"""
|
||||
self._dispatch(t.value)
|
||||
|
||||
def _Decorators(self, t):
|
||||
""" Handle function decorators (eg. @has_units)
|
||||
"""
|
||||
for node in t.nodes:
|
||||
self._dispatch(node)
|
||||
|
||||
def _Dict(self, t):
|
||||
self._write("{")
|
||||
for i, (k, v) in enumerate(t.items):
|
||||
self._dispatch(k)
|
||||
self._write(": ")
|
||||
self._dispatch(v)
|
||||
if i < len(t.items)-1:
|
||||
self._write(", ")
|
||||
self._write("}")
|
||||
|
||||
def _Discard(self, t):
|
||||
""" Node for when return value is ignored such as in "foo(a)".
|
||||
"""
|
||||
self._fill()
|
||||
self._dispatch(t.expr)
|
||||
|
||||
def _Div(self, t):
|
||||
self.__binary_op(t, '/')
|
||||
|
||||
def _Ellipsis(self, t):
|
||||
self._write("...")
|
||||
|
||||
def _From(self, t):
|
||||
""" Handle "from xyz import foo, bar as baz".
|
||||
"""
|
||||
# fixme: Are From and ImportFrom handled differently?
|
||||
self._fill("from ")
|
||||
self._write(t.modname)
|
||||
self._write(" import ")
|
||||
for i, (name,asname) in enumerate(t.names):
|
||||
if i != 0:
|
||||
self._write(", ")
|
||||
self._write(name)
|
||||
if asname is not None:
|
||||
self._write(" as "+asname)
|
||||
|
||||
def _Function(self, t):
|
||||
""" Handle function definitions
|
||||
"""
|
||||
if t.decorators is not None:
|
||||
self._fill("@")
|
||||
self._dispatch(t.decorators)
|
||||
self._fill("def "+t.name + "(")
|
||||
defaults = [None] * (len(t.argnames) - len(t.defaults)) + list(t.defaults)
|
||||
for i, arg in enumerate(zip(t.argnames, defaults)):
|
||||
self._write(arg[0])
|
||||
if arg[1] is not None:
|
||||
self._write('=')
|
||||
self._dispatch(arg[1])
|
||||
if i < len(t.argnames)-1:
|
||||
self._write(', ')
|
||||
self._write(")")
|
||||
if self._single_func:
|
||||
self._do_indent = False
|
||||
self._enter()
|
||||
self._dispatch(t.code)
|
||||
self._leave()
|
||||
self._do_indent = True
|
||||
|
||||
def _Getattr(self, t):
|
||||
""" Handle getting an attribute of an object
|
||||
"""
|
||||
if isinstance(t.expr, (Div, Mul, Sub, Add)):
|
||||
self._write('(')
|
||||
self._dispatch(t.expr)
|
||||
self._write(')')
|
||||
else:
|
||||
self._dispatch(t.expr)
|
||||
|
||||
self._write('.'+t.attrname)
|
||||
|
||||
def _If(self, t):
|
||||
self._fill()
|
||||
|
||||
for i, (compare,code) in enumerate(t.tests):
|
||||
if i == 0:
|
||||
self._write("if ")
|
||||
else:
|
||||
self._write("elif ")
|
||||
self._dispatch(compare)
|
||||
self._enter()
|
||||
self._fill()
|
||||
self._dispatch(code)
|
||||
self._leave()
|
||||
self._write("\n")
|
||||
|
||||
if t.else_ is not None:
|
||||
self._write("else")
|
||||
self._enter()
|
||||
self._fill()
|
||||
self._dispatch(t.else_)
|
||||
self._leave()
|
||||
self._write("\n")
|
||||
|
||||
def _IfExp(self, t):
|
||||
self._dispatch(t.then)
|
||||
self._write(" if ")
|
||||
self._dispatch(t.test)
|
||||
|
||||
if t.else_ is not None:
|
||||
self._write(" else (")
|
||||
self._dispatch(t.else_)
|
||||
self._write(")")
|
||||
|
||||
def _Import(self, t):
|
||||
""" Handle "import xyz.foo".
|
||||
"""
|
||||
self._fill("import ")
|
||||
|
||||
for i, (name,asname) in enumerate(t.names):
|
||||
if i != 0:
|
||||
self._write(", ")
|
||||
self._write(name)
|
||||
if asname is not None:
|
||||
self._write(" as "+asname)
|
||||
|
||||
def _Keyword(self, t):
|
||||
""" Keyword value assignment within function calls and definitions.
|
||||
"""
|
||||
self._write(t.name)
|
||||
self._write("=")
|
||||
self._dispatch(t.expr)
|
||||
|
||||
def _List(self, t):
|
||||
self._write("[")
|
||||
for i,node in enumerate(t.nodes):
|
||||
self._dispatch(node)
|
||||
if i < len(t.nodes)-1:
|
||||
self._write(", ")
|
||||
self._write("]")
|
||||
|
||||
def _Module(self, t):
|
||||
if t.doc is not None:
|
||||
self._dispatch(t.doc)
|
||||
self._dispatch(t.node)
|
||||
|
||||
def _Mul(self, t):
|
||||
self.__binary_op(t, '*')
|
||||
|
||||
def _Name(self, t):
|
||||
self._write(t.name)
|
||||
|
||||
def _NoneType(self, t):
|
||||
self._write("None")
|
||||
|
||||
def _Not(self, t):
|
||||
self._write('not (')
|
||||
self._dispatch(t.expr)
|
||||
self._write(')')
|
||||
|
||||
def _Or(self, t):
|
||||
self._write(" (")
|
||||
for i, node in enumerate(t.nodes):
|
||||
self._dispatch(node)
|
||||
if i != len(t.nodes)-1:
|
||||
self._write(") or (")
|
||||
self._write(")")
|
||||
|
||||
def _Pass(self, t):
|
||||
self._write("pass\n")
|
||||
|
||||
def _Printnl(self, t):
|
||||
self._fill("print ")
|
||||
if t.dest:
|
||||
self._write(">> ")
|
||||
self._dispatch(t.dest)
|
||||
self._write(", ")
|
||||
comma = False
|
||||
for node in t.nodes:
|
||||
if comma: self._write(', ')
|
||||
else: comma = True
|
||||
self._dispatch(node)
|
||||
|
||||
def _Power(self, t):
|
||||
self.__binary_op(t, '**')
|
||||
|
||||
def _Return(self, t):
|
||||
self._fill("return ")
|
||||
if t.value:
|
||||
if isinstance(t.value, Tuple):
|
||||
text = ', '.join([ name.name for name in t.value.asList() ])
|
||||
self._write(text)
|
||||
else:
|
||||
self._dispatch(t.value)
|
||||
if not self._do_indent:
|
||||
self._write('; ')
|
||||
|
||||
def _Slice(self, t):
|
||||
self._dispatch(t.expr)
|
||||
self._write("[")
|
||||
if t.lower:
|
||||
self._dispatch(t.lower)
|
||||
self._write(":")
|
||||
if t.upper:
|
||||
self._dispatch(t.upper)
|
||||
#if t.step:
|
||||
# self._write(":")
|
||||
# self._dispatch(t.step)
|
||||
self._write("]")
|
||||
|
||||
def _Sliceobj(self, t):
|
||||
for i, node in enumerate(t.nodes):
|
||||
if i != 0:
|
||||
self._write(":")
|
||||
if not (isinstance(node, Const) and node.value is None):
|
||||
self._dispatch(node)
|
||||
|
||||
def _Stmt(self, tree):
|
||||
for node in tree.nodes:
|
||||
self._dispatch(node)
|
||||
|
||||
def _Sub(self, t):
|
||||
self.__binary_op(t, '-')
|
||||
|
||||
def _Subscript(self, t):
|
||||
self._dispatch(t.expr)
|
||||
self._write("[")
|
||||
for i, value in enumerate(t.subs):
|
||||
if i != 0:
|
||||
self._write(",")
|
||||
self._dispatch(value)
|
||||
self._write("]")
|
||||
|
||||
def _TryExcept(self, t):
|
||||
self._fill("try")
|
||||
self._enter()
|
||||
self._dispatch(t.body)
|
||||
self._leave()
|
||||
|
||||
for handler in t.handlers:
|
||||
self._fill('except ')
|
||||
self._dispatch(handler[0])
|
||||
if handler[1] is not None:
|
||||
self._write(', ')
|
||||
self._dispatch(handler[1])
|
||||
self._enter()
|
||||
self._dispatch(handler[2])
|
||||
self._leave()
|
||||
|
||||
if t.else_:
|
||||
self._fill("else")
|
||||
self._enter()
|
||||
self._dispatch(t.else_)
|
||||
self._leave()
|
||||
|
||||
def _Tuple(self, t):
|
||||
|
||||
if not t.nodes:
|
||||
# Empty tuple.
|
||||
self._write("()")
|
||||
else:
|
||||
self._write("(")
|
||||
|
||||
# _write each elements, separated by a comma.
|
||||
for element in t.nodes[:-1]:
|
||||
self._dispatch(element)
|
||||
self._write(", ")
|
||||
|
||||
# Handle the last one without writing comma
|
||||
last_element = t.nodes[-1]
|
||||
self._dispatch(last_element)
|
||||
|
||||
self._write(")")
|
||||
|
||||
def _UnaryAdd(self, t):
|
||||
self._write("+")
|
||||
self._dispatch(t.expr)
|
||||
|
||||
def _UnarySub(self, t):
|
||||
self._write("-")
|
||||
self._dispatch(t.expr)
|
||||
|
||||
def _With(self, t):
|
||||
self._fill('with ')
|
||||
self._dispatch(t.expr)
|
||||
if t.vars:
|
||||
self._write(' as ')
|
||||
self._dispatch(t.vars.name)
|
||||
self._enter()
|
||||
self._dispatch(t.body)
|
||||
self._leave()
|
||||
self._write('\n')
|
||||
|
||||
def _int(self, t):
|
||||
self._write(repr(t))
|
||||
|
||||
def __binary_op(self, t, symbol):
|
||||
# Check if parenthesis are needed on left side and then dispatch
|
||||
has_paren = False
|
||||
left_class = str(t.left.__class__)
|
||||
if (left_class in op_precedence.keys() and
|
||||
op_precedence[left_class] < op_precedence[str(t.__class__)]):
|
||||
has_paren = True
|
||||
if has_paren:
|
||||
self._write('(')
|
||||
self._dispatch(t.left)
|
||||
if has_paren:
|
||||
self._write(')')
|
||||
# Write the appropriate symbol for operator
|
||||
self._write(symbol)
|
||||
# Check if parenthesis are needed on the right side and then dispatch
|
||||
has_paren = False
|
||||
right_class = str(t.right.__class__)
|
||||
if (right_class in op_precedence.keys() and
|
||||
op_precedence[right_class] < op_precedence[str(t.__class__)]):
|
||||
has_paren = True
|
||||
if has_paren:
|
||||
self._write('(')
|
||||
self._dispatch(t.right)
|
||||
if has_paren:
|
||||
self._write(')')
|
||||
|
||||
def _float(self, t):
|
||||
# if t is 0.1, str(t)->'0.1' while repr(t)->'0.1000000000001'
|
||||
# We prefer str here.
|
||||
self._write(str(t))
|
||||
|
||||
def _str(self, t):
|
||||
self._write(repr(t))
|
||||
|
||||
def _tuple(self, t):
|
||||
self._write(str(t))
|
||||
|
||||
#########################################################################
|
||||
# These are the methods from the _ast modules unparse.
|
||||
#
|
||||
# As our needs to handle more advanced code increase, we may want to
|
||||
# modify some of the methods below so that they work for compiler.ast.
|
||||
#########################################################################
|
||||
|
||||
# # stmt
|
||||
# def _Expr(self, tree):
|
||||
# self._fill()
|
||||
# self._dispatch(tree.value)
|
||||
#
|
||||
# def _Import(self, t):
|
||||
# self._fill("import ")
|
||||
# first = True
|
||||
# for a in t.names:
|
||||
# if first:
|
||||
# first = False
|
||||
# else:
|
||||
# self._write(", ")
|
||||
# self._write(a.name)
|
||||
# if a.asname:
|
||||
# self._write(" as "+a.asname)
|
||||
#
|
||||
## def _ImportFrom(self, t):
|
||||
## self._fill("from ")
|
||||
## self._write(t.module)
|
||||
## self._write(" import ")
|
||||
## for i, a in enumerate(t.names):
|
||||
## if i == 0:
|
||||
## self._write(", ")
|
||||
## self._write(a.name)
|
||||
## if a.asname:
|
||||
## self._write(" as "+a.asname)
|
||||
## # XXX(jpe) what is level for?
|
||||
##
|
||||
#
|
||||
# def _Break(self, t):
|
||||
# self._fill("break")
|
||||
#
|
||||
# def _Continue(self, t):
|
||||
# self._fill("continue")
|
||||
#
|
||||
# def _Delete(self, t):
|
||||
# self._fill("del ")
|
||||
# self._dispatch(t.targets)
|
||||
#
|
||||
# def _Assert(self, t):
|
||||
# self._fill("assert ")
|
||||
# self._dispatch(t.test)
|
||||
# if t.msg:
|
||||
# self._write(", ")
|
||||
# self._dispatch(t.msg)
|
||||
#
|
||||
# def _Exec(self, t):
|
||||
# self._fill("exec ")
|
||||
# self._dispatch(t.body)
|
||||
# if t.globals:
|
||||
# self._write(" in ")
|
||||
# self._dispatch(t.globals)
|
||||
# if t.locals:
|
||||
# self._write(", ")
|
||||
# self._dispatch(t.locals)
|
||||
#
|
||||
# def _Print(self, t):
|
||||
# self._fill("print ")
|
||||
# do_comma = False
|
||||
# if t.dest:
|
||||
# self._write(">>")
|
||||
# self._dispatch(t.dest)
|
||||
# do_comma = True
|
||||
# for e in t.values:
|
||||
# if do_comma:self._write(", ")
|
||||
# else:do_comma=True
|
||||
# self._dispatch(e)
|
||||
# if not t.nl:
|
||||
# self._write(",")
|
||||
#
|
||||
# def _Global(self, t):
|
||||
# self._fill("global")
|
||||
# for i, n in enumerate(t.names):
|
||||
# if i != 0:
|
||||
# self._write(",")
|
||||
# self._write(" " + n)
|
||||
#
|
||||
# def _Yield(self, t):
|
||||
# self._fill("yield")
|
||||
# if t.value:
|
||||
# self._write(" (")
|
||||
# self._dispatch(t.value)
|
||||
# self._write(")")
|
||||
#
|
||||
# def _Raise(self, t):
|
||||
# self._fill('raise ')
|
||||
# if t.type:
|
||||
# self._dispatch(t.type)
|
||||
# if t.inst:
|
||||
# self._write(", ")
|
||||
# self._dispatch(t.inst)
|
||||
# if t.tback:
|
||||
# self._write(", ")
|
||||
# self._dispatch(t.tback)
|
||||
#
|
||||
#
|
||||
# def _TryFinally(self, t):
|
||||
# self._fill("try")
|
||||
# self._enter()
|
||||
# self._dispatch(t.body)
|
||||
# self._leave()
|
||||
#
|
||||
# self._fill("finally")
|
||||
# self._enter()
|
||||
# self._dispatch(t.finalbody)
|
||||
# self._leave()
|
||||
#
|
||||
# def _excepthandler(self, t):
|
||||
# self._fill("except ")
|
||||
# if t.type:
|
||||
# self._dispatch(t.type)
|
||||
# if t.name:
|
||||
# self._write(", ")
|
||||
# self._dispatch(t.name)
|
||||
# self._enter()
|
||||
# self._dispatch(t.body)
|
||||
# self._leave()
|
||||
#
|
||||
# def _ClassDef(self, t):
|
||||
# self._write("\n")
|
||||
# self._fill("class "+t.name)
|
||||
# if t.bases:
|
||||
# self._write("(")
|
||||
# for a in t.bases:
|
||||
# self._dispatch(a)
|
||||
# self._write(", ")
|
||||
# self._write(")")
|
||||
# self._enter()
|
||||
# self._dispatch(t.body)
|
||||
# self._leave()
|
||||
#
|
||||
# def _FunctionDef(self, t):
|
||||
# self._write("\n")
|
||||
# for deco in t.decorators:
|
||||
# self._fill("@")
|
||||
# self._dispatch(deco)
|
||||
# self._fill("def "+t.name + "(")
|
||||
# self._dispatch(t.args)
|
||||
# self._write(")")
|
||||
# self._enter()
|
||||
# self._dispatch(t.body)
|
||||
# self._leave()
|
||||
#
|
||||
# def _For(self, t):
|
||||
# self._fill("for ")
|
||||
# self._dispatch(t.target)
|
||||
# self._write(" in ")
|
||||
# self._dispatch(t.iter)
|
||||
# self._enter()
|
||||
# self._dispatch(t.body)
|
||||
# self._leave()
|
||||
# if t.orelse:
|
||||
# self._fill("else")
|
||||
# self._enter()
|
||||
# self._dispatch(t.orelse)
|
||||
# self._leave
|
||||
#
|
||||
# def _While(self, t):
|
||||
# self._fill("while ")
|
||||
# self._dispatch(t.test)
|
||||
# self._enter()
|
||||
# self._dispatch(t.body)
|
||||
# self._leave()
|
||||
# if t.orelse:
|
||||
# self._fill("else")
|
||||
# self._enter()
|
||||
# self._dispatch(t.orelse)
|
||||
# self._leave
|
||||
#
|
||||
# # expr
|
||||
# def _Str(self, tree):
|
||||
# self._write(repr(tree.s))
|
||||
##
|
||||
# def _Repr(self, t):
|
||||
# self._write("`")
|
||||
# self._dispatch(t.value)
|
||||
# self._write("`")
|
||||
#
|
||||
# def _Num(self, t):
|
||||
# self._write(repr(t.n))
|
||||
#
|
||||
# def _ListComp(self, t):
|
||||
# self._write("[")
|
||||
# self._dispatch(t.elt)
|
||||
# for gen in t.generators:
|
||||
# self._dispatch(gen)
|
||||
# self._write("]")
|
||||
#
|
||||
# def _GeneratorExp(self, t):
|
||||
# self._write("(")
|
||||
# self._dispatch(t.elt)
|
||||
# for gen in t.generators:
|
||||
# self._dispatch(gen)
|
||||
# self._write(")")
|
||||
#
|
||||
# def _comprehension(self, t):
|
||||
# self._write(" for ")
|
||||
# self._dispatch(t.target)
|
||||
# self._write(" in ")
|
||||
# self._dispatch(t.iter)
|
||||
# for if_clause in t.ifs:
|
||||
# self._write(" if ")
|
||||
# self._dispatch(if_clause)
|
||||
#
|
||||
# def _IfExp(self, t):
|
||||
# self._dispatch(t.body)
|
||||
# self._write(" if ")
|
||||
# self._dispatch(t.test)
|
||||
# if t.orelse:
|
||||
# self._write(" else ")
|
||||
# self._dispatch(t.orelse)
|
||||
#
|
||||
# unop = {"Invert":"~", "Not": "not", "UAdd":"+", "USub":"-"}
|
||||
# def _UnaryOp(self, t):
|
||||
# self._write(self.unop[t.op.__class__.__name__])
|
||||
# self._write("(")
|
||||
# self._dispatch(t.operand)
|
||||
# self._write(")")
|
||||
#
|
||||
# binop = { "Add":"+", "Sub":"-", "Mult":"*", "Div":"/", "Mod":"%",
|
||||
# "LShift":">>", "RShift":"<<", "BitOr":"|", "BitXor":"^", "BitAnd":"&",
|
||||
# "FloorDiv":"//", "Pow": "**"}
|
||||
# def _BinOp(self, t):
|
||||
# self._write("(")
|
||||
# self._dispatch(t.left)
|
||||
# self._write(")" + self.binop[t.op.__class__.__name__] + "(")
|
||||
# self._dispatch(t.right)
|
||||
# self._write(")")
|
||||
#
|
||||
# boolops = {_ast.And: 'and', _ast.Or: 'or'}
|
||||
# def _BoolOp(self, t):
|
||||
# self._write("(")
|
||||
# self._dispatch(t.values[0])
|
||||
# for v in t.values[1:]:
|
||||
# self._write(" %s " % self.boolops[t.op.__class__])
|
||||
# self._dispatch(v)
|
||||
# self._write(")")
|
||||
#
|
||||
# def _Attribute(self,t):
|
||||
# self._dispatch(t.value)
|
||||
# self._write(".")
|
||||
# self._write(t.attr)
|
||||
#
|
||||
## def _Call(self, t):
|
||||
## self._dispatch(t.func)
|
||||
## self._write("(")
|
||||
## comma = False
|
||||
## for e in t.args:
|
||||
## if comma: self._write(", ")
|
||||
## else: comma = True
|
||||
## self._dispatch(e)
|
||||
## for e in t.keywords:
|
||||
## if comma: self._write(", ")
|
||||
## else: comma = True
|
||||
## self._dispatch(e)
|
||||
## if t.starargs:
|
||||
## if comma: self._write(", ")
|
||||
## else: comma = True
|
||||
## self._write("*")
|
||||
## self._dispatch(t.starargs)
|
||||
## if t.kwargs:
|
||||
## if comma: self._write(", ")
|
||||
## else: comma = True
|
||||
## self._write("**")
|
||||
## self._dispatch(t.kwargs)
|
||||
## self._write(")")
|
||||
#
|
||||
# # slice
|
||||
# def _Index(self, t):
|
||||
# self._dispatch(t.value)
|
||||
#
|
||||
# def _ExtSlice(self, t):
|
||||
# for i, d in enumerate(t.dims):
|
||||
# if i != 0:
|
||||
# self._write(': ')
|
||||
# self._dispatch(d)
|
||||
#
|
||||
# # others
|
||||
# def _arguments(self, t):
|
||||
# first = True
|
||||
# nonDef = len(t.args)-len(t.defaults)
|
||||
# for a in t.args[0:nonDef]:
|
||||
# if first:first = False
|
||||
# else: self._write(", ")
|
||||
# self._dispatch(a)
|
||||
# for a,d in zip(t.args[nonDef:], t.defaults):
|
||||
# if first:first = False
|
||||
# else: self._write(", ")
|
||||
# self._dispatch(a),
|
||||
# self._write("=")
|
||||
# self._dispatch(d)
|
||||
# if t.vararg:
|
||||
# if first:first = False
|
||||
# else: self._write(", ")
|
||||
# self._write("*"+t.vararg)
|
||||
# if t.kwarg:
|
||||
# if first:first = False
|
||||
# else: self._write(", ")
|
||||
# self._write("**"+t.kwarg)
|
||||
#
|
||||
## def _keyword(self, t):
|
||||
## self._write(t.arg)
|
||||
## self._write("=")
|
||||
## self._dispatch(t.value)
|
||||
#
|
||||
# def _Lambda(self, t):
|
||||
# self._write("lambda ")
|
||||
# self._dispatch(t.args)
|
||||
# self._write(": ")
|
||||
# self._dispatch(t.body)
|
||||
|
||||
|
||||
|
|
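The deleted dispatch code above wraps an operand in parentheses when its operator binds more loosely than the parent operator, using the `op_precedence` lookup table. A tiny standalone sketch of that precedence test (the table values here are invented for illustration, not taken from the deleted file):

```python
# Hypothetical precedence table: higher values bind tighter.
op_precedence = {'Add': 1, 'Mult': 2}

def needs_paren(child_op, parent_op):
    """Parenthesize the child when it binds more loosely than its parent."""
    return (child_op in op_precedence and
            op_precedence[child_op] < op_precedence[parent_op])

print(needs_paren('Add', 'Mult'))   # True: render as (a + b) * c
print(needs_paren('Mult', 'Add'))   # False: a * b + c needs no parens
```

Operators absent from the table (e.g. a bare name) are never parenthesized, matching the `in op_precedence.keys()` guard in the original.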
@ -1,499 +0,0 @@
"""Extract reference documentation from the NumPy source tree.

"""

import inspect
import textwrap
import re
import pydoc
from StringIO import StringIO
from warnings import warn

class Reader(object):
    """A line-based string reader.

    """
    def __init__(self, data):
        """
        Parameters
        ----------
        data : str
           String with lines separated by '\n'.

        """
        if isinstance(data,list):
            self._str = data
        else:
            self._str = data.split('\n') # store string as list of lines

        self.reset()

    def __getitem__(self, n):
        return self._str[n]

    def reset(self):
        self._l = 0 # current line nr

    def read(self):
        if not self.eof():
            out = self[self._l]
            self._l += 1
            return out
        else:
            return ''

    def seek_next_non_empty_line(self):
        for l in self[self._l:]:
            if l.strip():
                break
            else:
                self._l += 1

    def eof(self):
        return self._l >= len(self._str)

    def read_to_condition(self, condition_func):
        start = self._l
        for line in self[start:]:
            if condition_func(line):
                return self[start:self._l]
            self._l += 1
            if self.eof():
                return self[start:self._l+1]
        return []

    def read_to_next_empty_line(self):
        self.seek_next_non_empty_line()
        def is_empty(line):
            return not line.strip()
        return self.read_to_condition(is_empty)

    def read_to_next_unindented_line(self):
        def is_unindented(line):
            return (line.strip() and (len(line.lstrip()) == len(line)))
        return self.read_to_condition(is_unindented)

    def peek(self,n=0):
        if self._l + n < len(self._str):
            return self[self._l + n]
        else:
            return ''

    def is_empty(self):
        return not ''.join(self._str).strip()


class NumpyDocString(object):
    def __init__(self, docstring, config={}):
        docstring = textwrap.dedent(docstring).split('\n')

        self._doc = Reader(docstring)
        self._parsed_data = {
            'Signature': '',
            'Summary': [''],
            'Extended Summary': [],
            'Parameters': [],
            'Returns': [],
            'Raises': [],
            'Warns': [],
            'Other Parameters': [],
            'Attributes': [],
            'Methods': [],
            'See Also': [],
            'Notes': [],
            'Warnings': [],
            'References': '',
            'Examples': '',
            'index': {}
            }

        self._parse()

    def __getitem__(self,key):
        return self._parsed_data[key]

    def __setitem__(self,key,val):
        if not self._parsed_data.has_key(key):
            warn("Unknown section %s" % key)
        else:
            self._parsed_data[key] = val

    def _is_at_section(self):
        self._doc.seek_next_non_empty_line()

        if self._doc.eof():
            return False

        l1 = self._doc.peek().strip() # e.g. Parameters

        if l1.startswith('.. index::'):
            return True

        l2 = self._doc.peek(1).strip() # ---------- or ==========
        return l2.startswith('-'*len(l1)) or l2.startswith('='*len(l1))

    def _strip(self,doc):
        i = 0
        j = 0
        for i,line in enumerate(doc):
            if line.strip(): break

        for j,line in enumerate(doc[::-1]):
            if line.strip(): break

        return doc[i:len(doc)-j]

    def _read_to_next_section(self):
        section = self._doc.read_to_next_empty_line()

        while not self._is_at_section() and not self._doc.eof():
            if not self._doc.peek(-1).strip(): # previous line was empty
                section += ['']

            section += self._doc.read_to_next_empty_line()

        return section

    def _read_sections(self):
        while not self._doc.eof():
            data = self._read_to_next_section()
            name = data[0].strip()

            if name.startswith('..'): # index section
                yield name, data[1:]
            elif len(data) < 2:
                yield StopIteration
            else:
                yield name, self._strip(data[2:])

    def _parse_param_list(self,content):
        r = Reader(content)
        params = []
        while not r.eof():
            header = r.read().strip()
            if ' : ' in header:
                arg_name, arg_type = header.split(' : ')[:2]
            else:
                arg_name, arg_type = header, ''

            desc = r.read_to_next_unindented_line()
            desc = dedent_lines(desc)

            params.append((arg_name,arg_type,desc))

        return params


    _name_rgx = re.compile(r"^\s*(:(?P<role>\w+):`(?P<name>[a-zA-Z0-9_.-]+)`|"
                           r" (?P<name2>[a-zA-Z0-9_.-]+))\s*", re.X)
    def _parse_see_also(self, content):
        """
        func_name : Descriptive text
            continued text
        another_func_name : Descriptive text
        func_name1, func_name2, :meth:`func_name`, func_name3

        """
        items = []

        def parse_item_name(text):
            """Match ':role:`name`' or 'name'"""
            m = self._name_rgx.match(text)
            if m:
                g = m.groups()
                if g[1] is None:
                    return g[3], None
                else:
                    return g[2], g[1]
            raise ValueError("%s is not a item name" % text)

        def push_item(name, rest):
            if not name:
                return
            name, role = parse_item_name(name)
            items.append((name, list(rest), role))
            del rest[:]

        current_func = None
        rest = []

        for line in content:
            if not line.strip(): continue

            m = self._name_rgx.match(line)
            if m and line[m.end():].strip().startswith(':'):
                push_item(current_func, rest)
                current_func, line = line[:m.end()], line[m.end():]
                rest = [line.split(':', 1)[1].strip()]
                if not rest[0]:
                    rest = []
            elif not line.startswith(' '):
                push_item(current_func, rest)
                current_func = None
                if ',' in line:
                    for func in line.split(','):
                        push_item(func, [])
                elif line.strip():
                    current_func = line
            elif current_func is not None:
                rest.append(line.strip())
        push_item(current_func, rest)
        return items

    def _parse_index(self, section, content):
        """
        .. index: default
           :refguide: something, else, and more

        """
        def strip_each_in(lst):
            return [s.strip() for s in lst]

        out = {}
        section = section.split('::')
        if len(section) > 1:
            out['default'] = strip_each_in(section[1].split(','))[0]
        for line in content:
            line = line.split(':')
            if len(line) > 2:
                out[line[1]] = strip_each_in(line[2].split(','))
        return out

    def _parse_summary(self):
        """Grab signature (if given) and summary"""
        if self._is_at_section():
            return

        summary = self._doc.read_to_next_empty_line()
        summary_str = " ".join([s.strip() for s in summary]).strip()
        if re.compile('^([\w., ]+=)?\s*[\w\.]+\(.*\)$').match(summary_str):
            self['Signature'] = summary_str
            if not self._is_at_section():
                self['Summary'] = self._doc.read_to_next_empty_line()
        else:
            self['Summary'] = summary

        if not self._is_at_section():
            self['Extended Summary'] = self._read_to_next_section()

    def _parse(self):
        self._doc.reset()
        self._parse_summary()

        for (section,content) in self._read_sections():
            if not section.startswith('..'):
                section = ' '.join([s.capitalize() for s in section.split(' ')])
            if section in ('Parameters', 'Attributes', 'Methods',
                           'Returns', 'Raises', 'Warns'):
                self[section] = self._parse_param_list(content)
            elif section.startswith('.. index::'):
                self['index'] = self._parse_index(section, content)
            elif section == 'See Also':
                self['See Also'] = self._parse_see_also(content)
            else:
                self[section] = content

    # string conversion routines

    def _str_header(self, name, symbol='-'):
        return [name, len(name)*symbol]

    def _str_indent(self, doc, indent=4):
        out = []
        for line in doc:
            out += [' '*indent + line]
        return out

    def _str_signature(self):
        if self['Signature']:
            return [self['Signature'].replace('*','\*')] + ['']
        else:
            return ['']

    def _str_summary(self):
        if self['Summary']:
            return self['Summary'] + ['']
        else:
            return []

    def _str_extended_summary(self):
        if self['Extended Summary']:
            return self['Extended Summary'] + ['']
        else:
            return []

    def _str_param_list(self, name):
        out = []
        if self[name]:
            out += self._str_header(name)
            for param,param_type,desc in self[name]:
                out += ['%s : %s' % (param, param_type)]
                out += self._str_indent(desc)
            out += ['']
        return out

    def _str_section(self, name):
        out = []
        if self[name]:
            out += self._str_header(name)
            out += self[name]
            out += ['']
        return out

    def _str_see_also(self, func_role):
        if not self['See Also']: return []
        out = []
        out += self._str_header("See Also")
        last_had_desc = True
        for func, desc, role in self['See Also']:
            if role:
                link = ':%s:`%s`' % (role, func)
            elif func_role:
                link = ':%s:`%s`' % (func_role, func)
            else:
                link = "`%s`_" % func
            if desc or last_had_desc:
                out += ['']
                out += [link]
            else:
                out[-1] += ", %s" % link
            if desc:
                out += self._str_indent([' '.join(desc)])
                last_had_desc = True
            else:
                last_had_desc = False
        out += ['']
        return out

    def _str_index(self):
        idx = self['index']
        out = []
        out += ['.. index:: %s' % idx.get('default','')]
        for section, references in idx.iteritems():
            if section == 'default':
                continue
            out += ['   :%s: %s' % (section, ', '.join(references))]
        return out

    def __str__(self, func_role=''):
        out = []
        out += self._str_signature()
        out += self._str_summary()
        out += self._str_extended_summary()
        for param_list in ('Parameters','Returns','Raises'):
            out += self._str_param_list(param_list)
        out += self._str_section('Warnings')
        out += self._str_see_also(func_role)
        for s in ('Notes','References','Examples'):
            out += self._str_section(s)
        for param_list in ('Attributes', 'Methods'):
            out += self._str_param_list(param_list)
        out += self._str_index()
        return '\n'.join(out)


def indent(str,indent=4):
    indent_str = ' '*indent
    if str is None:
        return indent_str
    lines = str.split('\n')
    return '\n'.join(indent_str + l for l in lines)

def dedent_lines(lines):
    """Deindent a list of lines maximally"""
    return textwrap.dedent("\n".join(lines)).split("\n")

def header(text, style='-'):
    return text + '\n' + style*len(text) + '\n'


class FunctionDoc(NumpyDocString):
    def __init__(self, func, role='func', doc=None, config={}):
        self._f = func
        self._role = role # e.g. "func" or "meth"
        if doc is None:
            doc = inspect.getdoc(func) or ''
        try:
            NumpyDocString.__init__(self, doc)
        except ValueError, e:
            print '*'*78
            print "ERROR: '%s' while parsing `%s`" % (e, self._f)
            print '*'*78
            #print "Docstring follows:"
            #print doclines
            #print '='*78

        if not self['Signature']:
            func, func_name = self.get_func()
            try:
                # try to read signature
                argspec = inspect.getargspec(func)
                argspec = inspect.formatargspec(*argspec)
                argspec = argspec.replace('*','\*')
                signature = '%s%s' % (func_name, argspec)
            except TypeError, e:
                signature = '%s()' % func_name
            self['Signature'] = signature

    def get_func(self):
        func_name = getattr(self._f, '__name__', self.__class__.__name__)
        if inspect.isclass(self._f):
            func = getattr(self._f, '__call__', self._f.__init__)
        else:
            func = self._f
        return func, func_name

    def __str__(self):
        out = ''

        func, func_name = self.get_func()
        signature = self['Signature'].replace('*', '\*')

        roles = {'func': 'function',
                 'meth': 'method'}

        if self._role:
            if not roles.has_key(self._role):
                print "Warning: invalid role %s" % self._role
            out += '.. %s:: %s\n    \n\n' % (roles.get(self._role,''),
                                             func_name)

        out += super(FunctionDoc, self).__str__(func_role=self._role)
        return out


class ClassDoc(NumpyDocString):
    def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc,
                 config={}):
        if not inspect.isclass(cls):
            raise ValueError("Initialise using a class. Got %r" % cls)
        self._cls = cls

        if modulename and not modulename.endswith('.'):
            modulename += '.'
        self._mod = modulename
        self._name = cls.__name__
        self._func_doc = func_doc

        if doc is None:
            doc = pydoc.getdoc(cls)

        NumpyDocString.__init__(self, doc)

        if config.get('show_class_members', True):
            if not self['Methods']:
                self['Methods'] = [(name, '', '')
                                   for name in sorted(self.methods)]
            if not self['Attributes']:
                self['Attributes'] = [(name, '', '')
                                      for name in sorted(self.properties)]

    @property
    def methods(self):
        return [name for name,func in inspect.getmembers(self._cls)
                if not name.startswith('_') and callable(func)]

    @property
    def properties(self):
        return [name for name,func in inspect.getmembers(self._cls)
                if not name.startswith('_') and func is None]
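The `_is_at_section` method in the deleted docscrape.py recognizes a numpydoc section heading as a non-empty line followed by an underline of `-` or `=` characters at least as long as the heading (or an `.. index::` directive). A minimal standalone sketch of that rule (Python 3 here, unlike the Python 2 source above; the function name is illustrative):

```python
def is_section_heading(line1, line2):
    """True if line1/line2 form a numpydoc section heading pair."""
    l1, l2 = line1.strip(), line2.strip()
    if not l1:
        return False
    # An ``.. index::`` directive also starts a section.
    if l1.startswith('.. index::'):
        return True
    # Otherwise line2 must underline line1 with '-' or '=' of >= length.
    return l2.startswith('-' * len(l1)) or l2.startswith('=' * len(l1))

print(is_section_heading('Parameters', '----------'))  # True
print(is_section_heading('Returns', '---'))            # False: underline too short
```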
@ -1,226 +0,0 @@
import re, inspect, textwrap, pydoc
import sphinx
from docscrape import NumpyDocString, FunctionDoc, ClassDoc

class SphinxDocString(NumpyDocString):
    def __init__(self, docstring, config={}):
        self.use_plots = config.get('use_plots', False)
        NumpyDocString.__init__(self, docstring, config=config)

    # string conversion routines
    def _str_header(self, name, symbol='`'):
        return ['.. rubric:: ' + name, '']

    def _str_field_list(self, name):
        return [':' + name + ':']

    def _str_indent(self, doc, indent=4):
        out = []
        for line in doc:
            out += [' '*indent + line]
        return out

    def _str_signature(self):
        return ['']
        if self['Signature']:
            return ['``%s``' % self['Signature']] + ['']
        else:
            return ['']

    def _str_summary(self):
        return self['Summary'] + ['']

    def _str_extended_summary(self):
        return self['Extended Summary'] + ['']

    def _str_param_list(self, name):
        out = []
        if self[name]:
            out += self._str_field_list(name)
            out += ['']
            for param,param_type,desc in self[name]:
                out += self._str_indent(['**%s** : %s' % (param.strip(),
                                                          param_type)])
                out += ['']
                out += self._str_indent(desc,8)
                out += ['']
        return out

    @property
    def _obj(self):
        if hasattr(self, '_cls'):
            return self._cls
        elif hasattr(self, '_f'):
            return self._f
        return None

    def _str_member_list(self, name):
        """
        Generate a member listing, autosummary:: table where possible,
        and a table where not.

        """
        out = []
        if self[name]:
            out += ['.. rubric:: %s' % name, '']
            prefix = getattr(self, '_name', '')

            if prefix:
                prefix = '~%s.' % prefix

            autosum = []
            others = []
            for param, param_type, desc in self[name]:
                param = param.strip()
                if not self._obj or hasattr(self._obj, param):
                    autosum += ["   %s%s" % (prefix, param)]
                else:
                    others.append((param, param_type, desc))

            if autosum:
                out += ['.. autosummary::', '   :toctree:', '']
                out += autosum

            if others:
                maxlen_0 = max([len(x[0]) for x in others])
                maxlen_1 = max([len(x[1]) for x in others])
                hdr = "="*maxlen_0 + "  " + "="*maxlen_1 + "  " + "="*10
                fmt = '%%%ds  %%%ds  ' % (maxlen_0, maxlen_1)
                n_indent = maxlen_0 + maxlen_1 + 4
                out += [hdr]
                for param, param_type, desc in others:
                    out += [fmt % (param.strip(), param_type)]
                    out += self._str_indent(desc, n_indent)
                out += [hdr]
            out += ['']
        return out

    def _str_section(self, name):
        out = []
        if self[name]:
            out += self._str_header(name)
            out += ['']
            content = textwrap.dedent("\n".join(self[name])).split("\n")
            out += content
            out += ['']
        return out

    def _str_see_also(self, func_role):
        out = []
        if self['See Also']:
            see_also = super(SphinxDocString, self)._str_see_also(func_role)
            out = ['.. seealso::', '']
            out += self._str_indent(see_also[2:])
        return out

    def _str_warnings(self):
        out = []
        if self['Warnings']:
            out = ['.. warning::', '']
            out += self._str_indent(self['Warnings'])
        return out

    def _str_index(self):
        idx = self['index']
        out = []
        if len(idx) == 0:
            return out

        out += ['.. index:: %s' % idx.get('default','')]
        for section, references in idx.iteritems():
            if section == 'default':
                continue
            elif section == 'refguide':
                out += ['   single: %s' % (', '.join(references))]
            else:
                out += ['   %s: %s' % (section, ','.join(references))]
        return out

    def _str_references(self):
        out = []
        if self['References']:
            out += self._str_header('References')
            if isinstance(self['References'], str):
                self['References'] = [self['References']]
            out.extend(self['References'])
            out += ['']
            # Latex collects all references to a separate bibliography,
            # so we need to insert links to it
            if sphinx.__version__ >= "0.6":
                out += ['.. only:: latex','']
            else:
                out += ['.. latexonly::','']
            items = []
            for line in self['References']:
                m = re.match(r'.. \[([a-z0-9._-]+)\]', line, re.I)
                if m:
                    items.append(m.group(1))
            out += ['   ' + ", ".join(["[%s]_" % item for item in items]), '']
        return out

    def _str_examples(self):
        examples_str = "\n".join(self['Examples'])

        if (self.use_plots and 'import matplotlib' in examples_str
                and 'plot::' not in examples_str):
            out = []
            out += self._str_header('Examples')
            out += ['.. plot::', '']
            out += self._str_indent(self['Examples'])
            out += ['']
            return out
        else:
            return self._str_section('Examples')

    def __str__(self, indent=0, func_role="obj"):
        out = []
        out += self._str_signature()
        out += self._str_index() + ['']
        out += self._str_summary()
        out += self._str_extended_summary()
        for param_list in ('Parameters', 'Returns', 'Raises'):
            out += self._str_param_list(param_list)
        out += self._str_warnings()
        out += self._str_see_also(func_role)
        out += self._str_section('Notes')
        out += self._str_references()
        out += self._str_examples()
        for param_list in ('Attributes', 'Methods'):
            out += self._str_member_list(param_list)
        out = self._str_indent(out,indent)
        return '\n'.join(out)

class SphinxFunctionDoc(SphinxDocString, FunctionDoc):
    def __init__(self, obj, doc=None, config={}):
        self.use_plots = config.get('use_plots', False)
        FunctionDoc.__init__(self, obj, doc=doc, config=config)

class SphinxClassDoc(SphinxDocString, ClassDoc):
    def __init__(self, obj, doc=None, func_doc=None, config={}):
        self.use_plots = config.get('use_plots', False)
        ClassDoc.__init__(self, obj, doc=doc, func_doc=None, config=config)

class SphinxObjDoc(SphinxDocString):
    def __init__(self, obj, doc=None, config={}):
        self._f = obj
        SphinxDocString.__init__(self, doc, config=config)

def get_doc_object(obj, what=None, doc=None, config={}):
    if what is None:
        if inspect.isclass(obj):
            what = 'class'
        elif inspect.ismodule(obj):
            what = 'module'
        elif callable(obj):
            what = 'function'
        else:
            what = 'object'
    if what == 'class':
        return SphinxClassDoc(obj, func_doc=SphinxFunctionDoc, doc=doc,
                              config=config)
    elif what in ('function', 'method'):
        return SphinxFunctionDoc(obj, doc=doc, config=config)
    else:
        if doc is None:
            doc = pydoc.getdoc(obj)
        return SphinxObjDoc(obj, doc, config=config)
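`_str_param_list` in the deleted docscrape_sphinx.py turns each parsed `(name, type, description)` triple into an indented reST field list. A minimal standalone illustration of the output shape (Python 3; the function and sample data are illustrative, not from the deleted file):

```python
def str_param_list(name, params, indent=4):
    """Render (param, type, desc) triples as a reST field list."""
    pad = ' ' * indent
    out = [':%s:' % name, '']
    for param, param_type, desc in params:
        out += [pad + '**%s** : %s' % (param, param_type), '']
        out += [pad * 2 + line for line in desc]  # description doubly indented
        out += ['']
    return out

lines = str_param_list('Parameters', [('x', 'ndarray', ['Input array.'])])
print('\n'.join(lines))
```

The real method additionally routes class members through an `autosummary::` table when the attribute exists on the documented object.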
@ -1,196 +0,0 @@
"""
========
numpydoc
========

Sphinx extension that handles docstrings in the Numpy standard format. [1]

It will:

- Convert Parameters etc. sections to field lists.
- Convert See Also section to a See also entry.
- Renumber references.
- Extract the signature from the docstring, if it can't be determined otherwise.

.. [1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard

"""

import os, re, pydoc
from docscrape_sphinx import get_doc_object, SphinxDocString
from sphinx.util.compat import Directive
import inspect

def mangle_docstrings(app, what, name, obj, options, lines,
                      reference_offset=[0]):

    cfg = dict(use_plots=app.config.numpydoc_use_plots,
               show_class_members=app.config.numpydoc_show_class_members)

    if what == 'module':
        # Strip top title
        title_re = re.compile(ur'^\s*[#*=]{4,}\n[a-z0-9 -]+\n[#*=]{4,}\s*',
                              re.I|re.S)
        lines[:] = title_re.sub(u'', u"\n".join(lines)).split(u"\n")
    else:
        doc = get_doc_object(obj, what, u"\n".join(lines), config=cfg)
        lines[:] = unicode(doc).split(u"\n")

    if app.config.numpydoc_edit_link and hasattr(obj, '__name__') and \
           obj.__name__:
        if hasattr(obj, '__module__'):
            v = dict(full_name=u"%s.%s" % (obj.__module__, obj.__name__))
        else:
            v = dict(full_name=obj.__name__)
        lines += [u'', u'.. htmlonly::', '']
        lines += [u'    %s' % x for x in
                  (app.config.numpydoc_edit_link % v).split("\n")]

    # replace reference numbers so that there are no duplicates
    references = []
    for line in lines:
        line = line.strip()
        m = re.match(ur'^.. \[([a-z0-9_.-])\]', line, re.I)
        if m:
            references.append(m.group(1))

    # start renaming from the longest string, to avoid overwriting parts
    references.sort(key=lambda x: -len(x))
    if references:
        for i, line in enumerate(lines):
            for r in references:
                if re.match(ur'^\d+$', r):
                    new_r = u"R%d" % (reference_offset[0] + int(r))
                else:
                    new_r = u"%s%d" % (r, reference_offset[0])
                lines[i] = lines[i].replace(u'[%s]_' % r,
                                            u'[%s]_' % new_r)
                lines[i] = lines[i].replace(u'.. [%s]' % r,
                                            u'.. [%s]' % new_r)

    reference_offset[0] += len(references)

def mangle_signature(app, what, name, obj, options, sig, retann):
    # Do not try to inspect classes that don't define `__init__`
    if (inspect.isclass(obj) and
        (not hasattr(obj, '__init__') or
        'initializes x; see ' in pydoc.getdoc(obj.__init__))):
        return '', ''

    if not (callable(obj) or hasattr(obj, '__argspec_is_invalid_')): return
    if not hasattr(obj, '__doc__'): return

    doc = SphinxDocString(pydoc.getdoc(obj))
    if doc['Signature']:
        sig = re.sub(u"^[^(]*", u"", doc['Signature'])
        return sig, u''

def initialize(app):
    try:
        app.connect('autodoc-process-signature', mangle_signature)
    except:
        monkeypatch_sphinx_ext_autodoc()

def setup(app, get_doc_object_=get_doc_object):
    global get_doc_object
    get_doc_object = get_doc_object_

    app.connect('autodoc-process-docstring', mangle_docstrings)
    app.connect('builder-inited', initialize)
    app.add_config_value('numpydoc_edit_link', None, False)
    app.add_config_value('numpydoc_use_plots', None, False)
    app.add_config_value('numpydoc_show_class_members', True, True)

    # Extra mangling directives
    name_type = {
        'cfunction': 'function',
        'cmember': 'attribute',
        'cmacro': 'function',
        'ctype': 'class',
        'cvar': 'object',
        'class': 'class',
        'function': 'function',
        'attribute': 'attribute',
        'method': 'function',
        'staticmethod': 'function',
        'classmethod': 'function',
    }

    for name, objtype in name_type.items():
app.add_directive('np-' + name, wrap_mangling_directive(name, objtype))
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# Input-mangling directives
|
||||
#------------------------------------------------------------------------------
|
||||
from docutils.statemachine import ViewList
|
||||
|
||||
def get_directive(name):
|
||||
from docutils.parsers.rst import directives
|
||||
try:
|
||||
return directives.directive(name, None, None)[0]
|
||||
except AttributeError:
|
||||
pass
|
||||
try:
|
||||
# docutils 0.4
|
||||
return directives._directives[name]
|
||||
except (AttributeError, KeyError):
|
||||
raise RuntimeError("No directive named '%s' found" % name)
|
||||
|
||||
def wrap_mangling_directive(base_directive_name, objtype):
|
||||
base_directive = get_directive(base_directive_name)
|
||||
|
||||
if inspect.isfunction(base_directive):
|
||||
base_func = base_directive
|
||||
class base_directive(Directive):
|
||||
required_arguments = base_func.arguments[0]
|
||||
optional_arguments = base_func.arguments[1]
|
||||
final_argument_whitespace = base_func.arguments[2]
|
||||
option_spec = base_func.options
|
||||
has_content = base_func.content
|
||||
def run(self):
|
||||
return base_func(self.name, self.arguments, self.options,
|
||||
self.content, self.lineno,
|
||||
self.content_offset, self.block_text,
|
||||
self.state, self.state_machine)
|
||||
|
||||
class directive(base_directive):
|
||||
def run(self):
|
||||
env = self.state.document.settings.env
|
||||
|
||||
name = None
|
||||
if self.arguments:
|
||||
m = re.match(r'^(.*\s+)?(.*?)(\(.*)?', self.arguments[0])
|
||||
name = m.group(2).strip()
|
||||
|
||||
if not name:
|
||||
name = self.arguments[0]
|
||||
|
||||
lines = list(self.content)
|
||||
mangle_docstrings(env.app, objtype, name, None, None, lines)
|
||||
self.content = ViewList(lines, self.content.parent)
|
||||
|
||||
return base_directive.run(self)
|
||||
|
||||
return directive
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# Monkeypatch sphinx.ext.autodoc to accept argspecless autodocs (Sphinx < 0.5)
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
def monkeypatch_sphinx_ext_autodoc():
|
||||
global _original_format_signature
|
||||
import sphinx.ext.autodoc
|
||||
|
||||
if sphinx.ext.autodoc.format_signature is our_format_signature:
|
||||
return
|
||||
|
||||
print "[numpydoc] Monkeypatching sphinx.ext.autodoc ..."
|
||||
_original_format_signature = sphinx.ext.autodoc.format_signature
|
||||
sphinx.ext.autodoc.format_signature = our_format_signature
|
||||
|
||||
def our_format_signature(what, obj):
|
||||
r = mangle_signature(None, what, None, obj, None, None, None)
|
||||
if r is not None:
|
||||
return r[0]
|
||||
else:
|
||||
return _original_format_signature(what, obj)
|
|
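The reference-renumbering pass in the deleted `mangle_docstrings` above collects reST footnote labels and shifts them by a running offset so that labels from separately processed docstrings do not collide on one page. A minimal, standalone Python 3 sketch of that idea (the helper name `renumber_references` and the simplified label regex are illustrative, not part of numpydoc):

```python
import re

def renumber_references(lines, offset):
    """Shift reST footnote labels by *offset* so that references from
    different docstrings stay unique (sketch of mangle_docstrings' pass)."""
    # Collect footnote definition labels, e.g. ".. [1] ..." -> "1"
    references = []
    for line in lines:
        m = re.match(r'^\.\. \[([a-zA-Z0-9_.-]+)\]', line.strip())
        if m:
            references.append(m.group(1))
    # Rename the longest labels first so shorter labels that are
    # substrings of longer ones are not clobbered mid-rewrite.
    references.sort(key=len, reverse=True)
    out = list(lines)
    for i, _ in enumerate(out):
        for r in references:
            if r.isdigit():
                new_r = "R%d" % (offset + int(r))
            else:
                new_r = "%s%d" % (r, offset)
            out[i] = out[i].replace('[%s]_' % r, '[%s]_' % new_r)
            out[i] = out[i].replace('.. [%s]' % r, '.. [%s]' % new_r)
    return out, offset + len(references)
```

A usage sketch: with an offset of 3, the label `[1]_` becomes `[R4]_` in both the citation and its definition, matching the numeric-label branch of the original code.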
@ -1,96 +0,0 @@
#
# A pair of directives for inserting content that will only appear in
# either html or latex.
#

from docutils.nodes import Body, Element
from docutils.writers.html4css1 import HTMLTranslator
try:
    from sphinx.latexwriter import LaTeXTranslator
except ImportError:
    from sphinx.writers.latex import LaTeXTranslator

import warnings
warnings.warn("The numpydoc.only_directives module is deprecated;"
              "please use the only:: directive available in Sphinx >= 0.6",
              DeprecationWarning, stacklevel=2)

from docutils.parsers.rst import directives

class html_only(Body, Element):
    pass

class latex_only(Body, Element):
    pass

def run(content, node_class, state, content_offset):
    text = '\n'.join(content)
    node = node_class(text)
    state.nested_parse(content, content_offset, node)
    return [node]

try:
    from docutils.parsers.rst import Directive
except ImportError:
    from docutils.parsers.rst.directives import _directives

    def html_only_directive(name, arguments, options, content, lineno,
                            content_offset, block_text, state, state_machine):
        return run(content, html_only, state, content_offset)

    def latex_only_directive(name, arguments, options, content, lineno,
                             content_offset, block_text, state, state_machine):
        return run(content, latex_only, state, content_offset)

    for func in (html_only_directive, latex_only_directive):
        func.content = 1
        func.options = {}
        func.arguments = None

    _directives['htmlonly'] = html_only_directive
    _directives['latexonly'] = latex_only_directive
else:
    class OnlyDirective(Directive):
        has_content = True
        required_arguments = 0
        optional_arguments = 0
        final_argument_whitespace = True
        option_spec = {}

        def run(self):
            self.assert_has_content()
            return run(self.content, self.node_class,
                       self.state, self.content_offset)

    class HtmlOnlyDirective(OnlyDirective):
        node_class = html_only

    class LatexOnlyDirective(OnlyDirective):
        node_class = latex_only

    directives.register_directive('htmlonly', HtmlOnlyDirective)
    directives.register_directive('latexonly', LatexOnlyDirective)

def setup(app):
    app.add_node(html_only)
    app.add_node(latex_only)

    # Add visit/depart methods to HTML-Translator:
    def visit_perform(self, node):
        pass
    def depart_perform(self, node):
        pass
    def visit_ignore(self, node):
        node.children = []
    def depart_ignore(self, node):
        node.children = []

    HTMLTranslator.visit_html_only = visit_perform
    HTMLTranslator.depart_html_only = depart_perform
    HTMLTranslator.visit_latex_only = visit_ignore
    HTMLTranslator.depart_latex_only = depart_ignore

    LaTeXTranslator.visit_html_only = visit_ignore
    LaTeXTranslator.depart_html_only = depart_ignore
    LaTeXTranslator.visit_latex_only = visit_perform
    LaTeXTranslator.depart_latex_only = depart_perform
@ -1,162 +0,0 @@
"""
==============
phantom_import
==============

Sphinx extension to make directives from ``sphinx.ext.autodoc`` and similar
extensions to use docstrings loaded from an XML file.

This extension loads an XML file in the Pydocweb format [1] and
creates a dummy module that contains the specified docstrings. This
can be used to get the current docstrings from a Pydocweb instance
without needing to rebuild the documented module.

.. [1] http://code.google.com/p/pydocweb

"""
import imp, sys, compiler, types, os, inspect, re

def setup(app):
    app.connect('builder-inited', initialize)
    app.add_config_value('phantom_import_file', None, True)

def initialize(app):
    fn = app.config.phantom_import_file
    if (fn and os.path.isfile(fn)):
        print "[numpydoc] Phantom importing modules from", fn, "..."
        import_phantom_module(fn)

#------------------------------------------------------------------------------
# Creating 'phantom' modules from an XML description
#------------------------------------------------------------------------------
def import_phantom_module(xml_file):
    """
    Insert a fake Python module to sys.modules, based on a XML file.

    The XML file is expected to conform to Pydocweb DTD. The fake
    module will contain dummy objects, which guarantee the following:

    - Docstrings are correct.
    - Class inheritance relationships are correct (if present in XML).
    - Function argspec is *NOT* correct (even if present in XML).
      Instead, the function signature is prepended to the function docstring.
    - Class attributes are *NOT* correct; instead, they are dummy objects.

    Parameters
    ----------
    xml_file : str
        Name of an XML file to read

    """
    import lxml.etree as etree

    object_cache = {}

    tree = etree.parse(xml_file)
    root = tree.getroot()

    # Sort items so that
    # - Base classes come before classes inherited from them
    # - Modules come before their contents
    all_nodes = dict([(n.attrib['id'], n) for n in root])

    def _get_bases(node, recurse=False):
        bases = [x.attrib['ref'] for x in node.findall('base')]
        if recurse:
            j = 0
            while True:
                try:
                    b = bases[j]
                except IndexError: break
                if b in all_nodes:
                    bases.extend(_get_bases(all_nodes[b]))
                j += 1
        return bases

    type_index = ['module', 'class', 'callable', 'object']

    def base_cmp(a, b):
        x = cmp(type_index.index(a.tag), type_index.index(b.tag))
        if x != 0: return x

        if a.tag == 'class' and b.tag == 'class':
            a_bases = _get_bases(a, recurse=True)
            b_bases = _get_bases(b, recurse=True)
            x = cmp(len(a_bases), len(b_bases))
            if x != 0: return x
            if a.attrib['id'] in b_bases: return -1
            if b.attrib['id'] in a_bases: return 1

        return cmp(a.attrib['id'].count('.'), b.attrib['id'].count('.'))

    nodes = root.getchildren()
    nodes.sort(base_cmp)

    # Create phantom items
    for node in nodes:
        name = node.attrib['id']
        doc = (node.text or '').decode('string-escape') + "\n"
        if doc == "\n": doc = ""

        # create parent, if missing
        parent = name
        while True:
            parent = '.'.join(parent.split('.')[:-1])
            if not parent: break
            if parent in object_cache: break
            obj = imp.new_module(parent)
            object_cache[parent] = obj
            sys.modules[parent] = obj

        # create object
        if node.tag == 'module':
            obj = imp.new_module(name)
            obj.__doc__ = doc
            sys.modules[name] = obj
        elif node.tag == 'class':
            bases = [object_cache[b] for b in _get_bases(node)
                     if b in object_cache]
            bases.append(object)
            init = lambda self: None
            init.__doc__ = doc
            obj = type(name, tuple(bases), {'__doc__': doc, '__init__': init})
            obj.__name__ = name.split('.')[-1]
        elif node.tag == 'callable':
            funcname = node.attrib['id'].split('.')[-1]
            argspec = node.attrib.get('argspec')
            if argspec:
                argspec = re.sub('^[^(]*', '', argspec)
                doc = "%s%s\n\n%s" % (funcname, argspec, doc)
            obj = lambda: 0
            obj.__argspec_is_invalid_ = True
            obj.func_name = funcname
            obj.__name__ = name
            obj.__doc__ = doc
            if inspect.isclass(object_cache[parent]):
                obj.__objclass__ = object_cache[parent]
        else:
            class Dummy(object): pass
            obj = Dummy()
            obj.__name__ = name
            obj.__doc__ = doc
            if inspect.isclass(object_cache[parent]):
                obj.__get__ = lambda: None
        object_cache[name] = obj

        if parent:
            if inspect.ismodule(object_cache[parent]):
                obj.__module__ = parent
            setattr(object_cache[parent], name.split('.')[-1], obj)

    # Populate items
    for node in root:
        obj = object_cache.get(node.attrib['id'])
        if obj is None: continue
        for ref in node.findall('ref'):
            if node.tag == 'class':
                if ref.attrib['ref'].startswith(node.attrib['id'] + '.'):
                    setattr(obj, ref.attrib['name'],
                            object_cache.get(ref.attrib['ref']))
            else:
                setattr(obj, ref.attrib['name'],
                        object_cache.get(ref.attrib['ref']))
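phantom_import's core trick, visible in the deleted `import_phantom_module` above, is registering a hand-built dummy module in `sys.modules` so autodoc-style tools can "import" docstrings without the real package being installed. A minimal Python 3 sketch of that mechanism (using `types.ModuleType` instead of the long-removed `imp.new_module`; the helper name `make_phantom_module` and its `docstrings` argument are illustrative, not part of numpydoc):

```python
import sys
import types

def make_phantom_module(name, docstrings):
    """Build a dummy module whose attributes carry only docstrings, and
    register it in sys.modules so it becomes importable (sketch of the
    phantom-import idea above)."""
    mod = types.ModuleType(name)
    for attr, doc in docstrings.items():
        def stub():
            # Placeholder callable: only its metadata matters.
            pass
        stub.__name__ = attr
        stub.__doc__ = doc
        setattr(mod, attr, stub)
    # Registering in sys.modules makes "import <name>" succeed.
    sys.modules[name] = mod
    return mod
```

As in the original, the stubs' argspecs are meaningless; only the docstrings (with the signature prepended, in numpydoc's case) are trustworthy.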
@ -1,563 +0,0 @@
|
|||
"""
|
||||
A special directive for generating a matplotlib plot.
|
||||
|
||||
.. warning::
|
||||
|
||||
This is a hacked version of plot_directive.py from Matplotlib.
|
||||
It's very much subject to change!
|
||||
|
||||
|
||||
Usage
|
||||
-----
|
||||
|
||||
Can be used like this::
|
||||
|
||||
.. plot:: examples/example.py
|
||||
|
||||
.. plot::
|
||||
|
||||
import matplotlib.pyplot as plt
|
||||
plt.plot([1,2,3], [4,5,6])
|
||||
|
||||
.. plot::
|
||||
|
||||
A plotting example:
|
||||
|
||||
>>> import matplotlib.pyplot as plt
|
||||
>>> plt.plot([1,2,3], [4,5,6])
|
||||
|
||||
The content is interpreted as doctest formatted if it has a line starting
|
||||
with ``>>>``.
|
||||
|
||||
The ``plot`` directive supports the options
|
||||
|
||||
format : {'python', 'doctest'}
|
||||
Specify the format of the input
|
||||
|
||||
include-source : bool
|
||||
Whether to display the source code. Default can be changed in conf.py
|
||||
|
||||
and the ``image`` directive options ``alt``, ``height``, ``width``,
|
||||
``scale``, ``align``, ``class``.
|
||||
|
||||
Configuration options
|
||||
---------------------
|
||||
|
||||
The plot directive has the following configuration options:
|
||||
|
||||
plot_include_source
|
||||
Default value for the include-source option
|
||||
|
||||
plot_pre_code
|
||||
Code that should be executed before each plot.
|
||||
|
||||
plot_basedir
|
||||
Base directory, to which plot:: file names are relative to.
|
||||
(If None or empty, file names are relative to the directoly where
|
||||
the file containing the directive is.)
|
||||
|
||||
plot_formats
|
||||
File formats to generate. List of tuples or strings::
|
||||
|
||||
[(suffix, dpi), suffix, ...]
|
||||
|
||||
that determine the file format and the DPI. For entries whose
|
||||
DPI was omitted, sensible defaults are chosen.
|
||||
|
||||
TODO
|
||||
----
|
||||
|
||||
* Refactor Latex output; now it's plain images, but it would be nice
|
||||
to make them appear side-by-side, or in floats.
|
||||
|
||||
"""
|
||||
|
||||
import sys, os, glob, shutil, imp, warnings, cStringIO, re, textwrap, traceback
|
||||
import sphinx
|
||||
|
||||
import warnings
|
||||
warnings.warn("A plot_directive module is also available under "
|
||||
"matplotlib.sphinxext; expect this numpydoc.plot_directive "
|
||||
"module to be deprecated after relevant features have been "
|
||||
"integrated there.",
|
||||
FutureWarning, stacklevel=2)
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# Registration hook
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
def setup(app):
|
||||
setup.app = app
|
||||
setup.config = app.config
|
||||
setup.confdir = app.confdir
|
||||
|
||||
app.add_config_value('plot_pre_code', '', True)
|
||||
app.add_config_value('plot_include_source', False, True)
|
||||
app.add_config_value('plot_formats', ['png', 'hires.png', 'pdf'], True)
|
||||
app.add_config_value('plot_basedir', None, True)
|
||||
|
||||
app.add_directive('plot', plot_directive, True, (0, 1, False),
|
||||
**plot_directive_options)
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# plot:: directive
|
||||
#------------------------------------------------------------------------------
|
||||
from docutils.parsers.rst import directives
|
||||
from docutils import nodes
|
||||
|
||||
def plot_directive(name, arguments, options, content, lineno,
|
||||
content_offset, block_text, state, state_machine):
|
||||
return run(arguments, content, options, state_machine, state, lineno)
|
||||
plot_directive.__doc__ = __doc__
|
||||
|
||||
def _option_boolean(arg):
|
||||
if not arg or not arg.strip():
|
||||
# no argument given, assume used as a flag
|
||||
return True
|
||||
elif arg.strip().lower() in ('no', '0', 'false'):
|
||||
return False
|
||||
elif arg.strip().lower() in ('yes', '1', 'true'):
|
||||
return True
|
||||
else:
|
||||
raise ValueError('"%s" unknown boolean' % arg)
|
||||
|
||||
def _option_format(arg):
|
||||
return directives.choice(arg, ('python', 'lisp'))
|
||||
|
||||
def _option_align(arg):
|
||||
return directives.choice(arg, ("top", "middle", "bottom", "left", "center",
|
||||
"right"))
|
||||
|
||||
plot_directive_options = {'alt': directives.unchanged,
|
||||
'height': directives.length_or_unitless,
|
||||
'width': directives.length_or_percentage_or_unitless,
|
||||
'scale': directives.nonnegative_int,
|
||||
'align': _option_align,
|
||||
'class': directives.class_option,
|
||||
'include-source': _option_boolean,
|
||||
'format': _option_format,
|
||||
}
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# Generating output
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
from docutils import nodes, utils
|
||||
|
||||
try:
|
||||
# Sphinx depends on either Jinja or Jinja2
|
||||
import jinja2
|
||||
def format_template(template, **kw):
|
||||
return jinja2.Template(template).render(**kw)
|
||||
except ImportError:
|
||||
import jinja
|
||||
def format_template(template, **kw):
|
||||
return jinja.from_string(template, **kw)
|
||||
|
||||
TEMPLATE = """
|
||||
{{ source_code }}
|
||||
|
||||
{{ only_html }}
|
||||
|
||||
{% if source_code %}
|
||||
(`Source code <{{ source_link }}>`__)
|
||||
|
||||
.. admonition:: Output
|
||||
:class: plot-output
|
||||
|
||||
{% endif %}
|
||||
|
||||
{% for img in images %}
|
||||
.. figure:: {{ build_dir }}/{{ img.basename }}.png
|
||||
{%- for option in options %}
|
||||
{{ option }}
|
||||
{% endfor %}
|
||||
|
||||
(
|
||||
{%- if not source_code -%}
|
||||
`Source code <{{source_link}}>`__
|
||||
{%- for fmt in img.formats -%}
|
||||
, `{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__
|
||||
{%- endfor -%}
|
||||
{%- else -%}
|
||||
{%- for fmt in img.formats -%}
|
||||
{%- if not loop.first -%}, {% endif -%}
|
||||
`{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__
|
||||
{%- endfor -%}
|
||||
{%- endif -%}
|
||||
)
|
||||
{% endfor %}
|
||||
|
||||
{{ only_latex }}
|
||||
|
||||
{% for img in images %}
|
||||
.. image:: {{ build_dir }}/{{ img.basename }}.pdf
|
||||
{% endfor %}
|
||||
|
||||
"""
|
||||
|
||||
class ImageFile(object):
|
||||
def __init__(self, basename, dirname):
|
||||
self.basename = basename
|
||||
self.dirname = dirname
|
||||
self.formats = []
|
||||
|
||||
def filename(self, format):
|
||||
return os.path.join(self.dirname, "%s.%s" % (self.basename, format))
|
||||
|
||||
def filenames(self):
|
||||
return [self.filename(fmt) for fmt in self.formats]
|
||||
|
||||
def run(arguments, content, options, state_machine, state, lineno):
|
||||
if arguments and content:
|
||||
raise RuntimeError("plot:: directive can't have both args and content")
|
||||
|
||||
document = state_machine.document
|
||||
config = document.settings.env.config
|
||||
|
||||
options.setdefault('include-source', config.plot_include_source)
|
||||
|
||||
# determine input
|
||||
rst_file = document.attributes['source']
|
||||
rst_dir = os.path.dirname(rst_file)
|
||||
|
||||
if arguments:
|
||||
if not config.plot_basedir:
|
||||
source_file_name = os.path.join(rst_dir,
|
||||
directives.uri(arguments[0]))
|
||||
else:
|
||||
source_file_name = os.path.join(setup.confdir, config.plot_basedir,
|
||||
directives.uri(arguments[0]))
|
||||
code = open(source_file_name, 'r').read()
|
||||
output_base = os.path.basename(source_file_name)
|
||||
else:
|
||||
source_file_name = rst_file
|
||||
code = textwrap.dedent("\n".join(map(str, content)))
|
||||
counter = document.attributes.get('_plot_counter', 0) + 1
|
||||
document.attributes['_plot_counter'] = counter
|
||||
base, ext = os.path.splitext(os.path.basename(source_file_name))
|
||||
output_base = '%s-%d.py' % (base, counter)
|
||||
|
||||
base, source_ext = os.path.splitext(output_base)
|
||||
if source_ext in ('.py', '.rst', '.txt'):
|
||||
output_base = base
|
||||
else:
|
||||
source_ext = ''
|
||||
|
||||
# ensure that LaTeX includegraphics doesn't choke in foo.bar.pdf filenames
|
||||
output_base = output_base.replace('.', '-')
|
||||
|
||||
# is it in doctest format?
|
||||
is_doctest = contains_doctest(code)
|
||||
if options.has_key('format'):
|
||||
if options['format'] == 'python':
|
||||
is_doctest = False
|
||||
else:
|
||||
is_doctest = True
|
||||
|
||||
# determine output directory name fragment
|
||||
source_rel_name = relpath(source_file_name, setup.confdir)
|
||||
source_rel_dir = os.path.dirname(source_rel_name)
|
||||
while source_rel_dir.startswith(os.path.sep):
|
||||
source_rel_dir = source_rel_dir[1:]
|
||||
|
||||
# build_dir: where to place output files (temporarily)
|
||||
build_dir = os.path.join(os.path.dirname(setup.app.doctreedir),
|
||||
'plot_directive',
|
||||
source_rel_dir)
|
||||
if not os.path.exists(build_dir):
|
||||
os.makedirs(build_dir)
|
||||
|
||||
# output_dir: final location in the builder's directory
|
||||
dest_dir = os.path.abspath(os.path.join(setup.app.builder.outdir,
|
||||
source_rel_dir))
|
||||
|
||||
# how to link to files from the RST file
|
||||
dest_dir_link = os.path.join(relpath(setup.confdir, rst_dir),
|
||||
source_rel_dir).replace(os.path.sep, '/')
|
||||
build_dir_link = relpath(build_dir, rst_dir).replace(os.path.sep, '/')
|
||||
source_link = dest_dir_link + '/' + output_base + source_ext
|
||||
|
||||
# make figures
|
||||
try:
|
||||
images = makefig(code, source_file_name, build_dir, output_base,
|
||||
config)
|
||||
except PlotError, err:
|
||||
reporter = state.memo.reporter
|
||||
sm = reporter.system_message(
|
||||
3, "Exception occurred in plotting %s: %s" % (output_base, err),
|
||||
line=lineno)
|
||||
return [sm]
|
||||
|
||||
# generate output restructuredtext
|
||||
if options['include-source']:
|
||||
if is_doctest:
|
||||
lines = ['']
|
||||
lines += [row.rstrip() for row in code.split('\n')]
|
||||
else:
|
||||
lines = ['.. code-block:: python', '']
|
||||
lines += [' %s' % row.rstrip() for row in code.split('\n')]
|
||||
source_code = "\n".join(lines)
|
||||
else:
|
||||
source_code = ""
|
||||
|
||||
opts = [':%s: %s' % (key, val) for key, val in options.items()
|
||||
if key in ('alt', 'height', 'width', 'scale', 'align', 'class')]
|
||||
|
||||
if sphinx.__version__ >= "0.6":
|
||||
only_html = ".. only:: html"
|
||||
only_latex = ".. only:: latex"
|
||||
else:
|
||||
only_html = ".. htmlonly::"
|
||||
only_latex = ".. latexonly::"
|
||||
|
||||
result = format_template(
|
||||
TEMPLATE,
|
||||
dest_dir=dest_dir_link,
|
||||
build_dir=build_dir_link,
|
||||
source_link=source_link,
|
||||
only_html=only_html,
|
||||
only_latex=only_latex,
|
||||
options=opts,
|
||||
images=images,
|
||||
source_code=source_code)
|
||||
|
||||
lines = result.split("\n")
|
||||
if len(lines):
|
||||
state_machine.insert_input(
|
||||
lines, state_machine.input_lines.source(0))
|
||||
|
||||
# copy image files to builder's output directory
|
||||
if not os.path.exists(dest_dir):
|
||||
os.makedirs(dest_dir)
|
||||
|
||||
for img in images:
|
||||
for fn in img.filenames():
|
||||
shutil.copyfile(fn, os.path.join(dest_dir, os.path.basename(fn)))
|
||||
|
||||
# copy script (if necessary)
|
||||
if source_file_name == rst_file:
|
||||
target_name = os.path.join(dest_dir, output_base + source_ext)
|
||||
f = open(target_name, 'w')
|
||||
f.write(unescape_doctest(code))
|
||||
f.close()
|
||||
|
||||
return []
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# Run code and capture figures
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
import matplotlib
|
||||
matplotlib.use('Agg')
|
||||
import matplotlib.pyplot as plt
|
||||
import matplotlib.image as image
|
||||
from matplotlib import _pylab_helpers
|
||||
|
||||
import exceptions
|
||||
|
||||
def contains_doctest(text):
|
||||
try:
|
||||
# check if it's valid Python as-is
|
||||
compile(text, '<string>', 'exec')
|
||||
return False
|
||||
except SyntaxError:
|
||||
pass
|
||||
r = re.compile(r'^\s*>>>', re.M)
|
||||
m = r.search(text)
|
||||
return bool(m)
|
||||
|
||||
def unescape_doctest(text):
|
||||
"""
|
||||
Extract code from a piece of text, which contains either Python code
|
||||
or doctests.
|
||||
|
||||
"""
|
||||
if not contains_doctest(text):
|
||||
return text
|
||||
|
||||
code = ""
|
||||
for line in text.split("\n"):
|
||||
m = re.match(r'^\s*(>>>|\.\.\.) (.*)$', line)
|
||||
if m:
|
||||
code += m.group(2) + "\n"
|
||||
elif line.strip():
|
||||
code += "# " + line.strip() + "\n"
|
||||
else:
|
||||
code += "\n"
|
||||
return code
|
||||
|
||||
class PlotError(RuntimeError):
|
||||
pass
|
||||
|
||||
def run_code(code, code_path):
|
||||
# Change the working directory to the directory of the example, so
|
||||
# it can get at its data files, if any.
|
||||
pwd = os.getcwd()
|
||||
old_sys_path = list(sys.path)
|
||||
if code_path is not None:
|
||||
dirname = os.path.abspath(os.path.dirname(code_path))
|
||||
os.chdir(dirname)
|
||||
sys.path.insert(0, dirname)
|
||||
|
||||
# Redirect stdout
|
||||
stdout = sys.stdout
|
||||
sys.stdout = cStringIO.StringIO()
|
||||
|
||||
# Reset sys.argv
|
||||
old_sys_argv = sys.argv
|
||||
sys.argv = [code_path]
|
||||
|
||||
try:
|
||||
try:
|
||||
code = unescape_doctest(code)
|
||||
ns = {}
|
||||
exec setup.config.plot_pre_code in ns
|
||||
exec code in ns
|
||||
except (Exception, SystemExit), err:
|
||||
raise PlotError(traceback.format_exc())
|
||||
finally:
|
||||
os.chdir(pwd)
|
||||
sys.argv = old_sys_argv
|
||||
sys.path[:] = old_sys_path
|
||||
sys.stdout = stdout
|
||||
return ns
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# Generating figures
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
def out_of_date(original, derived):
|
||||
"""
|
||||
Returns True if derivative is out-of-date wrt original,
|
||||
both of which are full file paths.
|
||||
"""
|
||||
return (not os.path.exists(derived)
|
||||
or os.stat(derived).st_mtime < os.stat(original).st_mtime)
|
||||
|
||||
|
||||
def makefig(code, code_path, output_dir, output_base, config):
|
||||
"""
|
||||
Run a pyplot script *code* and save the images under *output_dir*
|
||||
with file names derived from *output_base*
|
||||
|
||||
"""
|
||||
|
||||
# -- Parse format list
|
||||
default_dpi = {'png': 80, 'hires.png': 200, 'pdf': 50}
|
||||
formats = []
|
||||
for fmt in config.plot_formats:
|
||||
if isinstance(fmt, str):
|
||||
formats.append((fmt, default_dpi.get(fmt, 80)))
|
||||
elif type(fmt) in (tuple, list) and len(fmt)==2:
|
||||
formats.append((str(fmt[0]), int(fmt[1])))
|
||||
else:
|
||||
raise PlotError('invalid image format "%r" in plot_formats' % fmt)
|
||||
|
||||
# -- Try to determine if all images already exist
|
||||
|
||||
# Look for single-figure output files first
|
||||
all_exists = True
|
||||
img = ImageFile(output_base, output_dir)
|
||||
for format, dpi in formats:
|
||||
if out_of_date(code_path, img.filename(format)):
|
||||
all_exists = False
|
||||
break
|
||||
img.formats.append(format)
|
||||
|
||||
if all_exists:
|
||||
return [img]
|
||||
|
||||
# Then look for multi-figure output files
|
||||
images = []
|
||||
all_exists = True
|
||||
for i in xrange(1000):
|
||||
img = ImageFile('%s_%02d' % (output_base, i), output_dir)
|
||||
for format, dpi in formats:
|
||||
if out_of_date(code_path, img.filename(format)):
|
||||
all_exists = False
|
||||
break
|
||||
img.formats.append(format)
|
||||
|
||||
# assume that if we have one, we have them all
|
||||
if not all_exists:
|
||||
all_exists = (i > 0)
|
||||
break
|
||||
images.append(img)
|
||||
|
||||
if all_exists:
|
||||
return images
|
||||
|
||||
# -- We didn't find the files, so build them
|
||||
|
||||
# Clear between runs
|
||||
plt.close('all')
|
||||
|
||||
# Run code
|
||||
run_code(code, code_path)
|
||||
|
||||
# Collect images
|
||||
images = []
|
||||
|
||||
fig_managers = _pylab_helpers.Gcf.get_all_fig_managers()
|
||||
for i, figman in enumerate(fig_managers):
|
||||
if len(fig_managers) == 1:
|
||||
img = ImageFile(output_base, output_dir)
|
||||
else:
|
||||
img = ImageFile("%s_%02d" % (output_base, i), output_dir)
|
||||
images.append(img)
|
||||
for format, dpi in formats:
|
||||
try:
|
||||
figman.canvas.figure.savefig(img.filename(format), dpi=dpi)
|
||||
except exceptions.BaseException, err:
|
||||
raise PlotError(traceback.format_exc())
|
||||
img.formats.append(format)
|
||||
|
||||
return images
|
||||
|
||||
|
||||
#------------------------------------------------------------------------------
|
||||
# Relative pathnames
|
||||
#------------------------------------------------------------------------------
|
||||
|
||||
try:
|
||||
from os.path import relpath
|
||||
except ImportError:
|
||||
def relpath(target, base=os.curdir):
|
||||
"""
|
||||
Return a relative path to the target from either the current
|
||||
dir or an optional base dir. Base can be a directory
|
||||
specified either as absolute or relative to current dir.
|
||||
"""
|
||||
|
||||
if not os.path.exists(target):
|
||||
raise OSError, 'Target does not exist: '+target
|
||||
|
||||
if not os.path.isdir(base):
|
||||
raise OSError, 'Base is not a directory or does not exist: '+base
|
||||
|
||||
base_list = (os.path.abspath(base)).split(os.sep)
|
||||
target_list = (os.path.abspath(target)).split(os.sep)
|
||||
|
||||
# On the windows platform the target may be on a completely
|
||||
# different drive from the base.
|
||||
if os.name in ['nt','dos','os2'] and base_list[0] <> target_list[0]:
|
||||
raise OSError, 'Target is on a different drive to base. Target: '+target_list[0].upper()+', base: '+base_list[0].upper()
|
||||
|
||||
# Starting from the filepath root, work out how much of the
|
||||
# filepath is shared by base and target.
|
||||
for i in range(min(len(base_list), len(target_list))):
|
||||
if base_list[i] <> target_list[i]: break
|
||||
else:
|
||||
# If we broke out of the loop, i is pointing to the first
|
||||
# differing path elements. If we didn't break out of the
|
||||
# loop, i is pointing to identical path elements.
|
||||
# Increment i so that in all cases it points to the first
|
||||
# differing path elements.
|
||||
i+=1
|
||||
|
||||
rel_list = [os.pardir] * (len(base_list)-i) + target_list[i:]
|
||||
return os.path.join(*rel_list)
|
|
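Since Python 2.6 the standard library ships `os.path.relpath`, so this fallback only runs on older interpreters. Its common-prefix walk can be sanity-checked against the stdlib version (`posixpath` keeps the separators platform-independent; note the fallback above additionally requires `target` to exist on disk, which the stdlib function does not):

```python
import posixpath

# Same common-prefix logic as the fallback above: climb out of the
# non-shared part of base with "..", then descend into target.
print(posixpath.relpath('/a/b/c', '/a/d'))   # shared prefix /a -> '../b/c'
print(posixpath.relpath('/a/b/c', '/a/b'))   # base is an ancestor -> 'c'
```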
@@ -1,31 +0,0 @@
|
|||
from distutils.core import setup
|
||||
import setuptools
|
||||
import sys, os
|
||||
|
||||
version = "0.3.dev"
|
||||
|
||||
setup(
|
||||
name="numpydoc",
|
||||
packages=["numpydoc"],
|
||||
package_dir={"numpydoc": ""},
|
||||
version=version,
|
||||
description="Sphinx extension to support docstrings in Numpy format",
|
||||
# classifiers from http://pypi.python.org/pypi?%3Aaction=list_classifiers
|
||||
classifiers=["Development Status :: 3 - Alpha",
|
||||
"Environment :: Plugins",
|
||||
"License :: OSI Approved :: BSD License",
|
||||
"Topic :: Documentation"],
|
||||
keywords="sphinx numpy",
|
||||
author="Pauli Virtanen and others",
|
||||
author_email="pav@iki.fi",
|
||||
url="http://projects.scipy.org/numpy/browser/trunk/doc/sphinxext",
|
||||
license="BSD",
|
||||
zip_safe=False,
|
||||
install_requires=["Sphinx >= 0.5"],
|
||||
package_data={'numpydoc': ['tests/*']},
|
||||
entry_points={
|
||||
"console_scripts": [
|
||||
"autosummary_generate = numpydoc.autosummary_generate:main",
|
||||
],
|
||||
},
|
||||
)
|
|
@@ -1,545 +0,0 @@
|
|||
# -*- encoding:utf-8 -*-
|
||||
|
||||
import sys, os
|
||||
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
|
||||
|
||||
from docscrape import NumpyDocString, FunctionDoc, ClassDoc
|
||||
from docscrape_sphinx import SphinxDocString, SphinxClassDoc
|
||||
from nose.tools import *
|
||||
|
||||
doc_txt = '''\
|
||||
numpy.multivariate_normal(mean, cov, shape=None)
|
||||
|
||||
Draw values from a multivariate normal distribution with specified
|
||||
mean and covariance.
|
||||
|
||||
The multivariate normal or Gaussian distribution is a generalisation
|
||||
of the one-dimensional normal distribution to higher dimensions.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
mean : (N,) ndarray
|
||||
Mean of the N-dimensional distribution.
|
||||
|
||||
.. math::
|
||||
|
||||
(1+2+3)/3
|
||||
|
||||
cov : (N,N) ndarray
|
||||
Covariance matrix of the distribution.
|
||||
shape : tuple of ints
|
||||
Given a shape of, for example, (m,n,k), m*n*k samples are
|
||||
generated, and packed in an m-by-n-by-k arrangement. Because
|
||||
each sample is N-dimensional, the output shape is (m,n,k,N).
|
||||
|
||||
Returns
|
||||
-------
|
||||
out : ndarray
|
||||
The drawn samples, arranged according to `shape`. If the
|
||||
shape given is (m,n,...), then the shape of `out` is
|
||||
(m,n,...,N).
|
||||
|
||||
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
|
||||
value drawn from the distribution.
|
||||
|
||||
Warnings
|
||||
--------
|
||||
Certain warnings apply.
|
||||
|
||||
Notes
|
||||
-----
|
||||
|
||||
Instead of specifying the full covariance matrix, popular
|
||||
approximations include:
|
||||
|
||||
- Spherical covariance (`cov` is a multiple of the identity matrix)
|
||||
- Diagonal covariance (`cov` has non-negative elements only on the diagonal)
|
||||
|
||||
This geometrical property can be seen in two dimensions by plotting
|
||||
generated data-points:
|
||||
|
||||
>>> mean = [0,0]
|
||||
>>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
|
||||
|
||||
>>> x,y = multivariate_normal(mean,cov,5000).T
|
||||
>>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
|
||||
|
||||
Note that the covariance matrix must be symmetric and non-negative
|
||||
definite.
|
||||
|
||||
References
|
||||
----------
|
||||
.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
|
||||
Processes," 3rd ed., McGraw-Hill Companies, 1991
|
||||
.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
|
||||
2nd ed., Wiley, 2001.
|
||||
|
||||
See Also
|
||||
--------
|
||||
some, other, funcs
|
||||
otherfunc : relationship
|
||||
|
||||
Examples
|
||||
--------
|
||||
>>> mean = (1,2)
|
||||
>>> cov = [[1,0],[1,0]]
|
||||
>>> x = multivariate_normal(mean,cov,(3,3))
|
||||
>>> print x.shape
|
||||
(3, 3, 2)
|
||||
|
||||
The following is probably true, given that 0.6 is roughly twice the
|
||||
standard deviation:
|
||||
|
||||
>>> print list( (x[0,0,:] - mean) < 0.6 )
|
||||
[True, True]
|
||||
|
||||
.. index:: random
|
||||
:refguide: random;distributions, random;gauss
|
||||
|
||||
'''
|
||||
doc = NumpyDocString(doc_txt)
|
||||
|
||||
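The sample docstring above follows the numpydoc convention that `NumpyDocString` parses: a section title underlined by a run of dashes of the same length. A minimal, hand-rolled sketch of recognising those headers (`find_sections` is illustrative only, not the parser's API):

```python
import re

# Recognise numpydoc-style section headers: a title line followed by a
# dash underline of exactly the same length.
def find_sections(text):
    lines = text.split('\n')
    return [lines[i].strip()
            for i in range(len(lines) - 1)
            if lines[i].strip()
            and re.match(r'^\s*-+\s*$', lines[i + 1])
            and len(lines[i + 1].strip()) == len(lines[i].strip())]

sample = "Parameters\n----------\nx : int\n\nReturns\n-------\ny : int"
print(find_sections(sample))  # -> ['Parameters', 'Returns']
```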
|
||||
def test_signature():
|
||||
assert doc['Signature'].startswith('numpy.multivariate_normal(')
|
||||
assert doc['Signature'].endswith('shape=None)')
|
||||
|
||||
def test_summary():
|
||||
assert doc['Summary'][0].startswith('Draw values')
|
||||
assert doc['Summary'][-1].endswith('covariance.')
|
||||
|
||||
def test_extended_summary():
|
||||
assert doc['Extended Summary'][0].startswith('The multivariate normal')
|
||||
|
||||
def test_parameters():
|
||||
assert_equal(len(doc['Parameters']), 3)
|
||||
assert_equal([n for n,_,_ in doc['Parameters']], ['mean','cov','shape'])
|
||||
|
||||
arg, arg_type, desc = doc['Parameters'][1]
|
||||
assert_equal(arg_type, '(N,N) ndarray')
|
||||
assert desc[0].startswith('Covariance matrix')
|
||||
assert doc['Parameters'][0][-1][-2] == ' (1+2+3)/3'
|
||||
|
||||
def test_returns():
|
||||
assert_equal(len(doc['Returns']), 1)
|
||||
arg, arg_type, desc = doc['Returns'][0]
|
||||
assert_equal(arg, 'out')
|
||||
assert_equal(arg_type, 'ndarray')
|
||||
assert desc[0].startswith('The drawn samples')
|
||||
assert desc[-1].endswith('distribution.')
|
||||
|
||||
def test_notes():
|
||||
assert doc['Notes'][0].startswith('Instead')
|
||||
assert doc['Notes'][-1].endswith('definite.')
|
||||
assert_equal(len(doc['Notes']), 17)
|
||||
|
||||
def test_references():
|
||||
assert doc['References'][0].startswith('..')
|
||||
assert doc['References'][-1].endswith('2001.')
|
||||
|
||||
def test_examples():
|
||||
assert doc['Examples'][0].startswith('>>>')
|
||||
assert doc['Examples'][-1].endswith('True]')
|
||||
|
||||
def test_index():
|
||||
assert_equal(doc['index']['default'], 'random')
|
||||
print doc['index']
|
||||
assert_equal(len(doc['index']), 2)
|
||||
assert_equal(len(doc['index']['refguide']), 2)
|
||||
|
||||
def non_blank_line_by_line_compare(a,b):
|
||||
a = [l for l in a.split('\n') if l.strip()]
|
||||
b = [l for l in b.split('\n') if l.strip()]
|
||||
for n,line in enumerate(a):
|
||||
if not line == b[n]:
|
||||
raise AssertionError("Lines %s of a and b differ: "
|
||||
"\n>>> %s\n<<< %s\n" %
|
||||
(n,line,b[n]))
|
||||
def test_str():
|
||||
non_blank_line_by_line_compare(str(doc),
|
||||
"""numpy.multivariate_normal(mean, cov, shape=None)
|
||||
|
||||
Draw values from a multivariate normal distribution with specified
|
||||
mean and covariance.
|
||||
|
||||
The multivariate normal or Gaussian distribution is a generalisation
|
||||
of the one-dimensional normal distribution to higher dimensions.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
mean : (N,) ndarray
|
||||
Mean of the N-dimensional distribution.
|
||||
|
||||
.. math::
|
||||
|
||||
(1+2+3)/3
|
||||
|
||||
cov : (N,N) ndarray
|
||||
Covariance matrix of the distribution.
|
||||
shape : tuple of ints
|
||||
Given a shape of, for example, (m,n,k), m*n*k samples are
|
||||
generated, and packed in an m-by-n-by-k arrangement. Because
|
||||
each sample is N-dimensional, the output shape is (m,n,k,N).
|
||||
|
||||
Returns
|
||||
-------
|
||||
out : ndarray
|
||||
The drawn samples, arranged according to `shape`. If the
|
||||
shape given is (m,n,...), then the shape of `out` is
|
||||
(m,n,...,N).
|
||||
|
||||
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
|
||||
value drawn from the distribution.
|
||||
|
||||
Warnings
|
||||
--------
|
||||
Certain warnings apply.
|
||||
|
||||
See Also
|
||||
--------
|
||||
`some`_, `other`_, `funcs`_
|
||||
|
||||
`otherfunc`_
|
||||
relationship
|
||||
|
||||
Notes
|
||||
-----
|
||||
Instead of specifying the full covariance matrix, popular
|
||||
approximations include:
|
||||
|
||||
- Spherical covariance (`cov` is a multiple of the identity matrix)
|
||||
- Diagonal covariance (`cov` has non-negative elements only on the diagonal)
|
||||
|
||||
This geometrical property can be seen in two dimensions by plotting
|
||||
generated data-points:
|
||||
|
||||
>>> mean = [0,0]
|
||||
>>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
|
||||
|
||||
>>> x,y = multivariate_normal(mean,cov,5000).T
|
||||
>>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
|
||||
|
||||
Note that the covariance matrix must be symmetric and non-negative
|
||||
definite.
|
||||
|
||||
References
|
||||
----------
|
||||
.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
|
||||
Processes," 3rd ed., McGraw-Hill Companies, 1991
|
||||
.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
|
||||
2nd ed., Wiley, 2001.
|
||||
|
||||
Examples
|
||||
--------
|
||||
>>> mean = (1,2)
|
||||
>>> cov = [[1,0],[1,0]]
|
||||
>>> x = multivariate_normal(mean,cov,(3,3))
|
||||
>>> print x.shape
|
||||
(3, 3, 2)
|
||||
|
||||
The following is probably true, given that 0.6 is roughly twice the
|
||||
standard deviation:
|
||||
|
||||
>>> print list( (x[0,0,:] - mean) < 0.6 )
|
||||
[True, True]
|
||||
|
||||
.. index:: random
|
||||
:refguide: random;distributions, random;gauss""")
|
||||
|
||||
|
||||
def test_sphinx_str():
|
||||
sphinx_doc = SphinxDocString(doc_txt)
|
||||
non_blank_line_by_line_compare(str(sphinx_doc),
|
||||
"""
|
||||
.. index:: random
|
||||
single: random;distributions, random;gauss
|
||||
|
||||
Draw values from a multivariate normal distribution with specified
|
||||
mean and covariance.
|
||||
|
||||
The multivariate normal or Gaussian distribution is a generalisation
|
||||
of the one-dimensional normal distribution to higher dimensions.
|
||||
|
||||
:Parameters:
|
||||
|
||||
**mean** : (N,) ndarray
|
||||
|
||||
Mean of the N-dimensional distribution.
|
||||
|
||||
.. math::
|
||||
|
||||
(1+2+3)/3
|
||||
|
||||
**cov** : (N,N) ndarray
|
||||
|
||||
Covariance matrix of the distribution.
|
||||
|
||||
**shape** : tuple of ints
|
||||
|
||||
Given a shape of, for example, (m,n,k), m*n*k samples are
|
||||
generated, and packed in an m-by-n-by-k arrangement. Because
|
||||
each sample is N-dimensional, the output shape is (m,n,k,N).
|
||||
|
||||
:Returns:
|
||||
|
||||
**out** : ndarray
|
||||
|
||||
The drawn samples, arranged according to `shape`. If the
|
||||
shape given is (m,n,...), then the shape of `out` is
|
||||
(m,n,...,N).
|
||||
|
||||
In other words, each entry ``out[i,j,...,:]`` is an N-dimensional
|
||||
value drawn from the distribution.
|
||||
|
||||
.. warning::
|
||||
|
||||
Certain warnings apply.
|
||||
|
||||
.. seealso::
|
||||
|
||||
:obj:`some`, :obj:`other`, :obj:`funcs`
|
||||
|
||||
:obj:`otherfunc`
|
||||
relationship
|
||||
|
||||
.. rubric:: Notes
|
||||
|
||||
Instead of specifying the full covariance matrix, popular
|
||||
approximations include:
|
||||
|
||||
- Spherical covariance (`cov` is a multiple of the identity matrix)
|
||||
- Diagonal covariance (`cov` has non-negative elements only on the diagonal)
|
||||
|
||||
This geometrical property can be seen in two dimensions by plotting
|
||||
generated data-points:
|
||||
|
||||
>>> mean = [0,0]
|
||||
>>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis
|
||||
|
||||
>>> x,y = multivariate_normal(mean,cov,5000).T
|
||||
>>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show()
|
||||
|
||||
Note that the covariance matrix must be symmetric and non-negative
|
||||
definite.
|
||||
|
||||
.. rubric:: References
|
||||
|
||||
.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic
|
||||
Processes," 3rd ed., McGraw-Hill Companies, 1991
|
||||
.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification,"
|
||||
2nd ed., Wiley, 2001.
|
||||
|
||||
.. only:: latex
|
||||
|
||||
[1]_, [2]_
|
||||
|
||||
.. rubric:: Examples
|
||||
|
||||
>>> mean = (1,2)
|
||||
>>> cov = [[1,0],[1,0]]
|
||||
>>> x = multivariate_normal(mean,cov,(3,3))
|
||||
>>> print x.shape
|
||||
(3, 3, 2)
|
||||
|
||||
The following is probably true, given that 0.6 is roughly twice the
|
||||
standard deviation:
|
||||
|
||||
>>> print list( (x[0,0,:] - mean) < 0.6 )
|
||||
[True, True]
|
||||
""")
|
||||
|
||||
|
||||
doc2 = NumpyDocString("""
|
||||
Returns array of indices of the maximum values along the given axis.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
a : {array_like}
|
||||
Array to look in.
|
||||
axis : {None, integer}
|
||||
If None, the index is into the flattened array, otherwise along
|
||||
the specified axis""")
|
||||
|
||||
def test_parameters_without_extended_description():
|
||||
assert_equal(len(doc2['Parameters']), 2)
|
||||
|
||||
doc3 = NumpyDocString("""
|
||||
my_signature(*params, **kwds)
|
||||
|
||||
Return this and that.
|
||||
""")
|
||||
|
||||
def test_escape_stars():
|
||||
signature = str(doc3).split('\n')[0]
|
||||
assert_equal(signature, r'my_signature(\*params, \*\*kwds)')
|
||||
|
||||
doc4 = NumpyDocString(
|
||||
"""a.conj()
|
||||
|
||||
Return an array with all complex-valued elements conjugated.""")
|
||||
|
||||
def test_empty_extended_summary():
|
||||
assert_equal(doc4['Extended Summary'], [])
|
||||
|
||||
doc5 = NumpyDocString(
|
||||
"""
|
||||
a.something()
|
||||
|
||||
Raises
|
||||
------
|
||||
LinAlgException
|
||||
If array is singular.
|
||||
|
||||
""")
|
||||
|
||||
def test_raises():
|
||||
assert_equal(len(doc5['Raises']), 1)
|
||||
name,_,desc = doc5['Raises'][0]
|
||||
assert_equal(name,'LinAlgException')
|
||||
assert_equal(desc,['If array is singular.'])
|
||||
|
||||
def test_see_also():
|
||||
doc6 = NumpyDocString(
|
||||
"""
|
||||
z(x,theta)
|
||||
|
||||
See Also
|
||||
--------
|
||||
func_a, func_b, func_c
|
||||
func_d : some equivalent func
|
||||
foo.func_e : some other func over
|
||||
multiple lines
|
||||
func_f, func_g, :meth:`func_h`, func_j,
|
||||
func_k
|
||||
:obj:`baz.obj_q`
|
||||
:class:`class_j`: fubar
|
||||
foobar
|
||||
""")
|
||||
|
||||
assert len(doc6['See Also']) == 12
|
||||
for func, desc, role in doc6['See Also']:
|
||||
if func in ('func_a', 'func_b', 'func_c', 'func_f',
|
||||
'func_g', 'func_h', 'func_j', 'func_k', 'baz.obj_q'):
|
||||
assert(not desc)
|
||||
else:
|
||||
assert(desc)
|
||||
|
||||
if func == 'func_h':
|
||||
assert role == 'meth'
|
||||
elif func == 'baz.obj_q':
|
||||
assert role == 'obj'
|
||||
elif func == 'class_j':
|
||||
assert role == 'class'
|
||||
else:
|
||||
assert role is None
|
||||
|
||||
if func == 'func_d':
|
||||
assert desc == ['some equivalent func']
|
||||
elif func == 'foo.func_e':
|
||||
assert desc == ['some other func over', 'multiple lines']
|
||||
elif func == 'class_j':
|
||||
assert desc == ['fubar', 'foobar']
|
||||
|
||||
def test_see_also_print():
|
||||
class Dummy(object):
|
||||
"""
|
||||
See Also
|
||||
--------
|
||||
func_a, func_b
|
||||
func_c : some relationship
|
||||
goes here
|
||||
func_d
|
||||
"""
|
||||
pass
|
||||
|
||||
obj = Dummy()
|
||||
s = str(FunctionDoc(obj, role='func'))
|
||||
assert(':func:`func_a`, :func:`func_b`' in s)
|
||||
assert(' some relationship' in s)
|
||||
assert(':func:`func_d`' in s)
|
||||
|
||||
doc7 = NumpyDocString("""
|
||||
|
||||
Doc starts on second line.
|
||||
|
||||
""")
|
||||
|
||||
def test_empty_first_line():
|
||||
assert doc7['Summary'][0].startswith('Doc starts')
|
||||
|
||||
|
||||
def test_no_summary():
|
||||
str(SphinxDocString("""
|
||||
Parameters
|
||||
----------"""))
|
||||
|
||||
|
||||
def test_unicode():
|
||||
doc = SphinxDocString("""
|
||||
öäöäöäöäöåååå
|
||||
|
||||
öäöäöäööäååå
|
||||
|
||||
Parameters
|
||||
----------
|
||||
ååå : äää
|
||||
ööö
|
||||
|
||||
Returns
|
||||
-------
|
||||
ååå : ööö
|
||||
äää
|
||||
|
||||
""")
|
||||
assert doc['Summary'][0] == u'öäöäöäöäöåååå'.encode('utf-8')
|
||||
|
||||
def test_plot_examples():
|
||||
cfg = dict(use_plots=True)
|
||||
|
||||
doc = SphinxDocString("""
|
||||
Examples
|
||||
--------
|
||||
>>> import matplotlib.pyplot as plt
|
||||
>>> plt.plot([1,2,3],[4,5,6])
|
||||
>>> plt.show()
|
||||
""", config=cfg)
|
||||
assert 'plot::' in str(doc), str(doc)
|
||||
|
||||
doc = SphinxDocString("""
|
||||
Examples
|
||||
--------
|
||||
.. plot::
|
||||
|
||||
import matplotlib.pyplot as plt
|
||||
plt.plot([1,2,3],[4,5,6])
|
||||
plt.show()
|
||||
""", config=cfg)
|
||||
assert str(doc).count('plot::') == 1, str(doc)
|
||||
|
||||
def test_class_members():
|
||||
|
||||
class Dummy(object):
|
||||
"""
|
||||
Dummy class.
|
||||
|
||||
"""
|
||||
def spam(self, a, b):
|
||||
"""Spam\n\nSpam spam."""
|
||||
pass
|
||||
def ham(self, c, d):
|
||||
"""Cheese\n\nNo cheese."""
|
||||
pass
|
||||
|
||||
for cls in (ClassDoc, SphinxClassDoc):
|
||||
doc = cls(Dummy, config=dict(show_class_members=False))
|
||||
assert 'Methods' not in str(doc), (cls, str(doc))
|
||||
assert 'spam' not in str(doc), (cls, str(doc))
|
||||
assert 'ham' not in str(doc), (cls, str(doc))
|
||||
|
||||
doc = cls(Dummy, config=dict(show_class_members=True))
|
||||
assert 'Methods' in str(doc), (cls, str(doc))
|
||||
assert 'spam' in str(doc), (cls, str(doc))
|
||||
assert 'ham' in str(doc), (cls, str(doc))
|
||||
|
||||
if cls is SphinxClassDoc:
|
||||
assert '.. autosummary::' in str(doc), str(doc)
|
|
@@ -1,140 +0,0 @@
|
|||
"""
|
||||
=========
|
||||
traitsdoc
|
||||
=========
|
||||
|
||||
Sphinx extension that handles docstrings in the Numpy standard format [1],
|
||||
and supports Traits [2].
|
||||
|
||||
This extension can be used as a replacement for ``numpydoc`` when support
|
||||
for Traits is required.
|
||||
|
||||
.. [1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard
|
||||
.. [2] http://code.enthought.com/projects/traits/
|
||||
|
||||
"""
|
||||
|
||||
import inspect
|
||||
import os
|
||||
import pydoc
|
||||
|
||||
import docscrape
|
||||
import docscrape_sphinx
|
||||
from docscrape_sphinx import SphinxClassDoc, SphinxFunctionDoc, SphinxDocString
|
||||
|
||||
import numpydoc
|
||||
|
||||
import comment_eater
|
||||
|
||||
class SphinxTraitsDoc(SphinxClassDoc):
|
||||
def __init__(self, cls, modulename='', func_doc=SphinxFunctionDoc):
|
||||
if not inspect.isclass(cls):
|
||||
raise ValueError("Initialise using a class. Got %r" % cls)
|
||||
self._cls = cls
|
||||
|
||||
if modulename and not modulename.endswith('.'):
|
||||
modulename += '.'
|
||||
self._mod = modulename
|
||||
self._name = cls.__name__
|
||||
self._func_doc = func_doc
|
||||
|
||||
docstring = pydoc.getdoc(cls)
|
||||
docstring = docstring.split('\n')
|
||||
|
||||
# De-indent paragraph
|
||||
try:
|
||||
indent = min(len(s) - len(s.lstrip()) for s in docstring
|
||||
if s.strip())
|
||||
except ValueError:
|
||||
indent = 0
|
||||
|
||||
for n,line in enumerate(docstring):
|
||||
docstring[n] = docstring[n][indent:]
|
||||
|
||||
self._doc = docscrape.Reader(docstring)
|
||||
self._parsed_data = {
|
||||
'Signature': '',
|
||||
'Summary': '',
|
||||
'Description': [],
|
||||
'Extended Summary': [],
|
||||
'Parameters': [],
|
||||
'Returns': [],
|
||||
'Raises': [],
|
||||
'Warns': [],
|
||||
'Other Parameters': [],
|
||||
'Traits': [],
|
||||
'Methods': [],
|
||||
'See Also': [],
|
||||
'Notes': [],
|
||||
'References': '',
|
||||
'Example': '',
|
||||
'Examples': '',
|
||||
'index': {}
|
||||
}
|
||||
|
||||
self._parse()
|
||||
|
||||
def _str_summary(self):
|
||||
return self['Summary'] + ['']
|
||||
|
||||
def _str_extended_summary(self):
|
||||
return self['Description'] + self['Extended Summary'] + ['']
|
||||
|
||||
def __str__(self, indent=0, func_role="func"):
|
||||
out = []
|
||||
out += self._str_signature()
|
||||
out += self._str_index() + ['']
|
||||
out += self._str_summary()
|
||||
out += self._str_extended_summary()
|
||||
for param_list in ('Parameters', 'Traits', 'Methods',
|
||||
'Returns','Raises'):
|
||||
out += self._str_param_list(param_list)
|
||||
out += self._str_see_also("obj")
|
||||
out += self._str_section('Notes')
|
||||
out += self._str_references()
|
||||
out += self._str_section('Example')
|
||||
out += self._str_section('Examples')
|
||||
out = self._str_indent(out,indent)
|
||||
return '\n'.join(out)
|
||||
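The docstring de-indentation performed in `__init__` above (find the minimum indent of the non-blank lines, then strip it from every line) mirrors what the stdlib already provides:

```python
import inspect

# inspect.cleandoc removes leading whitespace from the first line and
# the common leading indentation from all subsequent lines, which is
# the same normalisation the constructor above does by hand.
doc = """First line.
        Indented body line.
        Another body line."""
print(inspect.cleandoc(doc))
```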
|
||||
def looks_like_issubclass(obj, classname):
|
||||
""" Return True if the object has a class or superclass with the given class
|
||||
name.
|
||||
|
||||
Ignores old-style classes.
|
||||
"""
|
||||
t = obj
|
||||
if t.__name__ == classname:
|
||||
return True
|
||||
for klass in getattr(t, '__mro__', ()):  # old-style classes have no __mro__
|
||||
if klass.__name__ == classname:
|
||||
return True
|
||||
return False
|
||||
|
||||
def get_doc_object(obj, what=None, config=None):
|
||||
if what is None:
|
||||
if inspect.isclass(obj):
|
||||
what = 'class'
|
||||
elif inspect.ismodule(obj):
|
||||
what = 'module'
|
||||
elif callable(obj):
|
||||
what = 'function'
|
||||
else:
|
||||
what = 'object'
|
||||
if what == 'class':
|
||||
doc = SphinxTraitsDoc(obj, '', func_doc=SphinxFunctionDoc, config=config)
|
||||
if looks_like_issubclass(obj, 'HasTraits'):
|
||||
for name, trait, comment in comment_eater.get_class_traits(obj):
|
||||
# Exclude private traits.
|
||||
if not name.startswith('_'):
|
||||
doc['Traits'].append((name, trait, comment.splitlines()))
|
||||
return doc
|
||||
elif what in ('function', 'method'):
|
||||
return SphinxFunctionDoc(obj, '', config=config)
|
||||
else:
|
||||
return SphinxDocString(pydoc.getdoc(obj), config=config)
|
||||
|
||||
def setup(app):
|
||||
# init numpydoc
|
||||
numpydoc.setup(app, get_doc_object)
|
||||
|
|
@@ -1,128 +0,0 @@
|
|||
"""
|
||||
SciPy: A scientific computing package for Python
|
||||
================================================
|
||||
|
||||
Documentation is available in the docstrings and
|
||||
online at http://docs.scipy.org.
|
||||
|
||||
Contents
|
||||
--------
|
||||
SciPy imports all the functions from the NumPy namespace, and in
|
||||
addition provides:
|
||||
|
||||
Subpackages
|
||||
-----------
|
||||
::
|
||||
|
||||
odr --- Orthogonal Distance Regression [*]
|
||||
misc --- Various utilities that don't have
|
||||
another home.
|
||||
cluster --- Vector Quantization / Kmeans [*]
|
||||
fftpack --- Discrete Fourier Transform algorithms
|
||||
[*]
|
||||
io --- Data input and output [*]
|
||||
sparse.linalg.eigen.lobpcg --- Locally Optimal Block Preconditioned
|
||||
Conjugate Gradient Method (LOBPCG) [*]
|
||||
special --- Airy Functions [*]
|
||||
lib.blas --- Wrappers to BLAS library [*]
|
||||
sparse.linalg.eigen --- Sparse Eigenvalue Solvers [*]
|
||||
stats --- Statistical Functions [*]
|
||||
lib --- Python wrappers to external libraries
|
||||
[*]
|
||||
lib.lapack --- Wrappers to LAPACK library [*]
|
||||
maxentropy --- Routines for fitting maximum entropy
|
||||
models [*]
|
||||
integrate --- Integration routines [*]
|
||||
ndimage --- n-dimensional image package [*]
|
||||
linalg --- Linear algebra routines [*]
|
||||
spatial --- Spatial data structures and algorithms
|
||||
[*]
|
||||
interpolate --- Interpolation Tools [*]
|
||||
sparse.linalg --- Sparse Linear Algebra [*]
|
||||
sparse.linalg.dsolve.umfpack --- Interface to the UMFPACK library [*]
|
||||
sparse.linalg.dsolve --- Linear Solvers [*]
|
||||
optimize --- Optimization Tools [*]
|
||||
sparse.linalg.eigen.arpack --- Eigenvalue solver using iterative
|
||||
methods. [*]
|
||||
signal --- Signal Processing Tools [*]
|
||||
sparse --- Sparse Matrices [*]
|
||||
|
||||
[*] - using a package requires explicit import
|
||||
|
||||
Global symbols from subpackages
|
||||
-------------------------------
|
||||
::
|
||||
|
||||
misc --> info, factorial, factorial2, factorialk,
|
||||
comb, who, lena, central_diff_weights,
|
||||
derivative, pade, source
|
||||
fftpack --> fft, fftn, fft2, ifft, ifft2, ifftn,
|
||||
fftshift, ifftshift, fftfreq
|
||||
stats --> find_repeats
|
||||
linalg.dsolve.umfpack --> UmfpackContext
|
||||
|
||||
Utility tools
|
||||
-------------
|
||||
::
|
||||
|
||||
test --- Run scipy unittests
|
||||
show_config --- Show scipy build configuration
|
||||
show_numpy_config --- Show numpy build configuration
|
||||
__version__ --- Scipy version string
|
||||
__numpy_version__ --- Numpy version string
|
||||
|
||||
"""
|
||||
|
||||
__all__ = ['pkgload','test']
|
||||
|
||||
from numpy import show_config as show_numpy_config
|
||||
if show_numpy_config is None:
|
||||
raise ImportError,"Cannot import scipy when running from numpy source directory."
|
||||
from numpy import __version__ as __numpy_version__
|
||||
|
||||
# Import numpy symbols to scipy name space
|
||||
import numpy as _num
|
||||
from numpy import oldnumeric
|
||||
from numpy import *
|
||||
from numpy.random import rand, randn
|
||||
from numpy.fft import fft, ifft
|
||||
from numpy.lib.scimath import *
|
||||
|
||||
# Emit a warning if numpy is too old
|
||||
majver, minver = [float(i) for i in _num.version.version.split('.')[:2]]
|
||||
if majver < 1 or (majver == 1 and minver < 2):
|
||||
import warnings
|
||||
warnings.warn("Numpy 1.2.0 or above is recommended for this version of " \
|
||||
"scipy (detected version %s)" % _num.version.version,
|
||||
UserWarning)
|
||||
|
||||
__all__ += ['oldnumeric']+_num.__all__
|
||||
|
||||
__all__ += ['randn', 'rand', 'fft', 'ifft']
|
||||
|
||||
del _num
|
||||
# Remove the linalg imported from numpy so that the scipy.linalg package can be
|
||||
# imported.
|
||||
del linalg
|
||||
__all__.remove('linalg')
|
||||
|
||||
try:
|
||||
from scipy.__config__ import show as show_config
|
||||
except ImportError:
|
||||
msg = """Error importing scipy: you cannot import scipy while
|
||||
in the scipy source directory; please exit the scipy source
|
||||
tree first, and relaunch your Python interpreter."""
|
||||
raise ImportError(msg)
|
||||
from scipy.version import version as __version__
|
||||
|
||||
# Load scipy packages and their global_symbols
|
||||
from numpy._import_tools import PackageLoader
|
||||
import os as _os
|
||||
SCIPY_IMPORT_VERBOSE = int(_os.environ.get('SCIPY_IMPORT_VERBOSE','-1'))
|
||||
del _os
|
||||
pkgload = PackageLoader()
|
||||
pkgload(verbose=SCIPY_IMPORT_VERBOSE,postpone=True)
|
||||
|
||||
from numpy.testing import Tester
|
||||
test = Tester().test
|
||||
bench = Tester().bench
|
|
@@ -1,15 +0,0 @@
|
|||
# Last Change: Mon Nov 03 07:00 PM 2008 J
|
||||
# vim:syntax=python
|
||||
from os.path import join
|
||||
|
||||
from numscons import GetNumpyEnvironment
|
||||
|
||||
env = GetNumpyEnvironment(ARGUMENTS)
|
||||
|
||||
env.NumpyPythonExtension('_hierarchy_wrap',
|
||||
source = [join('src', 'hierarchy_wrap.c'),
|
||||
join('src', 'hierarchy.c')])
|
||||
|
||||
env.NumpyPythonExtension('_vq',
|
||||
source = [join('src', 'vq_module.c'),
|
||||
join('src', 'vq.c')])
|
|
@@ -1,2 +0,0 @@
|
|||
from numscons import GetInitEnvironment
|
||||
GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript')
|
|
@@ -1,12 +0,0 @@
|
|||
#
|
||||
# spatial - Distances
|
||||
#
|
||||
|
||||
from info import __doc__
|
||||
|
||||
__all__ = ['vq', 'hierarchy']
|
||||
|
||||
import vq, hierarchy
|
||||
|
||||
from numpy.testing import Tester
|
||||
test = Tester().test
|
File diff suppressed because it is too large
|
@@ -1,25 +0,0 @@
|
|||
"""
|
||||
Vector Quantization / Kmeans
|
||||
============================
|
||||
|
||||
Clustering algorithms are useful in information theory, target detection,
|
||||
communications, compression, and other areas. The vq module only
|
||||
supports vector quantization and the k-means algorithms. Development
|
||||
of self-organizing maps (SOM) and other approaches is underway.
|
||||
|
||||
Hierarchical Clustering
|
||||
=======================
|
||||
|
||||
The hierarchy module provides functions for hierarchical and agglomerative
|
||||
clustering. Its features include generating hierarchical clusters from
|
||||
distance matrices, computing distance matrices from observation vectors,
|
||||
calculating statistics on clusters, cutting linkages to generate flat
|
||||
clusters, and visualizing clusters with dendrograms.
|
||||
|
||||
Distance Computation
|
||||
====================
|
||||
|
||||
The distance module provides functions for computing distances between
|
||||
pairs of vectors from a set of observation vectors.
|
||||
|
||||
"""
|
|
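The vector-quantization step described above can be sketched in plain NumPy: each observation is assigned the index of its nearest code-book entry. This is a hand-rolled illustration of the idea (the library routine for it is `scipy.cluster.vq.vq`, which adds input validation and returns distortions as well); `assign_codes` is a hypothetical name:

```python
import numpy as np

# Map each observation to the index of its nearest code-book centroid
# (squared Euclidean distance), i.e. the core of vector quantization.
def assign_codes(obs, code_book):
    d2 = ((obs[:, None, :] - code_book[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

obs = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0]])
code_book = np.array([[0.0, 0.0], [5.0, 5.0]])
print(assign_codes(obs, code_book))  # -> [0 0 1]
```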
@@ -1,30 +0,0 @@
|
|||
#!/usr/bin/env python
|
||||
|
||||
from os.path import join
|
||||
|
||||
def configuration(parent_package = '', top_path = None):
|
||||
from numpy.distutils.misc_util import Configuration, get_numpy_include_dirs
|
||||
config = Configuration('cluster', parent_package, top_path)
|
||||
|
||||
config.add_data_dir('tests')
|
||||
|
||||
config.add_extension('_vq',
|
||||
sources=[join('src', 'vq_module.c'), join('src', 'vq.c')],
|
||||
include_dirs = [get_numpy_include_dirs()])
|
||||
|
||||
config.add_extension('_hierarchy_wrap',
|
||||
sources=[join('src', 'hierarchy_wrap.c'), join('src', 'hierarchy.c')],
|
||||
include_dirs = [get_numpy_include_dirs()])
|
||||
|
||||
return config
|
||||
|
||||
if __name__ == '__main__':
|
||||
from numpy.distutils.core import setup
|
||||
setup(maintainer = "SciPy Developers",
|
||||
author = "Eric Jones",
|
||||
maintainer_email = "scipy-dev@scipy.org",
|
||||
description = "Clustering Algorithms (Information Theory)",
|
||||
url = "http://www.scipy.org",
|
||||
license = "SciPy License (BSD Style)",
|
||||
**configuration(top_path='').todict()
|
||||
)
|
|
@@ -1,27 +0,0 @@
|
|||
#!/usr/bin/env python
|
||||
|
||||
from os.path import join
|
||||
|
||||
def configuration(parent_package = '', top_path = None):
|
||||
from numpy.distutils.misc_util import Configuration, get_numpy_include_dirs
|
||||
config = Configuration('cluster', parent_package, top_path)
|
||||
|
||||
config.add_data_dir('tests')
|
||||
|
||||
#config.add_extension('_vq',
|
||||
# sources=[join('src', 'vq_module.c'), join('src', 'vq.c')],
|
||||
# include_dirs = [get_numpy_include_dirs()])
|
||||
config.add_sconscript('SConstruct')
|
||||
|
||||
return config
|
||||
|
||||
if __name__ == '__main__':
|
||||
from numpy.distutils.core import setup
|
||||
setup(maintainer = "SciPy Developers",
|
||||
author = "Eric Jones",
|
||||
maintainer_email = "scipy-dev@scipy.org",
|
||||
description = "Clustering Algorithms (Information Theory)",
|
||||
url = "http://www.scipy.org",
|
||||
license = "SciPy License (BSD Style)",
|
||||
**configuration(top_path='').todict()
|
||||
)
|
|
@@ -1,69 +0,0 @@
|
|||
/**
|
||||
* common.h
|
||||
*
|
||||
* Author: Damian Eads
|
||||
* Date: September 22, 2007 (moved into new file on June 8, 2008)
|
||||
*
|
||||
* Copyright (c) 2007, 2008, Damian Eads. All rights reserved.
|
||||
* Adapted for incorporation into Scipy, April 9, 2008.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions
|
||||
* are met:
|
||||
* - Redistributions of source code must retain the above
|
||||
* copyright notice, this list of conditions and the
|
||||
* following disclaimer.
|
||||
* - Redistributions in binary form must reproduce the above copyright
|
||||
* notice, this list of conditions and the following disclaimer
|
||||
* in the documentation and/or other materials provided with the
|
||||
* distribution.
|
||||
* - Neither the name of the author nor the names of its
|
||||
* contributors may be used to endorse or promote products derived
|
||||
* from this software without specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
#ifndef _CLUSTER_COMMON_H
|
||||
#define _CLUSTER_COMMON_H
|
||||
|
||||
#define CPY_MAX(_x, _y) ((_x > _y) ? (_x) : (_y))
|
||||
#define CPY_MIN(_x, _y) ((_x < _y) ? (_x) : (_y))
|
||||
|
||||
#define NCHOOSE2(_n) ((_n)*(_n-1)/2)
|
||||
|
||||
#define CPY_BITS_PER_CHAR (sizeof(unsigned char) * 8)
|
||||
#define CPY_FLAG_ARRAY_SIZE_BYTES(num_bits) (CPY_CEIL_DIV((num_bits), \
|
||||
CPY_BITS_PER_CHAR))
|
||||
#define CPY_GET_BIT(_xx, i) (((_xx)[(i) / CPY_BITS_PER_CHAR] >> \
|
||||
((CPY_BITS_PER_CHAR-1) - \
|
||||
((i) % CPY_BITS_PER_CHAR))) & 0x1)
|
||||
#define CPY_SET_BIT(_xx, i) ((_xx)[(i) / CPY_BITS_PER_CHAR] |= \
|
||||
((0x1) << ((CPY_BITS_PER_CHAR-1) \
|
||||
-((i) % CPY_BITS_PER_CHAR))))
|
||||
#define CPY_CLEAR_BIT(_xx, i) ((_xx)[(i) / CPY_BITS_PER_CHAR] &= \
|
||||
~((0x1) << ((CPY_BITS_PER_CHAR-1) \
|
||||
-((i) % CPY_BITS_PER_CHAR))))
|
||||
|
||||
#ifndef CPY_CEIL_DIV
|
||||
#define CPY_CEIL_DIV(x, y) ((((double)x)/(double)y) == \
|
||||
((double)((x)/(y))) ? ((x)/(y)) : ((x)/(y) + 1))
|
||||
#endif
|
||||
|
||||
#ifdef CPY_DEBUG
|
||||
#define CPY_DEBUG_MSG(...) fprintf(stderr, __VA_ARGS__)
|
||||
#else
|
||||
#define CPY_DEBUG_MSG(...)
|
||||
#endif
|
||||
|
||||
#endif
|
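The flag-array macros above pack boolean flags into an `unsigned char` buffer, most-significant bit first within each byte. A minimal Python sketch of the same indexing scheme (the `flag_array_size_bytes`/`set_bit`/`get_bit` names are illustrative, not part of the SciPy source):

```python
BITS_PER_CHAR = 8  # mirrors CPY_BITS_PER_CHAR on 8-bit chars

def flag_array_size_bytes(num_bits):
    # Ceiling division, as in CPY_FLAG_ARRAY_SIZE_BYTES: one byte per 8 flags.
    return (num_bits + BITS_PER_CHAR - 1) // BITS_PER_CHAR

def set_bit(buf, i):
    # MSB-first within each byte, matching CPY_SET_BIT.
    buf[i // BITS_PER_CHAR] |= 1 << (BITS_PER_CHAR - 1 - i % BITS_PER_CHAR)

def clear_bit(buf, i):
    buf[i // BITS_PER_CHAR] &= ~(1 << (BITS_PER_CHAR - 1 - i % BITS_PER_CHAR))

def get_bit(buf, i):
    return (buf[i // BITS_PER_CHAR] >> (BITS_PER_CHAR - 1 - i % BITS_PER_CHAR)) & 1

buf = bytearray(flag_array_size_bytes(10))  # 10 flags fit in 2 bytes
set_bit(buf, 0)
set_bit(buf, 9)
```

Storing flags MSB-first keeps `CPY_GET_BIT` a single shift-and-mask regardless of buffer length; bit 0 of the array lands in the high bit of byte 0.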
File diff suppressed because it is too large
@@ -1,130 +0,0 @@
/**
 * hierarchy.h
 *
 * Author: Damian Eads
 * Date: September 22, 2007
 * Adapted for incorporation into Scipy, April 9, 2008.
 *
 * Copyright (c) 2007, 2008, Damian Eads. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *   - Redistributions of source code must retain the above
 *     copyright notice, this list of conditions and the
 *     following disclaimer.
 *   - Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer
 *     in the documentation and/or other materials provided with the
 *     distribution.
 *   - Neither the name of the author nor the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#ifndef _CPY_HIERARCHY_H
#define _CPY_HIERARCHY_H

#define CPY_LINKAGE_SINGLE 0
#define CPY_LINKAGE_COMPLETE 1
#define CPY_LINKAGE_AVERAGE 2
#define CPY_LINKAGE_CENTROID 3
#define CPY_LINKAGE_MEDIAN 4
#define CPY_LINKAGE_WARD 5
#define CPY_LINKAGE_WEIGHTED 6

#define CPY_CRIT_INCONSISTENT 0
#define CPY_CRIT_DISTANCE 1
#define CPY_CRIT_MAXCLUST 2

typedef struct cnode {
  int n;
  int id;
  double d;
  struct cnode *left;
  struct cnode *right;
} cnode;

typedef struct clnode {
  struct clnode *next;
  struct cnode *val;
} clnode;

typedef struct clist {
  struct clnode *head;
  struct clnode *tail;
} clist;

typedef struct cinfo {
  struct cnode *nodes;
  struct clist *lists;
  int *ind;
  double *dmt;
  double *dm;
  double *buf;
  double **rows;
  double **centroids;
  double *centroidBuffer;
  const double *X;
  int *rowsize;
  int m;
  int n;
  int nid;
} cinfo;

typedef void (distfunc) (cinfo *info, int mini, int minj, int np, int n);

void inconsistency_calculation(const double *Z, double *R, int n, int d);
void inconsistency_calculation_alt(const double *Z, double *R, int n, int d);

void chopmins(int *ind, int mini, int minj, int np);
void chopmins_ns_i(double *ind, int mini, int np);
void chopmins_ns_ij(double *ind, int mini, int minj, int np);

void dist_single(cinfo *info, int mini, int minj, int np, int n);
void dist_average(cinfo *info, int mini, int minj, int np, int n);
void dist_complete(cinfo *info, int mini, int minj, int np, int n);
void dist_centroid(cinfo *info, int mini, int minj, int np, int n);
void dist_ward(cinfo *info, int mini, int minj, int np, int n);
void dist_weighted(cinfo *info, int mini, int minj, int np, int n);

int leaders(const double *Z, const int *T, int *L, int *M, int kk, int n);

void linkage(double *dm, double *Z, double *X, int m, int n, int ml, int kc, distfunc dfunc, int method);
void linkage_alt(double *dm, double *Z, double *X, int m, int n, int ml, int kc, distfunc dfunc, int method);

void cophenetic_distances(const double *Z, double *d, int n);
void cpy_to_tree(const double *Z, cnode **tnodes, int n);
void calculate_cluster_sizes(const double *Z, double *cs, int n);

void form_member_list(const double *Z, int *members, int n);
void form_flat_clusters_from_in(const double *Z, const double *R, int *T,
                                double cutoff, int n);
void form_flat_clusters_from_dist(const double *Z, int *T,
                                  double cutoff, int n);
void form_flat_clusters_from_monotonic_criterion(const double *Z,
                                                 const double *mono_crit,
                                                 int *T, double cutoff, int n);

void form_flat_clusters_maxclust_dist(const double *Z, int *T, int n, int mc);

void form_flat_clusters_maxclust_monocrit(const double *Z,
                                          const double *mono_crit,
                                          int *T, int n, int mc);

void get_max_dist_for_each_cluster(const double *Z, double *max_dists, int n);
void get_max_Rfield_for_each_cluster(const double *Z, const double *R,
                                     double *max_rfs, int n, int rf);
#endif
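The `cnode` tree built by `cpy_to_tree` follows the standard SciPy linkage-matrix convention: leaves are cluster ids `0..n-1`, and row `i` of `Z`, holding `[left, right, dist, size]`, creates cluster id `n+i`. A hedged pure-Python sketch of that reconstruction (the `to_tree` name and dict layout are illustrative stand-ins for the C struct):

```python
def to_tree(Z, n):
    # Leaves: ids 0..n-1, distance 0, size 1, no children (like cnode).
    nodes = {i: {"id": i, "d": 0.0, "n": 1, "left": None, "right": None}
             for i in range(n)}
    # Row i of Z merges clusters left/right into new cluster id n+i.
    for i, (left, right, d, size) in enumerate(Z):
        nodes[n + i] = {"id": n + i, "d": d, "n": int(size),
                        "left": nodes[int(left)], "right": nodes[int(right)]}
    return nodes[n + len(Z) - 1]  # the root is created by the last merge

# 3 observations, 2 merges: first join leaves 0 and 1, then join leaf 2
# with the cluster formed by that first merge (id 3).
Z = [(0, 1, 1.0, 2), (2, 3, 2.0, 3)]
root = to_tree(Z, 3)
```

The same convention is what `cophenetic_distances` and `calculate_cluster_sizes` above index into: a full tree over `n` leaves always has exactly `n-1` internal merge rows.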
@@ -1,379 +0,0 @@
/**
 * hierarchy_wrap.c
 *
 * Author: Damian Eads
 * Date: September 22, 2007
 * Adapted for incorporation into Scipy, April 9, 2008.
 *
 * Copyright (c) 2007, Damian Eads. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *   - Redistributions of source code must retain the above
 *     copyright notice, this list of conditions and the
 *     following disclaimer.
 *   - Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer
 *     in the documentation and/or other materials provided with the
 *     distribution.
 *   - Neither the name of the author nor the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "hierarchy.h"
#include "Python.h"
#include <numpy/arrayobject.h>
#include <stdio.h>

extern PyObject *linkage_wrap(PyObject *self, PyObject *args) {
  int method, n;
  PyArrayObject *dm, *Z;
  distfunc *df;
  if (!PyArg_ParseTuple(args, "O!O!ii",
                        &PyArray_Type, &dm,
                        &PyArray_Type, &Z,
                        &n,
                        &method)) {
    return 0;
  }
  else {
    switch (method) {
    case CPY_LINKAGE_SINGLE:
      df = dist_single;
      break;
    case CPY_LINKAGE_COMPLETE:
      df = dist_complete;
      break;
    case CPY_LINKAGE_AVERAGE:
      df = dist_average;
      break;
    case CPY_LINKAGE_WEIGHTED:
      df = dist_weighted;
      break;
    default:
      /** Report an error. */
      df = 0;
      break;
    }
    linkage((double*)dm->data, (double*)Z->data, 0, 0, n, 0, 0, df, method);
  }
  return Py_BuildValue("d", 0.0);
}

extern PyObject *linkage_euclid_wrap(PyObject *self, PyObject *args) {
  int method, m, n, ml;
  PyArrayObject *dm, *Z, *X;
  distfunc *df;
  if (!PyArg_ParseTuple(args, "O!O!O!iii",
                        &PyArray_Type, &dm,
                        &PyArray_Type, &Z,
                        &PyArray_Type, &X,
                        &m,
                        &n,
                        &method)) {
    return 0;
  }
  else {
    ml = 0;
    /** fprintf(stderr, "m: %d, n: %d\n", m, n); **/
    switch (method) {
    case CPY_LINKAGE_CENTROID:
      df = dist_centroid;
      break;
    case CPY_LINKAGE_MEDIAN:
      df = dist_centroid;
      break;
    case CPY_LINKAGE_WARD:
      df = dist_ward;
      /* ml = 1; */
      break;
    default:
      /** Report an error. */
      df = 0;
      break;
    }
    linkage((double*)dm->data, (double*)Z->data, (double*)X->data,
            m, n, 1, 1, df, method);
  }
  return Py_BuildValue("d", 0.0);
}

extern PyObject *calculate_cluster_sizes_wrap(PyObject *self, PyObject *args) {
  int n;
  PyArrayObject *Z, *cs_;
  if (!PyArg_ParseTuple(args, "O!O!i",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &cs_,
                        &n)) {
    return 0;
  }
  calculate_cluster_sizes((const double*)Z->data, (double*)cs_->data, n);
  return Py_BuildValue("");
}

extern PyObject *get_max_dist_for_each_cluster_wrap(PyObject *self,
                                                    PyObject *args) {
  int n;
  PyArrayObject *Z, *md;
  if (!PyArg_ParseTuple(args, "O!O!i",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &md,
                        &n)) {
    return 0;
  }
  get_max_dist_for_each_cluster((const double*)Z->data, (double*)md->data, n);
  return Py_BuildValue("");
}

extern PyObject *get_max_Rfield_for_each_cluster_wrap(PyObject *self,
                                                      PyObject *args) {
  int n, rf;
  PyArrayObject *Z, *R, *max_rfs;
  if (!PyArg_ParseTuple(args, "O!O!O!ii",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &R,
                        &PyArray_Type, &max_rfs,
                        &n, &rf)) {
    return 0;
  }
  get_max_Rfield_for_each_cluster((const double *)Z->data,
                                  (const double *)R->data,
                                  (double *)max_rfs->data, n, rf);
  return Py_BuildValue("");
}

extern PyObject *prelist_wrap(PyObject *self, PyObject *args) {
  int n;
  PyArrayObject *Z, *ML;
  if (!PyArg_ParseTuple(args, "O!O!i",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &ML,
                        &n)) {
    return 0;
  }
  form_member_list((const double *)Z->data, (int *)ML->data, n);
  return Py_BuildValue("d", 0.0);
}

extern PyObject *cluster_in_wrap(PyObject *self, PyObject *args) {
  int n;
  double cutoff;
  PyArrayObject *Z, *R, *T;
  if (!PyArg_ParseTuple(args, "O!O!O!di",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &R,
                        &PyArray_Type, &T,
                        &cutoff,
                        &n)) {
    return 0;
  }
  form_flat_clusters_from_in((const double *)Z->data, (const double *)R->data,
                             (int *)T->data, cutoff, n);

  return Py_BuildValue("d", 0.0);
}

extern PyObject *cluster_dist_wrap(PyObject *self, PyObject *args) {
  int n;
  double cutoff;
  PyArrayObject *Z, *T;
  if (!PyArg_ParseTuple(args, "O!O!di",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &T,
                        &cutoff,
                        &n)) {
    return 0;
  }
  form_flat_clusters_from_dist((const double *)Z->data,
                               (int *)T->data, cutoff, n);

  return Py_BuildValue("d", 0.0);
}

extern PyObject *cluster_monocrit_wrap(PyObject *self, PyObject *args) {
  int n;
  double cutoff;
  PyArrayObject *Z, *MV, *T;
  if (!PyArg_ParseTuple(args, "O!O!O!di",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &MV,
                        &PyArray_Type, &T,
                        &cutoff,
                        &n)) {
    return 0;
  }
  form_flat_clusters_from_monotonic_criterion((const double *)Z->data,
                                              (const double *)MV->data,
                                              (int *)T->data,
                                              cutoff,
                                              n);

  return Py_BuildValue("d", 0.0);
}

extern PyObject *cluster_maxclust_dist_wrap(PyObject *self, PyObject *args) {
  int n, mc;
  PyArrayObject *Z, *T;
  if (!PyArg_ParseTuple(args, "O!O!ii",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &T,
                        &n, &mc)) {
    return 0;
  }
  form_flat_clusters_maxclust_dist((const double*)Z->data, (int *)T->data,
                                   n, mc);

  return Py_BuildValue("");
}

extern PyObject *cluster_maxclust_monocrit_wrap(PyObject *self, PyObject *args) {
  int n, mc;
  PyArrayObject *Z, *MC, *T;
  if (!PyArg_ParseTuple(args, "O!O!O!ii",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &MC,
                        &PyArray_Type, &T,
                        &n, &mc)) {
    return 0;
  }
  form_flat_clusters_maxclust_monocrit((const double *)Z->data,
                                       (const double *)MC->data,
                                       (int *)T->data, n, mc);

  return Py_BuildValue("");
}

extern PyObject *inconsistent_wrap(PyObject *self, PyObject *args) {
  int n, d;
  PyArrayObject *Z, *R;
  if (!PyArg_ParseTuple(args, "O!O!ii",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &R,
                        &n, &d)) {
    return 0;
  }
  inconsistency_calculation_alt((const double*)Z->data, (double*)R->data, n, d);
  return Py_BuildValue("d", 0.0);
}

extern PyObject *cophenetic_distances_wrap(PyObject *self, PyObject *args) {
  int n;
  PyArrayObject *Z, *d;
  if (!PyArg_ParseTuple(args, "O!O!i",
                        &PyArray_Type, &Z,
                        &PyArray_Type, &d,
                        &n)) {
    return 0;
  }
  cophenetic_distances((const double*)Z->data, (double*)d->data, n);
  return Py_BuildValue("d", 0.0);
}

extern PyObject *chopmin_ns_ij_wrap(PyObject *self, PyObject *args) {
  int mini, minj, n;
  PyArrayObject *row;
  if (!PyArg_ParseTuple(args, "O!iii",
                        &PyArray_Type, &row,
                        &mini,
                        &minj,
                        &n)) {
    return 0;
  }
  chopmins_ns_ij((double*)row->data, mini, minj, n);
  return Py_BuildValue("d", 0.0);
}

extern PyObject *chopmin_ns_i_wrap(PyObject *self, PyObject *args) {
  int mini, n;
  PyArrayObject *row;
  if (!PyArg_ParseTuple(args, "O!ii",
                        &PyArray_Type, &row,
                        &mini,
                        &n)) {
    return 0;
  }
  chopmins_ns_i((double*)row->data, mini, n);
  return Py_BuildValue("d", 0.0);
}

extern PyObject *chopmins_wrap(PyObject *self, PyObject *args) {
  int mini, minj, n;
  PyArrayObject *row;
  if (!PyArg_ParseTuple(args, "O!iii",
                        &PyArray_Type, &row,
                        &mini,
                        &minj,
                        &n)) {
    return 0;
  }
  chopmins((int*)row->data, mini, minj, n);
  return Py_BuildValue("d", 0.0);
}

extern PyObject *leaders_wrap(PyObject *self, PyObject *args) {
  PyArrayObject *Z_, *T_, *L_, *M_;
  int kk, n, res;
  if (!PyArg_ParseTuple(args, "O!O!O!O!ii",
                        &PyArray_Type, &Z_,
                        &PyArray_Type, &T_,
                        &PyArray_Type, &L_,
                        &PyArray_Type, &M_,
                        &kk, &n)) {
    return 0;
  }
  else {
    res = leaders((double*)Z_->data, (int*)T_->data,
                  (int*)L_->data, (int*)M_->data, kk, n);
  }
  return Py_BuildValue("i", res);
}

static PyMethodDef _hierarchyWrapMethods[] = {
  {"calculate_cluster_sizes_wrap", calculate_cluster_sizes_wrap, METH_VARARGS},
  {"chopmins", chopmins_wrap, METH_VARARGS},
  {"chopmins_ns_i", chopmin_ns_i_wrap, METH_VARARGS},
  {"chopmins_ns_ij", chopmin_ns_ij_wrap, METH_VARARGS},
  {"cluster_in_wrap", cluster_in_wrap, METH_VARARGS},
  {"cluster_dist_wrap", cluster_dist_wrap, METH_VARARGS},
  {"cluster_maxclust_dist_wrap", cluster_maxclust_dist_wrap, METH_VARARGS},
  {"cluster_maxclust_monocrit_wrap", cluster_maxclust_monocrit_wrap, METH_VARARGS},
  {"cluster_monocrit_wrap", cluster_monocrit_wrap, METH_VARARGS},
  {"cophenetic_distances_wrap", cophenetic_distances_wrap, METH_VARARGS},
  {"get_max_dist_for_each_cluster_wrap",
   get_max_dist_for_each_cluster_wrap, METH_VARARGS},
  {"get_max_Rfield_for_each_cluster_wrap",
   get_max_Rfield_for_each_cluster_wrap, METH_VARARGS},
  {"inconsistent_wrap", inconsistent_wrap, METH_VARARGS},
  {"leaders_wrap", leaders_wrap, METH_VARARGS},
  {"linkage_euclid_wrap", linkage_euclid_wrap, METH_VARARGS},
  {"linkage_wrap", linkage_wrap, METH_VARARGS},
  {"prelist_wrap", prelist_wrap, METH_VARARGS},
  {NULL, NULL} /* Sentinel - marks the end of this structure */
};

PyMODINIT_FUNC init_hierarchy_wrap(void) {
  (void) Py_InitModule("_hierarchy_wrap", _hierarchyWrapMethods);
  import_array(); /* Must be present for NumPy. Called first after above line. */
}
@@ -1,155 +0,0 @@
/*
 * This file implements vq for float and double in C. It is a direct
 * translation from the swig interface, which could not be generated anymore
 * with recent swig.
 */

/*
 * Including Python.h is necessary because the Python headers redefine some
 * macros from the standard C headers.
 */
#include <Python.h>

#include <stddef.h>
#include <math.h>

#include "vq.h"

/*
 * The result is written into code, which initially holds the starting code
 * assignment.
 *
 * mdist and code should have at least n elements.
 */
static const double rbig = 1e100;

#if 0
static int float_vq_1d(const float *in, int n,
                       const float *init, int ncode,
                       npy_intp *code, float *mdist)
{
    int i, j;
    float m, d;

    for (i = 0; i < n; ++i) {
        m = (float)rbig;
        /* Compute the minimal distance for observation i */
        for (j = 0; j < ncode; ++j) {
            d = (in[i] - init[j]);
            d *= d;
            if (d < m) {
                m = d;
                code[i] = j;  /* record the index of the current minimum */
            }
        }
        mdist[i] = m;
    }
    return 0;
}
#endif

static int float_vq_obs(const float *obs,
                        float *code_book, int Ncodes, int Nfeatures,
                        npy_intp* code, float *lowest_dist)
{
    int i, j, k = 0;
    float dist, diff;

    *lowest_dist = (float) rbig;
    for (i = 0; i < Ncodes; i++) {
        dist = 0;
        for (j = 0; j < Nfeatures; j++) {
            diff = code_book[k] - obs[j];
            dist += diff*diff;
            k++;
        }
        dist = (float)sqrt(dist);
        if (dist < *lowest_dist) {
            *code = i;
            *lowest_dist = dist;
        }
    }

    return 0;
}

int float_tvq(
    float* obs,
    float* code_book,
    int Nobs, int Ncodes, int Nfeatures,
    npy_intp* codes, float* lowest_dist)
{
    int i;
    for (i = 0; i < Nobs; i++) {
        float_vq_obs(
            &(obs[i*Nfeatures]),
            code_book, Ncodes, Nfeatures,
            &(codes[i]), &(lowest_dist[i]));
    }
    return 0;
}

#if 0
static int double_vq_1d(const double *in, int n,
                        const double *init, int ncode,
                        npy_intp *code, double *mdist)
{
    int i, j;
    double m, d;

    for (i = 0; i < n; ++i) {
        m = (double)rbig;
        /* Compute the minimal distance for observation i */
        for (j = 0; j < ncode; ++j) {
            d = (in[i] - init[j]);
            d *= d;
            if (d < m) {
                m = d;
                code[i] = j;  /* record the index of the current minimum */
            }
        }
        mdist[i] = m;
    }
    return 0;
}
#endif

static int double_vq_obs(const double *obs,
                         double *code_book, int Ncodes, int Nfeatures,
                         npy_intp* code, double *lowest_dist)
{
    int i, j, k = 0;
    double dist, diff;

    *lowest_dist = (double) rbig;
    for (i = 0; i < Ncodes; i++) {
        dist = 0;
        for (j = 0; j < Nfeatures; j++) {
            diff = code_book[k] - obs[j];
            dist += diff*diff;
            k++;
        }
        dist = (double)sqrt(dist);
        if (dist < *lowest_dist) {
            *code = i;
            *lowest_dist = dist;
        }
    }

    return 0;
}

int double_tvq(
    double* obs,
    double* code_book,
    int Nobs, int Ncodes, int Nfeatures,
    npy_intp* codes, double* lowest_dist)
{
    int i;
    for (i = 0; i < Nobs; i++) {
        double_vq_obs(
            &(obs[i*Nfeatures]),
            code_book, Ncodes, Nfeatures,
            &(codes[i]), &(lowest_dist[i]));
    }
    return 0;
}
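`float_vq_obs`/`double_vq_obs` assign one observation to its nearest codebook row by Euclidean distance, and the `*_tvq` entry points loop that over all observations. A pure-Python sketch of the same assignment (the `vq` function below is an illustrative reimplementation, not the compiled `_vq.vq`):

```python
import math

def vq(obs, code_book):
    # For each observation row, find the index of the nearest codebook row
    # under Euclidean distance, mirroring float_tvq/double_tvq.
    codes, dists = [], []
    for row in obs:
        best, best_d = 0, float("inf")
        for i, cb in enumerate(code_book):
            d = math.sqrt(sum((c - o) ** 2 for c, o in zip(cb, row)))
            if d < best_d:
                best, best_d = i, d
        codes.append(best)
        dists.append(best_d)
    return codes, dists

codes, dists = vq([[0.0, 0.0], [1.0, 1.0]],   # observations
                  [[0.0, 0.1], [1.0, 1.0]])   # codebook
```

Note the C version seeds the search with `rbig` (1e100) rather than infinity; any codebook distance is smaller, so the first comparison always records an index.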
@@ -1,14 +0,0 @@
#ifndef _VQ_H_
#define _VQ_H_

#include <Python.h>

#include <numpy/arrayobject.h>

int double_tvq(double* obs, double* code_book, int Nobs, int Ncodes,
               int Nfeatures, npy_intp* codes, double* lowest_dist);

int float_tvq(float* obs, float* code_book, int Nobs, int Ncodes,
              int Nfeatures, npy_intp* codes, float* lowest_dist);

#endif
@@ -1,154 +0,0 @@
/*
 * Last Change: Wed Jun 20 04:00 PM 2007 J
 *
 */
#include <Python.h>

#include <numpy/arrayobject.h>

#include "vq.h"

PyObject* compute_vq(PyObject*, PyObject*);

static PyMethodDef vqmethods [] = {
    {"vq", compute_vq, METH_VARARGS, "TODO docstring"},
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC init_vq(void)
{
    Py_InitModule("_vq", vqmethods);
    import_array();
}

PyObject* compute_vq(PyObject* self, PyObject* args)
{
    PyObject *obs, *code, *out;
    PyArrayObject *obs_a, *code_a;
    PyArrayObject *index_a, *dist_a;
    int typenum1, typenum2;
    npy_intp nc, nd;
    npy_intp n, d;

    if ( !PyArg_ParseTuple(args, "OO", &obs, &code) ) {
        return NULL;
    }

    /* Check that obs and code are both arrays of the same type, with
     * conformant dimensions, etc... */
    if (!(PyArray_Check(obs) && PyArray_Check(code))) {
        PyErr_Format(PyExc_ValueError,
                     "observation and code should be numpy arrays");
        return NULL;
    }

    typenum1 = PyArray_TYPE(obs);
    typenum2 = PyArray_TYPE(code);
    if (typenum1 != typenum2) {
        PyErr_Format(PyExc_ValueError,
                     "observation and code should have same type");
        return NULL;
    }
    obs_a = (PyArrayObject*)PyArray_FROM_OF(obs,
                NPY_CONTIGUOUS | NPY_NOTSWAPPED | NPY_ALIGNED);
    if (obs_a == NULL) {
        return NULL;
    }

    code_a = (PyArrayObject*)PyArray_FROM_OF(code,
                NPY_CONTIGUOUS | NPY_NOTSWAPPED | NPY_ALIGNED);
    if (code_a == NULL) {
        goto clean_obs_a;
    }

    if( !(obs_a->nd == code_a->nd)) {
        PyErr_Format(PyExc_ValueError,
                     "observation and code should have same shape");
        goto clean_code_a;
    }

    switch (obs_a->nd) {
        case 1:
            nd = 1;
            d = 1;
            n = PyArray_DIM(obs, 0);
            nc = PyArray_DIM(code, 0);
            break;
        case 2:
            nd = 2;
            n = PyArray_DIM(obs, 0);
            d = PyArray_DIM(obs, 1);
            nc = PyArray_DIM(code, 0);
            if (! (d == PyArray_DIM(code, 1)) ) {
                PyErr_Format(PyExc_ValueError,
                             "obs and code should have same number of "
                             "features (columns)");
                goto clean_code_a;
            }
            break;
        default:
            PyErr_Format(PyExc_ValueError,
                         "ranks other than 1 or 2 are not supported");
            goto clean_code_a;
    }

    switch (PyArray_TYPE(obs)) {
        case NPY_FLOAT:
            dist_a = (PyArrayObject*)PyArray_EMPTY(1, &n, typenum1, 0);
            if (dist_a == NULL) {
                goto clean_code_a;
            }
            index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, PyArray_INTP, 0);
            if (index_a == NULL) {
                goto clean_dist_a;
            }
            float_tvq((float*)obs_a->data, (float*)code_a->data, n, nc, d,
                      (npy_intp*)index_a->data, (float*)dist_a->data);
            break;
        case NPY_DOUBLE:
            dist_a = (PyArrayObject*)PyArray_EMPTY(1, &n, typenum1, 0);
            if (dist_a == NULL) {
                goto clean_code_a;
            }
            index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, PyArray_INTP, 0);
            if (index_a == NULL) {
                goto clean_dist_a;
            }
            double_tvq((double*)obs_a->data, (double*)code_a->data, n, nc, d,
                       (npy_intp*)index_a->data, (double*)dist_a->data);
            break;
        default:
            PyErr_Format(PyExc_ValueError,
                         "type other than float or double not supported");
            goto clean_code_a;
    }

    /* Create output */
    out = PyTuple_New(2);
    if (out == NULL) {
        goto clean_index_a;
    }
    if (PyTuple_SetItem(out, 0, (PyObject*)index_a)) {
        goto clean_out;
    }
    if (PyTuple_SetItem(out, 1, (PyObject*)dist_a)) {
        goto clean_out;
    }

    /* Clean everything */
    Py_DECREF(code_a);
    Py_DECREF(obs_a);
    return out;

clean_out:
    Py_DECREF(out);
clean_dist_a:
    Py_DECREF(dist_a);
clean_index_a:
    Py_DECREF(index_a);
clean_code_a:
    Py_DECREF(code_a);
clean_obs_a:
    Py_DECREF(obs_a);
    return NULL;
}
@@ -1,30 +0,0 @@
5.2656366e-01 3.1416019e-01 8.0065637e-02
7.5020518e-01 4.6029983e-01 8.9869646e-01
6.6546123e-01 6.9401142e-01 9.1046570e-01
9.6404759e-01 1.4308220e-03 7.3987422e-01
1.0815906e-01 5.5302879e-01 6.6380478e-02
9.3135913e-01 8.2542491e-01 9.5231544e-01
6.7808696e-01 3.4190397e-01 5.6148195e-01
9.8273094e-01 7.0460521e-01 8.7097863e-02
6.1469161e-01 4.6998923e-02 6.0240645e-01
5.8016126e-01 9.1735497e-01 5.8816385e-01
1.3824631e+00 1.9635816e+00 1.9443788e+00
2.1067586e+00 1.6714873e+00 1.3485448e+00
1.3988007e+00 1.6614205e+00 1.3222455e+00
1.7141046e+00 1.4917638e+00 1.4543217e+00
1.5410234e+00 1.8437495e+00 1.6465895e+00
2.0851248e+00 1.8452435e+00 2.1734085e+00
1.3074874e+00 1.5380165e+00 2.1600774e+00
1.4144770e+00 1.9932907e+00 1.9910742e+00
1.6194349e+00 1.4770328e+00 1.8978816e+00
1.5988060e+00 1.5498898e+00 1.5756335e+00
3.3724738e+00 2.6963531e+00 3.3998170e+00
3.1370512e+00 3.3652809e+00 3.0608907e+00
3.2941325e+00 3.1961950e+00 2.9070017e+00
2.6551051e+00 3.0678590e+00 2.9719854e+00
3.3094104e+00 2.5928397e+00 2.5771411e+00
2.5955722e+00 3.3347737e+00 3.0879319e+00
2.5820618e+00 3.4161567e+00 3.2644199e+00
2.7112700e+00 2.7703245e+00 2.6346650e+00
2.7961785e+00 3.2547372e+00 3.4180156e+00
2.6474175e+00 2.5453804e+00 3.2535411e+00
Some files were not shown because too many files have changed in this diff