Merge branch 'master_14.4.1' of github.com:Unidata/awips2 into master_14.4.1

Former-commit-id: 286148b8f308c9790c186cdf340978475376cf4a [formerly 03348518d5]
Former-commit-id: b47ebec35d
Author: mjames-upc
Date: 2015-08-04 11:44:51 -06:00
Commit: 2d6a8f6386

4256 changed files with 871656 additions and 68205 deletions

Binary file not shown.

pythonPackages/metpy (new submodule)

@@ -0,0 +1 @@
Subproject commit db120ecf9d6094c3c0c3f2778d5cd4a4c776c773

Binary file not shown.

pythonPackages/pint (new submodule)

@@ -0,0 +1 @@
Subproject commit 2c67bbac774a3ceb593df258514a432c6f107397

LICENSE (new file)

@@ -0,0 +1,31 @@
dateutil - Extensions to the standard Python datetime module.
Copyright (c) 2003-2011 - Gustavo Niemeyer <gustavo@niemeyer.net>
Copyright (c) 2012-2014 - Tomi Pieviläinen <tomi.pievilainen@iki.fi>
Copyright (c) 2014 - Yaron de Leeuw <me@jarondl.net>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MANIFEST.in (new file)

@@ -0,0 +1 @@
include LICENSE NEWS zonefile_metadata.json updatezinfo.py

NEWS (new file)

@@ -0,0 +1,232 @@
Version 2.4.2
-------------
- Updated zoneinfo to 2015b.
- Fixed issue with parsing of tzstr on Python 2.7.x; tzstr will now be decoded
if not a unicode type. gh #51 (lp:1331576), gh pr #55.
- Fix a parser issue where AM and PM tokens were showing up in fuzzy date
stamps, triggering inappropriate errors. gh #56 (lp: 1428895), gh pr #63.
- Missing function "setcachesize" removed from zoneinfo __all__ list by @ryanss,
fixing an issue with wildcard imports of dateutil.zoneinfo. (gh pr #66).
- (PyPi only) Fix an issue with source distributions not including the test
suite.
Version 2.4.1
-------------
- Added explicit check for valid hours if AM/PM is specified in parser.
(gh pr #22, issue #21)
- Fix bug in rrule introduced in 2.4.0 where byweekday parameter was not
handled properly. (gh pr #35, issue #34)
- Fix error where parser allowed some invalid dates, overwriting existing hours
with the last 2-digit number in the string. (gh pr #32, issue #31)
- Fix and add test for Python 2.x compatibility with boolean checking of
relativedelta objects. Implemented by @nimasmi (gh pr #43) and Cédric Krier
(lp: 1035038)
- Replaced parse() calls with explicit datetime objects in unit tests unrelated
to parser. (gh pr #36)
- Changed private _byxxx from sets to sorted tuples and fixed one currently
unreachable bug in _construct_byset. (gh pr #54)
- Additional documentation for parser (gh pr #29, #33, #41) and rrule.
- Formatting fixes to documentation of rrule and README.rst.
- Updated zoneinfo to 2015a.
Version 2.4.0
-------------
- Fix an issue with relativedelta and freezegun (lp:1374022)
- Fix tzinfo in windows for timezones without dst (lp:1010050, gh #2)
- Ignore missing timezones in windows like in POSIX
- Fix minimal version requirement for six (gh #6)
- Many rrule changes and fixes by @pganssle (gh pull requests #13 #14 #17),
including defusing some infinite loops (gh #4)
Version 2.3
-----------
- Cleanup directory structure, moved test.py to dateutil/tests/test.py
- Changed many aspects of dealing with the zone info file. Instead of a cache,
all the zones are loaded to memory, but symbolic links are loaded only once,
so not much memory is used.
- The package is now zip-safe, and universal-wheelable, thanks to changes in
the handling of the zoneinfo file.
- Fixed tzwin silently not imported on windows python2
- New maintainer, together with new hosting: GitHub, Travis, Read-The-Docs
Version 2.2
-----------
- Updated zoneinfo to 2013h
- fuzzy_with_tokens parse addon from Christopher Corley
- Bug with LANG=C fixed by Mike Gilbert
Version 2.1
-----------
- New maintainer
- Dateutil now works on Python 2.6, 2.7 and 3.2 from same codebase (with six)
- #704047: Ismael Carnales' patch for a new time format
- Small bug fixes; thanks to the reporters!
Version 2.0
-----------
- Ported to Python 3, by Brian Jones. If you need dateutil for Python 2.X,
please continue using the 1.X series.
- There's no such thing as a "PSF License". This source code is now
made available under the Simplified BSD license. See LICENSE for
details.
Version 1.5
-----------
- As reported by Mathieu Bridon, rrules were matching the bysecond rules
incorrectly against byminute in some circumstances when the SECONDLY
frequency was in use, due to a copy & paste bug. The problem has been
unittested and corrected.
- Adam Ryan reported a problem in the relativedelta implementation which
affected the yearday parameter in the month of January specifically.
This has been unittested and fixed.
- Updated timezone information.
Version 1.4.1
-------------
- Updated timezone information.
Version 1.4
-----------
- Fixed another parser precision problem on conversion of decimal seconds
to microseconds, as reported by Erik Brown. Now these issues are gone
for real since it's not using floating point arithmetic anymore.
- Fixed case where tzrange.utcoffset and tzrange.dst() might fail due
to a date being used where a datetime was expected (reported and fixed
by Lennart Regebro).
- Prevent tzstr from introducing daylight timings in strings that didn't
specify them (reported by Lennart Regebro).
- Calls like gettz("GMT+3") and gettz("UTC-2") will now return the
expected values, instead of the TZ variable behavior.
- Fixed DST signal handling in zoneinfo files. Reported by
Nicholas F. Fabry and John-Mark Gurney.
Version 1.3
-----------
- Fixed precision problem on conversion of decimal seconds to
microseconds, as reported by Skip Montanaro.
- Fixed bug in constructor of parser, and converted parser classes to
new-style classes. Original report and patch by Michael Elsdörfer.
- Initialize tzid and comps in tz.py, to prevent the code from ever
raising a NameError (even with broken files). Johan Dahlin suggested
the fix after a pyflakes run.
- Version is now published in dateutil.__version__, as requested
by Darren Dale.
- All code is compatible with new-style division.
Version 1.2
-----------
- Now tzfile will round timezones to full-minutes if necessary,
since Python's datetime doesn't support sub-minute offsets.
Thanks to Ilpo Nyyssönen for reporting the issue.
- Removed bare string exceptions, as reported and fixed by
Wilfredo Sánchez Vega.
- Fix bug in leap count parsing (reported and fixed by Eugene Oden).
Version 1.1
-----------
- Fixed rrule byyearday handling. Abramo Bagnara pointed out that
RFC2445 allows negative numbers.
- Fixed --prefix handling in setup.py (by Sidnei da Silva).
- Now tz.gettz() returns a tzlocal instance when not given any
arguments and no other timezone information is found.
- Updating timezone information to version 2005q.
Version 1.0
-----------
- Fixed parsing of XXhXXm formatted time after day/month/year
has been parsed.
- Added patch by Jeffrey Harris optimizing rrule.__contains__.
Version 0.9
-----------
- Fixed pickling of timezone types, as reported by
Andreas Köhler.
- Implemented internal timezone information with binary
timezone files [1]. The dateutil.tz.gettz() function will now
try to use the system timezone files, and fall back to
the internal versions. It's also possible to ask for
the internal versions directly by using
dateutil.zoneinfo.gettz().
- New tzwin timezone type, allowing access to Windows
internal timezones (contributed by Jeffrey Harris).
- Fixed parsing of unicode date strings.
- Accept parserinfo instances as the parser constructor
parameter, besides parserinfo (sub)classes.
- Changed weekday to spell the not-set n value as None
instead of 0.
- Fixed other reported bugs.
[1] http://www.twinsun.com/tz/tz-link.htm
Version 0.5
-----------
- Removed FREQ_ prefix from rrule frequency constants
WARNING: this breaks compatibility with previous versions.
- Fixed rrule.between() for cases where "after" is achieved
before even starting, as reported by Andreas Köhler.
- Fixed two digit zero-year parsing (such as 31-Dec-00), as
reported by Jim Abramson, and included test case for this.
- Sort exdate and rdate before iterating over them, so that
it's not necessary to sort them before adding to the rruleset,
as reported by Nicholas Piper.

PKG-INFO (new file)

@@ -0,0 +1,26 @@
Metadata-Version: 1.1
Name: python-dateutil
Version: 2.4.2
Summary: Extensions to the standard Python datetime module
Home-page: https://dateutil.readthedocs.org
Author: Yaron de Leeuw
Author-email: me@jarondl.net
License: Simplified BSD
Description:
The dateutil module provides powerful extensions to the
datetime module available in the Python standard library.
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Topic :: Software Development :: Libraries
Requires: six

README.rst (new file)

@@ -0,0 +1,124 @@
dateutil - powerful extensions to datetime
==========================================
.. image:: https://img.shields.io/travis/dateutil/dateutil/master.svg?style=flat-square
:target: https://travis-ci.org/dateutil/dateutil
:alt: travis build status
.. image:: https://img.shields.io/appveyor/ci/dateutil/dateutil/master.svg?style=flat-square
:target: https://ci.appveyor.com/project/dateutil/dateutil
:alt: appveyor build status
.. image:: https://img.shields.io/pypi/dd/python-dateutil.svg?style=flat-square
:target: https://pypi.python.org/pypi/python-dateutil/
:alt: pypi downloads per day
.. image:: https://img.shields.io/pypi/v/python-dateutil.svg?style=flat-square
:target: https://pypi.python.org/pypi/python-dateutil/
:alt: pypi version
The `dateutil` module provides powerful extensions to
the standard `datetime` module available in Python.
Download
========
dateutil is available on PyPI
https://pypi.python.org/pypi/python-dateutil/
The documentation is hosted at:
https://dateutil.readthedocs.org/
Code
====
https://github.com/dateutil/dateutil/
Features
========
* Computing of relative deltas (next month, next year,
next monday, last week of month, etc);
* Computing of relative deltas between two given
date and/or datetime objects;
* Computing of dates based on very flexible recurrence rules,
using a superset of the `iCalendar <https://www.ietf.org/rfc/rfc2445.txt>`_
specification. Parsing of RFC strings is supported as well.
* Generic parsing of dates in almost any string format;
* Timezone (tzinfo) implementations for tzfile(5) format
files (/etc/localtime, /usr/share/zoneinfo, etc), TZ
environment string (in all known formats), iCalendar
format files, given ranges (with help from relative deltas),
local machine timezone, fixed offset timezone, UTC timezone,
and Windows registry-based time zones.
* Internal up-to-date world timezone information based on
Olson's database.
* Computing of Easter Sunday dates for any given year,
using Western, Orthodox or Julian algorithms;
* More than 400 test cases.
Quick example
=============
Here's a snapshot, just to give an idea about the power of the
package. For more examples, look at the documentation.
Suppose you want to know how much time is left, in
years/months/days/etc, before the next easter happening on a
year with a Friday 13th in August, and you want to get today's
date out of the "date" unix system command. Here is the code:
.. doctest:: readmeexample
>>> from dateutil.relativedelta import *
>>> from dateutil.easter import *
>>> from dateutil.rrule import *
>>> from dateutil.parser import *
>>> from datetime import *
>>> now = parse("Sat Oct 11 17:13:46 UTC 2003")
>>> today = now.date()
>>> year = rrule(YEARLY,dtstart=now,bymonth=8,bymonthday=13,byweekday=FR)[0].year
>>> rdelta = relativedelta(easter(year), today)
>>> print("Today is: %s" % today)
Today is: 2003-10-11
>>> print("Year with next Aug 13th on a Friday is: %s" % year)
Year with next Aug 13th on a Friday is: 2004
>>> print("How far is the Easter of that year: %s" % rdelta)
How far is the Easter of that year: relativedelta(months=+6)
>>> print("And the Easter of that year is: %s" % (today+rdelta))
And the Easter of that year is: 2004-04-11
Being exactly 6 months ahead was **really** a coincidence :)
Author
======
The dateutil module was written by Gustavo Niemeyer <gustavo@niemeyer.net>
in 2003.
It is maintained by:
* Gustavo Niemeyer <gustavo@niemeyer.net> 2003-2011
* Tomi Pieviläinen <tomi.pievilainen@iki.fi> 2012-2014
* Yaron de Leeuw <me@jarondl.net> 2014-
Building and releasing
======================
When you get the source, it does not contain the internal zoneinfo
database. To get (and update) the database, run the updatezinfo.py script. Make sure
that the zic command is in your path, and that you have network connectivity
to get the latest timezone information from IANA. If you have downloaded
the timezone data earlier, you can give the tarball as a parameter to
updatezinfo.py.
Testing
=======
dateutil has a comprehensive test suite, which can be run with
`python setup.py test [-q]` in the project root. Note that if you don't have the internal
zoneinfo database, some tests will fail. Apart from that, all tests should pass.
To easily test dateutil against all supported Python versions, you can use
`tox <https://tox.readthedocs.org/en/latest/>`_.
All GitHub pull requests are automatically tested using Travis.

dateutil/__init__.py (new file)

@@ -0,0 +1,2 @@
# -*- coding: utf-8 -*-
__version__ = "2.4.2"

dateutil/easter.py (new file)

@@ -0,0 +1,89 @@
# -*- coding: utf-8 -*-
"""
This module offers a generic easter computing method for any given year, using
Western, Orthodox or Julian algorithms.
"""
import datetime
__all__ = ["easter", "EASTER_JULIAN", "EASTER_ORTHODOX", "EASTER_WESTERN"]
EASTER_JULIAN = 1
EASTER_ORTHODOX = 2
EASTER_WESTERN = 3
def easter(year, method=EASTER_WESTERN):
"""
This method was ported from the work done by GM Arts,
on top of the algorithm by Claus Tondering, which was
based in part on the algorithm of Oudin (1940), as
quoted in "Explanatory Supplement to the Astronomical
Almanac", P. Kenneth Seidelmann, editor.
This algorithm implements three different easter
calculation methods:
1 - Original calculation in the Julian calendar, valid for
dates after 326 AD
2 - Original method, with date converted to Gregorian
calendar, valid in years 1583 to 4099
3 - Revised method, in Gregorian calendar, valid in
years 1583 to 4099 as well
These methods are represented by the constants:
EASTER_JULIAN = 1
EASTER_ORTHODOX = 2
EASTER_WESTERN = 3
The default method is method 3.
More about the algorithm may be found at:
http://users.chariot.net.au/~gmarts/eastalg.htm
and
http://www.tondering.dk/claus/calendar.html
"""
if not (1 <= method <= 3):
raise ValueError("invalid method")
# g - Golden number - 1
# c - Century
# h - (23 - Epact) mod 30
# i - Number of days from March 21 to Paschal Full Moon
# j - Weekday for PFM (0=Sunday, etc)
# p - Number of days from March 21 to Sunday on or before PFM
# (-6 to 28 for methods 1 & 3, -6 to 56 for method 2)
# e - Extra days to add for method 2 (converting Julian
# date to Gregorian date)
y = year
g = y % 19
e = 0
if method < 3:
# Old method
i = (19*g + 15) % 30
j = (y + y//4 + i) % 7
if method == 2:
# Extra dates to convert Julian to Gregorian date
e = 10
if y > 1600:
e = e + y//100 - 16 - (y//100 - 16)//4
else:
# New method
c = y//100
h = (c - c//4 - (8*c + 13)//25 + 19*g + 15) % 30
i = h - (h//28)*(1 - (h//28)*(29//(h + 1))*((21 - g)//11))
j = (y + y//4 + i + 2 - c + c//4) % 7
# p can be from -6 to 56 corresponding to dates 22 March to 23 May
# (later dates apply to method 2, although 23 May never actually occurs)
p = i - j + e
d = 1 + (p + 27 + (p + 6)//40) % 31
m = 3 + (p + 26)//30
return datetime.date(int(y), int(m), int(d))
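
A short usage sketch of the three methods (the 2015 values below follow from the algorithm above; the year is chosen arbitrarily):

>>> from dateutil.easter import easter, EASTER_ORTHODOX, EASTER_JULIAN
>>> easter(2015)                   # Western (Gregorian) rule, the default
datetime.date(2015, 4, 5)
>>> easter(2015, EASTER_ORTHODOX)  # Julian rule, date converted to Gregorian
datetime.date(2015, 4, 12)
>>> easter(2015, EASTER_JULIAN)    # date expressed in the Julian calendar
datetime.date(2015, 3, 30)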

File diff suppressed because it is too large.

dateutil/relativedelta.py (new file)

@@ -0,0 +1,450 @@
# -*- coding: utf-8 -*-
import datetime
import calendar
from six import integer_types
__all__ = ["relativedelta", "MO", "TU", "WE", "TH", "FR", "SA", "SU"]
class weekday(object):
__slots__ = ["weekday", "n"]
def __init__(self, weekday, n=None):
self.weekday = weekday
self.n = n
def __call__(self, n):
if n == self.n:
return self
else:
return self.__class__(self.weekday, n)
def __eq__(self, other):
try:
if self.weekday != other.weekday or self.n != other.n:
return False
except AttributeError:
return False
return True
def __repr__(self):
s = ("MO", "TU", "WE", "TH", "FR", "SA", "SU")[self.weekday]
if not self.n:
return s
else:
return "%s(%+d)" % (s, self.n)
MO, TU, WE, TH, FR, SA, SU = weekdays = tuple([weekday(x) for x in range(7)])
class relativedelta(object):
"""
The relativedelta type is based on the specification of the excellent
work done by M.-A. Lemburg in his
`mx.DateTime <http://www.egenix.com/files/python/mxDateTime.html>`_ extension.
However, notice that this type does *NOT* implement the same algorithm as
his work. Do *NOT* expect it to behave like mx.DateTime's counterpart.
There are two different ways to build a relativedelta instance. The
first one is passing it two date/datetime classes::
relativedelta(datetime1, datetime2)
The second one is passing it any number of the following keyword arguments::
relativedelta(arg1=x,arg2=y,arg3=z...)
year, month, day, hour, minute, second, microsecond:
Absolute information (argument is singular); adding or subtracting a
relativedelta with absolute information does not perform an arithmetic
operation, but rather REPLACES the corresponding value in the
original datetime with the value(s) in relativedelta.
years, months, weeks, days, hours, minutes, seconds, microseconds:
Relative information, may be negative (argument is plural); adding
or subtracting a relativedelta with relative information performs
the corresponding arithmetic operation on the original datetime value
with the information in the relativedelta.
weekday:
One of the weekday instances (MO, TU, etc). These instances may
receive a parameter N, specifying the Nth weekday, which could
be positive or negative (like MO(+1) or MO(-2)). Not specifying
it is the same as specifying +1. You can also use an integer,
where 0=MO.
leapdays:
Will add given days to the date found, if the year is a leap
year and the date found is after 28 February.
yearday, nlyearday:
Set the yearday or the non-leap year day (jump leap days).
These are converted to day/month/leapdays information.
Here is the behavior of operations with relativedelta:
1. Calculate the absolute year, using the 'year' argument, or the
original datetime year, if the argument is not present.
2. Add the relative 'years' argument to the absolute year.
3. Do steps 1 and 2 for month/months.
4. Calculate the absolute day, using the 'day' argument, or the
original datetime day, if the argument is not present. Then,
subtract from the day until it fits in the year and month
found after their operations.
5. Add the relative 'days' argument to the absolute day. Notice
that the 'weeks' argument is multiplied by 7 and added to
'days'.
6. Do steps 1 and 2 for hour/hours, minute/minutes, second/seconds,
microsecond/microseconds.
7. If the 'weekday' argument is present, calculate the weekday,
with the given (wday, nth) tuple. wday is the index of the
weekday (0-6, 0=Mon), and nth is the number of weeks to add
forward or backward, depending on its sign. Notice that if
the calculated date is already Monday, for example, using
(0, 1) or (0, -1) won't change the day.
"""
def __init__(self, dt1=None, dt2=None,
years=0, months=0, days=0, leapdays=0, weeks=0,
hours=0, minutes=0, seconds=0, microseconds=0,
year=None, month=None, day=None, weekday=None,
yearday=None, nlyearday=None,
hour=None, minute=None, second=None, microsecond=None):
if dt1 and dt2:
# datetime is a subclass of date. So both must be date
if not (isinstance(dt1, datetime.date) and
isinstance(dt2, datetime.date)):
raise TypeError("relativedelta only diffs datetime/date")
# We allow two dates, or two datetimes, so we coerce them to be
# of the same type
if (isinstance(dt1, datetime.datetime) !=
isinstance(dt2, datetime.datetime)):
if not isinstance(dt1, datetime.datetime):
dt1 = datetime.datetime.fromordinal(dt1.toordinal())
elif not isinstance(dt2, datetime.datetime):
dt2 = datetime.datetime.fromordinal(dt2.toordinal())
self.years = 0
self.months = 0
self.days = 0
self.leapdays = 0
self.hours = 0
self.minutes = 0
self.seconds = 0
self.microseconds = 0
self.year = None
self.month = None
self.day = None
self.weekday = None
self.hour = None
self.minute = None
self.second = None
self.microsecond = None
self._has_time = 0
months = (dt1.year*12+dt1.month)-(dt2.year*12+dt2.month)
self._set_months(months)
dtm = self.__radd__(dt2)
if dt1 < dt2:
while dt1 > dtm:
months += 1
self._set_months(months)
dtm = self.__radd__(dt2)
else:
while dt1 < dtm:
months -= 1
self._set_months(months)
dtm = self.__radd__(dt2)
delta = dt1 - dtm
self.seconds = delta.seconds+delta.days*86400
self.microseconds = delta.microseconds
else:
self.years = years
self.months = months
self.days = days+weeks*7
self.leapdays = leapdays
self.hours = hours
self.minutes = minutes
self.seconds = seconds
self.microseconds = microseconds
self.year = year
self.month = month
self.day = day
self.hour = hour
self.minute = minute
self.second = second
self.microsecond = microsecond
if isinstance(weekday, integer_types):
self.weekday = weekdays[weekday]
else:
self.weekday = weekday
yday = 0
if nlyearday:
yday = nlyearday
elif yearday:
yday = yearday
if yearday > 59:
self.leapdays = -1
if yday:
ydayidx = [31, 59, 90, 120, 151, 181, 212,
243, 273, 304, 334, 366]
for idx, ydays in enumerate(ydayidx):
if yday <= ydays:
self.month = idx+1
if idx == 0:
self.day = yday
else:
self.day = yday-ydayidx[idx-1]
break
else:
raise ValueError("invalid year day (%d)" % yday)
self._fix()
def _fix(self):
if abs(self.microseconds) > 999999:
s = self.microseconds//abs(self.microseconds)
div, mod = divmod(self.microseconds*s, 1000000)
self.microseconds = mod*s
self.seconds += div*s
if abs(self.seconds) > 59:
s = self.seconds//abs(self.seconds)
div, mod = divmod(self.seconds*s, 60)
self.seconds = mod*s
self.minutes += div*s
if abs(self.minutes) > 59:
s = self.minutes//abs(self.minutes)
div, mod = divmod(self.minutes*s, 60)
self.minutes = mod*s
self.hours += div*s
if abs(self.hours) > 23:
s = self.hours//abs(self.hours)
div, mod = divmod(self.hours*s, 24)
self.hours = mod*s
self.days += div*s
if abs(self.months) > 11:
s = self.months//abs(self.months)
div, mod = divmod(self.months*s, 12)
self.months = mod*s
self.years += div*s
if (self.hours or self.minutes or self.seconds or self.microseconds
or self.hour is not None or self.minute is not None or
self.second is not None or self.microsecond is not None):
self._has_time = 1
else:
self._has_time = 0
def _set_months(self, months):
self.months = months
if abs(self.months) > 11:
s = self.months//abs(self.months)
div, mod = divmod(self.months*s, 12)
self.months = mod*s
self.years = div*s
else:
self.years = 0
def __add__(self, other):
if isinstance(other, relativedelta):
return relativedelta(years=other.years+self.years,
months=other.months+self.months,
days=other.days+self.days,
hours=other.hours+self.hours,
minutes=other.minutes+self.minutes,
seconds=other.seconds+self.seconds,
microseconds=(other.microseconds +
self.microseconds),
leapdays=other.leapdays or self.leapdays,
year=other.year or self.year,
month=other.month or self.month,
day=other.day or self.day,
weekday=other.weekday or self.weekday,
hour=other.hour or self.hour,
minute=other.minute or self.minute,
second=other.second or self.second,
microsecond=(other.microsecond or
self.microsecond))
if not isinstance(other, datetime.date):
raise TypeError("unsupported type for add operation")
elif self._has_time and not isinstance(other, datetime.datetime):
other = datetime.datetime.fromordinal(other.toordinal())
year = (self.year or other.year)+self.years
month = self.month or other.month
if self.months:
assert 1 <= abs(self.months) <= 12
month += self.months
if month > 12:
year += 1
month -= 12
elif month < 1:
year -= 1
month += 12
day = min(calendar.monthrange(year, month)[1],
self.day or other.day)
repl = {"year": year, "month": month, "day": day}
for attr in ["hour", "minute", "second", "microsecond"]:
value = getattr(self, attr)
if value is not None:
repl[attr] = value
days = self.days
if self.leapdays and month > 2 and calendar.isleap(year):
days += self.leapdays
ret = (other.replace(**repl)
+ datetime.timedelta(days=days,
hours=self.hours,
minutes=self.minutes,
seconds=self.seconds,
microseconds=self.microseconds))
if self.weekday:
weekday, nth = self.weekday.weekday, self.weekday.n or 1
jumpdays = (abs(nth)-1)*7
if nth > 0:
jumpdays += (7-ret.weekday()+weekday) % 7
else:
jumpdays += (ret.weekday()-weekday) % 7
jumpdays *= -1
ret += datetime.timedelta(days=jumpdays)
return ret
def __radd__(self, other):
return self.__add__(other)
def __rsub__(self, other):
return self.__neg__().__radd__(other)
def __sub__(self, other):
if not isinstance(other, relativedelta):
raise TypeError("unsupported type for sub operation")
return relativedelta(years=self.years-other.years,
months=self.months-other.months,
days=self.days-other.days,
hours=self.hours-other.hours,
minutes=self.minutes-other.minutes,
seconds=self.seconds-other.seconds,
microseconds=self.microseconds-other.microseconds,
leapdays=self.leapdays or other.leapdays,
year=self.year or other.year,
month=self.month or other.month,
day=self.day or other.day,
weekday=self.weekday or other.weekday,
hour=self.hour or other.hour,
minute=self.minute or other.minute,
second=self.second or other.second,
microsecond=self.microsecond or other.microsecond)
def __neg__(self):
return relativedelta(years=-self.years,
months=-self.months,
days=-self.days,
hours=-self.hours,
minutes=-self.minutes,
seconds=-self.seconds,
microseconds=-self.microseconds,
leapdays=self.leapdays,
year=self.year,
month=self.month,
day=self.day,
weekday=self.weekday,
hour=self.hour,
minute=self.minute,
second=self.second,
microsecond=self.microsecond)
def __bool__(self):
return not (not self.years and
not self.months and
not self.days and
not self.hours and
not self.minutes and
not self.seconds and
not self.microseconds and
not self.leapdays and
self.year is None and
self.month is None and
self.day is None and
self.weekday is None and
self.hour is None and
self.minute is None and
self.second is None and
self.microsecond is None)
# Compatibility with Python 2.x
__nonzero__ = __bool__
def __mul__(self, other):
f = float(other)
return relativedelta(years=int(self.years*f),
months=int(self.months*f),
days=int(self.days*f),
hours=int(self.hours*f),
minutes=int(self.minutes*f),
seconds=int(self.seconds*f),
microseconds=int(self.microseconds*f),
leapdays=self.leapdays,
year=self.year,
month=self.month,
day=self.day,
weekday=self.weekday,
hour=self.hour,
minute=self.minute,
second=self.second,
microsecond=self.microsecond)
__rmul__ = __mul__
def __eq__(self, other):
if not isinstance(other, relativedelta):
return False
if self.weekday or other.weekday:
if not self.weekday or not other.weekday:
return False
if self.weekday.weekday != other.weekday.weekday:
return False
n1, n2 = self.weekday.n, other.weekday.n
if n1 != n2 and not ((not n1 or n1 == 1) and (not n2 or n2 == 1)):
return False
return (self.years == other.years and
self.months == other.months and
self.days == other.days and
self.hours == other.hours and
self.minutes == other.minutes and
self.seconds == other.seconds and
self.leapdays == other.leapdays and
self.year == other.year and
self.month == other.month and
self.day == other.day and
self.hour == other.hour and
self.minute == other.minute and
self.second == other.second and
self.microsecond == other.microsecond)
def __ne__(self, other):
return not self.__eq__(other)
def __div__(self, other):
return self.__mul__(1/float(other))
__truediv__ = __div__
def __repr__(self):
l = []
for attr in ["years", "months", "days", "leapdays",
"hours", "minutes", "seconds", "microseconds"]:
value = getattr(self, attr)
if value:
l.append("%s=%+d" % (attr, value))
for attr in ["year", "month", "day", "weekday",
"hour", "minute", "second", "microsecond"]:
value = getattr(self, attr)
if value is not None:
l.append("%s=%s" % (attr, repr(value)))
return "%s(%s)" % (self.__class__.__name__, ", ".join(l))
# vim:ts=4:sw=4:et
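
To make the absolute/relative distinction in the class docstring concrete, a doctest-style sketch (dates chosen arbitrarily):

>>> from datetime import datetime
>>> from dateutil.relativedelta import relativedelta, MO
>>> dt = datetime(2003, 10, 11)
>>> dt + relativedelta(months=+1)   # relative 'months': arithmetic on the month
datetime.datetime(2003, 11, 11, 0, 0)
>>> dt + relativedelta(month=1)     # absolute 'month': replaces the month
datetime.datetime(2003, 1, 11, 0, 0)
>>> dt + relativedelta(months=+1, weekday=MO(+1))   # then jump to next Monday
datetime.datetime(2003, 11, 17, 0, 0)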

File diff suppressed because it is too large.

File diff suppressed because it is too large.

dateutil/tz.py (new file)

@@ -0,0 +1,986 @@
# -*- coding: utf-8 -*-
"""
This module offers timezone implementations subclassing the abstract
:py:class:`datetime.tzinfo` type. There are classes to handle tzfile format
files (usually found in :file:`/etc/localtime`, :file:`/usr/share/zoneinfo`,
etc), TZ
environment string (in all known formats), given ranges (with help from
relative deltas), local machine timezone, fixed offset timezone, and UTC
timezone.
"""
import datetime
import struct
import time
import sys
import os
from six import string_types, PY3
try:
from dateutil.tzwin import tzwin, tzwinlocal
except ImportError:
tzwin = tzwinlocal = None
relativedelta = None
parser = None
rrule = None
__all__ = ["tzutc", "tzoffset", "tzlocal", "tzfile", "tzrange",
"tzstr", "tzical", "tzwin", "tzwinlocal", "gettz"]
def tzname_in_python2(myfunc):
"""Change unicode output into bytestrings in Python 2
tzname() API changed in Python 3. It used to return bytes, but was changed
to unicode strings
"""
def inner_func(*args, **kwargs):
if PY3:
return myfunc(*args, **kwargs)
else:
return myfunc(*args, **kwargs).encode()
return inner_func
ZERO = datetime.timedelta(0)
EPOCHORDINAL = datetime.datetime.utcfromtimestamp(0).toordinal()
class tzutc(datetime.tzinfo):
def utcoffset(self, dt):
return ZERO
def dst(self, dt):
return ZERO
@tzname_in_python2
def tzname(self, dt):
return "UTC"
def __eq__(self, other):
return (isinstance(other, tzutc) or
(isinstance(other, tzoffset) and other._offset == ZERO))
def __ne__(self, other):
return not self.__eq__(other)
def __repr__(self):
return "%s()" % self.__class__.__name__
__reduce__ = object.__reduce__
class tzoffset(datetime.tzinfo):
def __init__(self, name, offset):
self._name = name
self._offset = datetime.timedelta(seconds=offset)
def utcoffset(self, dt):
return self._offset
def dst(self, dt):
return ZERO
@tzname_in_python2
def tzname(self, dt):
return self._name
def __eq__(self, other):
return (isinstance(other, tzoffset) and
self._offset == other._offset)
def __ne__(self, other):
return not self.__eq__(other)
def __repr__(self):
return "%s(%s, %s)" % (self.__class__.__name__,
repr(self._name),
self._offset.days*86400+self._offset.seconds)
__reduce__ = object.__reduce__
class tzlocal(datetime.tzinfo):
_std_offset = datetime.timedelta(seconds=-time.timezone)
if time.daylight:
_dst_offset = datetime.timedelta(seconds=-time.altzone)
else:
_dst_offset = _std_offset
def utcoffset(self, dt):
if self._isdst(dt):
return self._dst_offset
else:
return self._std_offset
def dst(self, dt):
if self._isdst(dt):
return self._dst_offset-self._std_offset
else:
return ZERO
@tzname_in_python2
def tzname(self, dt):
return time.tzname[self._isdst(dt)]
def _isdst(self, dt):
# We can't use mktime here. It is unstable when deciding if
# the hour near a transition is DST or not.
#
# timestamp = time.mktime((dt.year, dt.month, dt.day, dt.hour,
# dt.minute, dt.second, dt.weekday(), 0, -1))
# return time.localtime(timestamp).tm_isdst
#
# The code above yields the following result:
#
# >>> import tz, datetime
# >>> t = tz.tzlocal()
# >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
# 'BRDT'
# >>> datetime.datetime(2003,2,16,0,tzinfo=t).tzname()
# 'BRST'
# >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
# 'BRST'
# >>> datetime.datetime(2003,2,15,22,tzinfo=t).tzname()
# 'BRDT'
# >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
# 'BRDT'
#
# Here is a more stable implementation:
#
timestamp = ((dt.toordinal() - EPOCHORDINAL) * 86400
+ dt.hour * 3600
+ dt.minute * 60
+ dt.second)
return time.localtime(timestamp+time.timezone).tm_isdst
def __eq__(self, other):
if not isinstance(other, tzlocal):
return False
return (self._std_offset == other._std_offset and
self._dst_offset == other._dst_offset)
def __ne__(self, other):
return not self.__eq__(other)
def __repr__(self):
return "%s()" % self.__class__.__name__
__reduce__ = object.__reduce__
class _ttinfo(object):
__slots__ = ["offset", "delta", "isdst", "abbr", "isstd", "isgmt"]
def __init__(self):
for attr in self.__slots__:
setattr(self, attr, None)
def __repr__(self):
l = []
for attr in self.__slots__:
value = getattr(self, attr)
if value is not None:
l.append("%s=%s" % (attr, repr(value)))
return "%s(%s)" % (self.__class__.__name__, ", ".join(l))
def __eq__(self, other):
if not isinstance(other, _ttinfo):
return False
return (self.offset == other.offset and
self.delta == other.delta and
self.isdst == other.isdst and
self.abbr == other.abbr and
self.isstd == other.isstd and
self.isgmt == other.isgmt)
def __ne__(self, other):
return not self.__eq__(other)
def __getstate__(self):
state = {}
for name in self.__slots__:
state[name] = getattr(self, name, None)
return state
def __setstate__(self, state):
for name in self.__slots__:
if name in state:
setattr(self, name, state[name])
class tzfile(datetime.tzinfo):
# http://www.twinsun.com/tz/tz-link.htm
# ftp://ftp.iana.org/tz/tz*.tar.gz
def __init__(self, fileobj, filename=None):
file_opened_here = False
if isinstance(fileobj, string_types):
self._filename = fileobj
fileobj = open(fileobj, 'rb')
file_opened_here = True
elif filename is not None:
self._filename = filename
elif hasattr(fileobj, "name"):
self._filename = fileobj.name
else:
self._filename = repr(fileobj)
# From tzfile(5):
#
# The time zone information files used by tzset(3)
# begin with the magic characters "TZif" to identify
# them as time zone information files, followed by
# sixteen bytes reserved for future use, followed by
# six four-byte values of type long, written in a
# ``standard'' byte order (the high-order byte
# of the value is written first).
try:
if fileobj.read(4).decode() != "TZif":
raise ValueError("magic not found")
fileobj.read(16)
(
# The number of UTC/local indicators stored in the file.
ttisgmtcnt,
# The number of standard/wall indicators stored in the file.
ttisstdcnt,
# The number of leap seconds for which data is
# stored in the file.
leapcnt,
# The number of "transition times" for which data
# is stored in the file.
timecnt,
# The number of "local time types" for which data
# is stored in the file (must not be zero).
typecnt,
# The number of characters of "time zone
# abbreviation strings" stored in the file.
charcnt,
) = struct.unpack(">6l", fileobj.read(24))
# The above header is followed by tzh_timecnt four-byte
# values of type long, sorted in ascending order.
# These values are written in ``standard'' byte order.
# Each is used as a transition time (as returned by
# time(2)) at which the rules for computing local time
# change.
if timecnt:
self._trans_list = struct.unpack(">%dl" % timecnt,
fileobj.read(timecnt*4))
else:
self._trans_list = []
# Next come tzh_timecnt one-byte values of type unsigned
# char; each one tells which of the different types of
# ``local time'' types described in the file is associated
# with the same-indexed transition time. These values
# serve as indices into an array of ttinfo structures that
# appears next in the file.
if timecnt:
self._trans_idx = struct.unpack(">%dB" % timecnt,
fileobj.read(timecnt))
else:
self._trans_idx = []
# Each ttinfo structure is written as a four-byte value
# for tt_gmtoff of type long, in a standard byte
# order, followed by a one-byte value for tt_isdst
# and a one-byte value for tt_abbrind. In each
# structure, tt_gmtoff gives the number of
# seconds to be added to UTC, tt_isdst tells whether
# tm_isdst should be set by localtime(3), and
# tt_abbrind serves as an index into the array of
# time zone abbreviation characters that follow the
# ttinfo structure(s) in the file.
ttinfo = []
for i in range(typecnt):
ttinfo.append(struct.unpack(">lbb", fileobj.read(6)))
abbr = fileobj.read(charcnt).decode()
# Then there are tzh_leapcnt pairs of four-byte
# values, written in standard byte order; the
# first value of each pair gives the time (as
# returned by time(2)) at which a leap second
# occurs; the second gives the total number of
# leap seconds to be applied after the given time.
# The pairs of values are sorted in ascending order
# by time.
# Not used, for now
# if leapcnt:
# leap = struct.unpack(">%dl" % (leapcnt*2),
# fileobj.read(leapcnt*8))
# Then there are tzh_ttisstdcnt standard/wall
# indicators, each stored as a one-byte value;
# they tell whether the transition times associated
# with local time types were specified as standard
# time or wall clock time, and are used when
# a time zone file is used in handling POSIX-style
# time zone environment variables.
if ttisstdcnt:
isstd = struct.unpack(">%db" % ttisstdcnt,
fileobj.read(ttisstdcnt))
# Finally, there are tzh_ttisgmtcnt UTC/local
# indicators, each stored as a one-byte value;
# they tell whether the transition times associated
# with local time types were specified as UTC or
# local time, and are used when a time zone file
# is used in handling POSIX-style time zone envi-
# ronment variables.
if ttisgmtcnt:
isgmt = struct.unpack(">%db" % ttisgmtcnt,
fileobj.read(ttisgmtcnt))
# ** Everything has been read **
finally:
if file_opened_here:
fileobj.close()
# Build ttinfo list
self._ttinfo_list = []
for i in range(typecnt):
gmtoff, isdst, abbrind = ttinfo[i]
# Round to full-minutes if that's not the case. Python's
# datetime doesn't accept sub-minute timezones. Check
# http://python.org/sf/1447945 for some information.
gmtoff = (gmtoff+30)//60*60
tti = _ttinfo()
tti.offset = gmtoff
tti.delta = datetime.timedelta(seconds=gmtoff)
tti.isdst = isdst
tti.abbr = abbr[abbrind:abbr.find('\x00', abbrind)]
tti.isstd = (ttisstdcnt > i and isstd[i] != 0)
tti.isgmt = (ttisgmtcnt > i and isgmt[i] != 0)
self._ttinfo_list.append(tti)
# Replace ttinfo indexes for ttinfo objects.
trans_idx = []
for idx in self._trans_idx:
trans_idx.append(self._ttinfo_list[idx])
self._trans_idx = tuple(trans_idx)
# Set standard, dst, and before ttinfos. before will be
# used when a given time is before any transitions,
# and will be set to the first non-dst ttinfo, or to
# the first dst, if all of them are dst.
self._ttinfo_std = None
self._ttinfo_dst = None
self._ttinfo_before = None
if self._ttinfo_list:
if not self._trans_list:
self._ttinfo_std = self._ttinfo_first = self._ttinfo_list[0]
else:
for i in range(timecnt-1, -1, -1):
tti = self._trans_idx[i]
if not self._ttinfo_std and not tti.isdst:
self._ttinfo_std = tti
elif not self._ttinfo_dst and tti.isdst:
self._ttinfo_dst = tti
if self._ttinfo_std and self._ttinfo_dst:
break
else:
if self._ttinfo_dst and not self._ttinfo_std:
self._ttinfo_std = self._ttinfo_dst
for tti in self._ttinfo_list:
if not tti.isdst:
self._ttinfo_before = tti
break
else:
self._ttinfo_before = self._ttinfo_list[0]
# Now fix transition times to become relative to wall time.
#
# I'm not sure about this. In my tests, the tz source file
# is setup to wall time, and in the binary file isstd and
# isgmt are off, so it should be in wall time. OTOH, it's
# always in gmt time. Let me know if you have comments
# about this.
laststdoffset = 0
self._trans_list = list(self._trans_list)
for i in range(len(self._trans_list)):
tti = self._trans_idx[i]
if not tti.isdst:
# This is std time.
self._trans_list[i] += tti.offset
laststdoffset = tti.offset
else:
# This is dst time. Convert to std.
self._trans_list[i] += laststdoffset
self._trans_list = tuple(self._trans_list)
def _find_ttinfo(self, dt, laststd=0):
timestamp = ((dt.toordinal() - EPOCHORDINAL) * 86400
+ dt.hour * 3600
+ dt.minute * 60
+ dt.second)
idx = 0
for trans in self._trans_list:
if timestamp < trans:
break
idx += 1
else:
return self._ttinfo_std
if idx == 0:
return self._ttinfo_before
if laststd:
while idx > 0:
tti = self._trans_idx[idx-1]
if not tti.isdst:
return tti
idx -= 1
else:
return self._ttinfo_std
else:
return self._trans_idx[idx-1]
def utcoffset(self, dt):
if not self._ttinfo_std:
return ZERO
return self._find_ttinfo(dt).delta
def dst(self, dt):
if not self._ttinfo_dst:
return ZERO
tti = self._find_ttinfo(dt)
if not tti.isdst:
return ZERO
# The documentation says that utcoffset()-dst() must
# be constant for every dt.
return tti.delta-self._find_ttinfo(dt, laststd=1).delta
# An alternative for that would be:
#
# return self._ttinfo_dst.offset-self._ttinfo_std.offset
#
# However, this class stores historical changes in the
# dst offset, so I believe that this wouldn't be the right
# way to implement this.
@tzname_in_python2
def tzname(self, dt):
if not self._ttinfo_std:
return None
return self._find_ttinfo(dt).abbr
def __eq__(self, other):
if not isinstance(other, tzfile):
return False
return (self._trans_list == other._trans_list and
self._trans_idx == other._trans_idx and
self._ttinfo_list == other._ttinfo_list)
def __ne__(self, other):
return not self.__eq__(other)
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, repr(self._filename))
def __reduce__(self):
if not os.path.isfile(self._filename):
raise ValueError("Unpickable %s class" % self.__class__.__name__)
return (self.__class__, (self._filename,))
class tzrange(datetime.tzinfo):
def __init__(self, stdabbr, stdoffset=None,
dstabbr=None, dstoffset=None,
start=None, end=None):
global relativedelta
if not relativedelta:
from dateutil import relativedelta
self._std_abbr = stdabbr
self._dst_abbr = dstabbr
if stdoffset is not None:
self._std_offset = datetime.timedelta(seconds=stdoffset)
else:
self._std_offset = ZERO
if dstoffset is not None:
self._dst_offset = datetime.timedelta(seconds=dstoffset)
elif dstabbr and stdoffset is not None:
self._dst_offset = self._std_offset+datetime.timedelta(hours=+1)
else:
self._dst_offset = ZERO
if dstabbr and start is None:
self._start_delta = relativedelta.relativedelta(
hours=+2, month=4, day=1, weekday=relativedelta.SU(+1))
else:
self._start_delta = start
if dstabbr and end is None:
self._end_delta = relativedelta.relativedelta(
hours=+1, month=10, day=31, weekday=relativedelta.SU(-1))
else:
self._end_delta = end
def utcoffset(self, dt):
if self._isdst(dt):
return self._dst_offset
else:
return self._std_offset
def dst(self, dt):
if self._isdst(dt):
return self._dst_offset-self._std_offset
else:
return ZERO
@tzname_in_python2
def tzname(self, dt):
if self._isdst(dt):
return self._dst_abbr
else:
return self._std_abbr
def _isdst(self, dt):
if not self._start_delta:
return False
year = datetime.datetime(dt.year, 1, 1)
start = year+self._start_delta
end = year+self._end_delta
dt = dt.replace(tzinfo=None)
if start < end:
return dt >= start and dt < end
else:
return dt >= start or dt < end
def __eq__(self, other):
if not isinstance(other, tzrange):
return False
return (self._std_abbr == other._std_abbr and
self._dst_abbr == other._dst_abbr and
self._std_offset == other._std_offset and
self._dst_offset == other._dst_offset and
self._start_delta == other._start_delta and
self._end_delta == other._end_delta)
def __ne__(self, other):
return not self.__eq__(other)
def __repr__(self):
return "%s(...)" % self.__class__.__name__
__reduce__ = object.__reduce__
class tzstr(tzrange):
def __init__(self, s):
global parser
if not parser:
from dateutil import parser
self._s = s
res = parser._parsetz(s)
if res is None:
raise ValueError("unknown string format")
# Here we break the compatibility with the TZ variable handling.
# GMT-3 actually *means* the timezone -3.
if res.stdabbr in ("GMT", "UTC"):
res.stdoffset *= -1
# We must initialize it first, since _delta() needs
# _std_offset and _dst_offset set. Use False in start/end
# to avoid building it two times.
tzrange.__init__(self, res.stdabbr, res.stdoffset,
res.dstabbr, res.dstoffset,
start=False, end=False)
if not res.dstabbr:
self._start_delta = None
self._end_delta = None
else:
self._start_delta = self._delta(res.start)
if self._start_delta:
self._end_delta = self._delta(res.end, isend=1)
def _delta(self, x, isend=0):
kwargs = {}
if x.month is not None:
kwargs["month"] = x.month
if x.weekday is not None:
kwargs["weekday"] = relativedelta.weekday(x.weekday, x.week)
if x.week > 0:
kwargs["day"] = 1
else:
kwargs["day"] = 31
elif x.day:
kwargs["day"] = x.day
elif x.yday is not None:
kwargs["yearday"] = x.yday
elif x.jyday is not None:
kwargs["nlyearday"] = x.jyday
if not kwargs:
# Default is to start on first sunday of april, and end
# on last sunday of october.
if not isend:
kwargs["month"] = 4
kwargs["day"] = 1
kwargs["weekday"] = relativedelta.SU(+1)
else:
kwargs["month"] = 10
kwargs["day"] = 31
kwargs["weekday"] = relativedelta.SU(-1)
if x.time is not None:
kwargs["seconds"] = x.time
else:
# Default is 2AM.
kwargs["seconds"] = 7200
if isend:
# Convert to standard time, to follow the documented way
# of working with the extra hour. See the documentation
# of the tzinfo class.
delta = self._dst_offset-self._std_offset
kwargs["seconds"] -= delta.seconds+delta.days*86400
return relativedelta.relativedelta(**kwargs)
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, repr(self._s))
class _tzicalvtzcomp(object):
def __init__(self, tzoffsetfrom, tzoffsetto, isdst,
tzname=None, rrule=None):
self.tzoffsetfrom = datetime.timedelta(seconds=tzoffsetfrom)
self.tzoffsetto = datetime.timedelta(seconds=tzoffsetto)
self.tzoffsetdiff = self.tzoffsetto-self.tzoffsetfrom
self.isdst = isdst
self.tzname = tzname
self.rrule = rrule
class _tzicalvtz(datetime.tzinfo):
def __init__(self, tzid, comps=[]):
self._tzid = tzid
self._comps = comps
self._cachedate = []
self._cachecomp = []
def _find_comp(self, dt):
if len(self._comps) == 1:
return self._comps[0]
dt = dt.replace(tzinfo=None)
try:
return self._cachecomp[self._cachedate.index(dt)]
except ValueError:
pass
lastcomp = None
lastcompdt = None
for comp in self._comps:
if not comp.isdst:
# Handle the extra hour in DST -> STD
compdt = comp.rrule.before(dt-comp.tzoffsetdiff, inc=True)
else:
compdt = comp.rrule.before(dt, inc=True)
if compdt and (not lastcompdt or lastcompdt < compdt):
lastcompdt = compdt
lastcomp = comp
if not lastcomp:
# RFC says nothing about what to do when a given
# time is before the first onset date. We'll look for the
# first standard component, or the first component, if
# none is found.
for comp in self._comps:
if not comp.isdst:
lastcomp = comp
break
else:
lastcomp = self._comps[0]
self._cachedate.insert(0, dt)
self._cachecomp.insert(0, lastcomp)
if len(self._cachedate) > 10:
self._cachedate.pop()
self._cachecomp.pop()
return lastcomp
def utcoffset(self, dt):
return self._find_comp(dt).tzoffsetto
def dst(self, dt):
comp = self._find_comp(dt)
if comp.isdst:
return comp.tzoffsetdiff
else:
return ZERO
@tzname_in_python2
def tzname(self, dt):
return self._find_comp(dt).tzname
def __repr__(self):
return "<tzicalvtz %s>" % repr(self._tzid)
__reduce__ = object.__reduce__
class tzical(object):
def __init__(self, fileobj):
global rrule
if not rrule:
from dateutil import rrule
if isinstance(fileobj, string_types):
self._s = fileobj
# ical should be encoded in UTF-8 with CRLF
fileobj = open(fileobj, 'r')
elif hasattr(fileobj, "name"):
self._s = fileobj.name
else:
self._s = repr(fileobj)
self._vtz = {}
self._parse_rfc(fileobj.read())
def keys(self):
return list(self._vtz.keys())
def get(self, tzid=None):
if tzid is None:
keys = list(self._vtz.keys())
if len(keys) == 0:
raise ValueError("no timezones defined")
elif len(keys) > 1:
raise ValueError("more than one timezone available")
tzid = keys[0]
return self._vtz.get(tzid)
def _parse_offset(self, s):
s = s.strip()
if not s:
raise ValueError("empty offset")
if s[0] in ('+', '-'):
signal = (-1, +1)[s[0] == '+']
s = s[1:]
else:
signal = +1
if len(s) == 4:
return (int(s[:2])*3600+int(s[2:])*60)*signal
elif len(s) == 6:
return (int(s[:2])*3600+int(s[2:4])*60+int(s[4:]))*signal
else:
raise ValueError("invalid offset: "+s)
def _parse_rfc(self, s):
lines = s.splitlines()
if not lines:
raise ValueError("empty string")
# Unfold
i = 0
while i < len(lines):
line = lines[i].rstrip()
if not line:
del lines[i]
elif i > 0 and line[0] == " ":
lines[i-1] += line[1:]
del lines[i]
else:
i += 1
tzid = None
comps = []
invtz = False
comptype = None
for line in lines:
if not line:
continue
name, value = line.split(':', 1)
parms = name.split(';')
if not parms:
raise ValueError("empty property name")
name = parms[0].upper()
parms = parms[1:]
if invtz:
if name == "BEGIN":
if value in ("STANDARD", "DAYLIGHT"):
# Process component
pass
else:
raise ValueError("unknown component: "+value)
comptype = value
founddtstart = False
tzoffsetfrom = None
tzoffsetto = None
rrulelines = []
tzname = None
elif name == "END":
if value == "VTIMEZONE":
if comptype:
raise ValueError("component not closed: "+comptype)
if not tzid:
raise ValueError("mandatory TZID not found")
if not comps:
raise ValueError(
"at least one component is needed")
# Process vtimezone
self._vtz[tzid] = _tzicalvtz(tzid, comps)
invtz = False
elif value == comptype:
if not founddtstart:
raise ValueError("mandatory DTSTART not found")
if tzoffsetfrom is None:
raise ValueError(
"mandatory TZOFFSETFROM not found")
if tzoffsetto is None:
raise ValueError(
"mandatory TZOFFSETFROM not found")
# Process component
rr = None
if rrulelines:
rr = rrule.rrulestr("\n".join(rrulelines),
compatible=True,
ignoretz=True,
cache=True)
comp = _tzicalvtzcomp(tzoffsetfrom, tzoffsetto,
(comptype == "DAYLIGHT"),
tzname, rr)
comps.append(comp)
comptype = None
else:
raise ValueError("invalid component end: "+value)
elif comptype:
if name == "DTSTART":
rrulelines.append(line)
founddtstart = True
elif name in ("RRULE", "RDATE", "EXRULE", "EXDATE"):
rrulelines.append(line)
elif name == "TZOFFSETFROM":
if parms:
raise ValueError(
"unsupported %s parm: %s " % (name, parms[0]))
tzoffsetfrom = self._parse_offset(value)
elif name == "TZOFFSETTO":
if parms:
raise ValueError(
"unsupported TZOFFSETTO parm: "+parms[0])
tzoffsetto = self._parse_offset(value)
elif name == "TZNAME":
if parms:
raise ValueError(
"unsupported TZNAME parm: "+parms[0])
tzname = value
elif name == "COMMENT":
pass
else:
raise ValueError("unsupported property: "+name)
else:
if name == "TZID":
if parms:
raise ValueError(
"unsupported TZID parm: "+parms[0])
tzid = value
elif name in ("TZURL", "LAST-MODIFIED", "COMMENT"):
pass
else:
raise ValueError("unsupported property: "+name)
elif name == "BEGIN" and value == "VTIMEZONE":
tzid = None
comps = []
invtz = True
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, repr(self._s))
if sys.platform != "win32":
TZFILES = ["/etc/localtime", "localtime"]
TZPATHS = ["/usr/share/zoneinfo", "/usr/lib/zoneinfo", "/etc/zoneinfo"]
else:
TZFILES = []
TZPATHS = []
def gettz(name=None):
tz = None
if not name:
try:
name = os.environ["TZ"]
except KeyError:
pass
if name is None or name == ":":
for filepath in TZFILES:
if not os.path.isabs(filepath):
filename = filepath
for path in TZPATHS:
filepath = os.path.join(path, filename)
if os.path.isfile(filepath):
break
else:
continue
if os.path.isfile(filepath):
try:
tz = tzfile(filepath)
break
except (IOError, OSError, ValueError):
pass
else:
tz = tzlocal()
else:
if name.startswith(":"):
name = name[1:]
if os.path.isabs(name):
if os.path.isfile(name):
tz = tzfile(name)
else:
tz = None
else:
for path in TZPATHS:
filepath = os.path.join(path, name)
if not os.path.isfile(filepath):
filepath = filepath.replace(' ', '_')
if not os.path.isfile(filepath):
continue
try:
tz = tzfile(filepath)
break
except (IOError, OSError, ValueError):
pass
else:
tz = None
if tzwin is not None:
try:
tz = tzwin(name)
except WindowsError:
tz = None
if not tz:
from dateutil.zoneinfo import gettz
tz = gettz(name)
if not tz:
for c in name:
# name must have at least one offset to be a tzstr
if c in "0123456789":
try:
tz = tzstr(name)
except ValueError:
pass
break
else:
if name in ("GMT", "UTC"):
tz = tzutc()
elif name in time.tzname:
tz = tzlocal()
return tz
# vim:ts=4:sw=4:et
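
A brief doctest-style sketch of the classes above; tzstr("EST5EDT") relies on the default first-Sunday-of-April / last-Sunday-of-October rules implemented in tzstr._delta:

>>> from datetime import datetime
>>> from dateutil import tz
>>> utc = tz.tzutc()
>>> est = tz.tzoffset("EST", -18000)
>>> datetime(2015, 8, 4, 12, 0, tzinfo=utc).astimezone(est)
datetime.datetime(2015, 8, 4, 7, 0, tzinfo=tzoffset('EST', -18000))
>>> datetime(2015, 8, 4, 12, 0, tzinfo=tz.tzstr("EST5EDT")).tzname()
'EDT'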

dateutil/tzwin.py (new file)

@@ -0,0 +1,184 @@
# This code was originally contributed by Jeffrey Harris.
import datetime
import struct
from six.moves import winreg
__all__ = ["tzwin", "tzwinlocal"]
ONEWEEK = datetime.timedelta(7)
TZKEYNAMENT = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones"
TZKEYNAME9X = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Time Zones"
TZLOCALKEYNAME = r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation"
def _settzkeyname():
handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
try:
winreg.OpenKey(handle, TZKEYNAMENT).Close()
TZKEYNAME = TZKEYNAMENT
except WindowsError:
TZKEYNAME = TZKEYNAME9X
handle.Close()
return TZKEYNAME
TZKEYNAME = _settzkeyname()
class tzwinbase(datetime.tzinfo):
"""tzinfo class based on win32's timezones available in the registry."""
def utcoffset(self, dt):
if self._isdst(dt):
return datetime.timedelta(minutes=self._dstoffset)
else:
return datetime.timedelta(minutes=self._stdoffset)
def dst(self, dt):
if self._isdst(dt):
minutes = self._dstoffset - self._stdoffset
return datetime.timedelta(minutes=minutes)
else:
return datetime.timedelta(0)
def tzname(self, dt):
if self._isdst(dt):
return self._dstname
else:
return self._stdname
def list():
"""Return a list of all time zones known to the system."""
handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
tzkey = winreg.OpenKey(handle, TZKEYNAME)
result = [winreg.EnumKey(tzkey, i)
for i in range(winreg.QueryInfoKey(tzkey)[0])]
tzkey.Close()
handle.Close()
return result
list = staticmethod(list)
def display(self):
return self._display
def _isdst(self, dt):
if not self._dstmonth:
# dstmonth == 0 signals the zone has no daylight saving time
return False
dston = picknthweekday(dt.year, self._dstmonth, self._dstdayofweek,
self._dsthour, self._dstminute,
self._dstweeknumber)
dstoff = picknthweekday(dt.year, self._stdmonth, self._stddayofweek,
self._stdhour, self._stdminute,
self._stdweeknumber)
if dston < dstoff:
return dston <= dt.replace(tzinfo=None) < dstoff
else:
return not dstoff <= dt.replace(tzinfo=None) < dston
class tzwin(tzwinbase):
def __init__(self, name):
self._name = name
# multiple contexts only possible in 2.7 and 3.1, we still support 2.6
with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle:
with winreg.OpenKey(handle,
"%s\%s" % (TZKEYNAME, name)) as tzkey:
keydict = valuestodict(tzkey)
self._stdname = keydict["Std"].encode("iso-8859-1")
self._dstname = keydict["Dlt"].encode("iso-8859-1")
self._display = keydict["Display"]
# See http://ww_winreg.jsiinc.com/SUBA/tip0300/rh0398.htm
tup = struct.unpack("=3l16h", keydict["TZI"])
self._stdoffset = -tup[0]-tup[1] # Bias + StandardBias * -1
self._dstoffset = self._stdoffset-tup[2] # + DaylightBias * -1
# for the meaning see the win32 TIME_ZONE_INFORMATION structure docs
# http://msdn.microsoft.com/en-us/library/windows/desktop/ms725481(v=vs.85).aspx
(self._stdmonth,
self._stddayofweek, # Sunday = 0
self._stdweeknumber, # Last = 5
self._stdhour,
self._stdminute) = tup[4:9]
(self._dstmonth,
self._dstdayofweek, # Sunday = 0
self._dstweeknumber, # Last = 5
self._dsthour,
self._dstminute) = tup[12:17]
def __repr__(self):
return "tzwin(%s)" % repr(self._name)
def __reduce__(self):
return (self.__class__, (self._name,))
class tzwinlocal(tzwinbase):
def __init__(self):
with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle:
with winreg.OpenKey(handle, TZLOCALKEYNAME) as tzlocalkey:
keydict = valuestodict(tzlocalkey)
self._stdname = keydict["StandardName"].encode("iso-8859-1")
self._dstname = keydict["DaylightName"].encode("iso-8859-1")
try:
            with winreg.OpenKey(
                    handle, "%s\\%s" % (TZKEYNAME, self._stdname)) as tzkey:
_keydict = valuestodict(tzkey)
self._display = _keydict["Display"]
except OSError:
self._display = None
self._stdoffset = -keydict["Bias"]-keydict["StandardBias"]
self._dstoffset = self._stdoffset-keydict["DaylightBias"]
        # See http://www.jsiinc.com/SUBA/tip0300/rh0398.htm
tup = struct.unpack("=8h", keydict["StandardStart"])
(self._stdmonth,
self._stddayofweek, # Sunday = 0
self._stdweeknumber, # Last = 5
self._stdhour,
self._stdminute) = tup[1:6]
tup = struct.unpack("=8h", keydict["DaylightStart"])
(self._dstmonth,
self._dstdayofweek, # Sunday = 0
self._dstweeknumber, # Last = 5
self._dsthour,
self._dstminute) = tup[1:6]
def __reduce__(self):
return (self.__class__, ())
def picknthweekday(year, month, dayofweek, hour, minute, whichweek):
"""dayofweek == 0 means Sunday, whichweek 5 means last instance"""
first = datetime.datetime(year, month, 1, hour, minute)
weekdayone = first.replace(day=((dayofweek-first.isoweekday()) % 7+1))
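    # weekdayone is the first occurrence of `dayofweek` in the month; e.g.
    # for March 2015 with dayofweek == 0 (Sunday) it is Sunday, March 1,
    # so whichweek == 2 selects March 8, the second Sunday.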
for n in range(whichweek):
        dt = weekdayone + (whichweek - n - 1) * ONEWEEK
if dt.month == month:
return dt
def valuestodict(key):
"""Convert a registry key's values to a dictionary."""
    result = {}
    size = winreg.QueryInfoKey(key)[1]
    for i in range(size):
        data = winreg.EnumValue(key, i)
        result[data[0]] = data[1]
    return result

View file

@ -0,0 +1,108 @@
# -*- coding: utf-8 -*-
import logging
import os
import warnings
import tempfile
import shutil
from subprocess import check_call
from tarfile import TarFile
from pkgutil import get_data
from io import BytesIO
from contextlib import closing
from dateutil.tz import tzfile
__all__ = ["gettz", "rebuild"]
_ZONEFILENAME = "dateutil-zoneinfo.tar.gz"
# python2.6 compatibility. Note that TarFile.__exit__ != TarFile.close, but
# it's close enough for python2.6
_tar_open = TarFile.open
if not hasattr(TarFile, '__exit__'):
def _tar_open(*args, **kwargs):
return closing(TarFile.open(*args, **kwargs))
class tzfile(tzfile):
def __reduce__(self):
return (gettz, (self._filename,))
def getzoneinfofile_stream():
try:
return BytesIO(get_data(__name__, _ZONEFILENAME))
except IOError as e: # TODO switch to FileNotFoundError?
warnings.warn("I/O error({0}): {1}".format(e.errno, e.strerror))
return None
class ZoneInfoFile(object):
def __init__(self, zonefile_stream=None):
if zonefile_stream is not None:
with _tar_open(fileobj=zonefile_stream, mode='r') as tf:
# dict comprehension does not work on python2.6
# TODO: get back to the nicer syntax when we ditch python2.6
# self.zones = {zf.name: tzfile(tf.extractfile(zf),
# filename = zf.name)
# for zf in tf.getmembers() if zf.isfile()}
self.zones = dict((zf.name, tzfile(tf.extractfile(zf),
filename=zf.name))
for zf in tf.getmembers() if zf.isfile())
                # Handle links: point each link at its parent zone object,
                # so no tzfile instance is duplicated in memory.
# links = {zl.name: self.zones[zl.linkname]
# for zl in tf.getmembers() if zl.islnk() or zl.issym()}
links = dict((zl.name, self.zones[zl.linkname])
for zl in tf.getmembers() if
zl.islnk() or zl.issym())
self.zones.update(links)
else:
self.zones = dict()
# The current API has gettz as a module function, although in fact it taps into
# a stateful class. So as a workaround for now, without changing the API, we
# will create a new "global" class instance the first time a user requests a
# timezone. Ugly, but it adheres to the API.
#
# TODO: deprecate this.
_CLASS_ZONE_INSTANCE = list()
def gettz(name):
if len(_CLASS_ZONE_INSTANCE) == 0:
_CLASS_ZONE_INSTANCE.append(ZoneInfoFile(getzoneinfofile_stream()))
return _CLASS_ZONE_INSTANCE[0].zones.get(name)
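# Hypothetical usage sketch (assumes the bundled tarball contains the key):
#
#     from dateutil.zoneinfo import gettz
#     tz = gettz("America/Chicago")    # a tzfile instance, or None if absent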
def rebuild(filename, tag=None, format="gz", zonegroups=[]):
"""Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar*
filename is the timezone tarball from ftp.iana.org/tz.
"""
tmpdir = tempfile.mkdtemp()
zonedir = os.path.join(tmpdir, "zoneinfo")
moduledir = os.path.dirname(__file__)
try:
with _tar_open(filename) as tf:
for name in zonegroups:
tf.extract(name, tmpdir)
filepaths = [os.path.join(tmpdir, n) for n in zonegroups]
try:
check_call(["zic", "-d", zonedir] + filepaths)
except OSError as e:
if e.errno == 2:
logging.error(
"Could not find zic. Perhaps you need to install "
"libc-bin or some other package that provides it, "
"or it's not in your PATH?")
raise
target = os.path.join(moduledir, _ZONEFILENAME)
with _tar_open(target, "w:%s" % format) as tf:
for entry in os.listdir(zonedir):
entrypath = os.path.join(zonedir, entry)
tf.add(entrypath, entry)
finally:
shutil.rmtree(tmpdir)
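# Hypothetical usage sketch (requires `zic` on the PATH; the tarball name is
# an example matching the zonefile_metadata.json shipped with dateutil):
#
#     rebuild("tzdata2015b.tar.gz", zonegroups=["europe", "northamerica"])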

View file

@ -0,0 +1,26 @@
Metadata-Version: 1.1
Name: python-dateutil
Version: 2.4.2
Summary: Extensions to the standard Python datetime module
Home-page: https://dateutil.readthedocs.org
Author: Yaron de Leeuw
Author-email: me@jarondl.net
License: Simplified BSD
Description:
The dateutil module provides powerful extensions to the
datetime module available in the Python standard library.
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Topic :: Software Development :: Libraries
Requires: six

View file

@ -0,0 +1,25 @@
LICENSE
MANIFEST.in
NEWS
README.rst
setup.cfg
setup.py
updatezinfo.py
zonefile_metadata.json
dateutil/__init__.py
dateutil/easter.py
dateutil/parser.py
dateutil/relativedelta.py
dateutil/rrule.py
dateutil/tz.py
dateutil/tzwin.py
dateutil/test/__init__.py
dateutil/test/test.py
dateutil/zoneinfo/__init__.py
dateutil/zoneinfo/dateutil-zoneinfo.tar.gz
python_dateutil.egg-info/PKG-INFO
python_dateutil.egg-info/SOURCES.txt
python_dateutil.egg-info/dependency_links.txt
python_dateutil.egg-info/requires.txt
python_dateutil.egg-info/top_level.txt
python_dateutil.egg-info/zip-safe

View file

@ -0,0 +1 @@
six >=1.5

View file

@ -0,0 +1 @@
dateutil

View file

@ -0,0 +1,8 @@
[bdist_wheel]
universal = 1
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0

View file

@ -0,0 +1,51 @@
#!/usr/bin/python
from os.path import isfile
import codecs
import os
import re
from setuptools import setup
if isfile("MANIFEST"):
os.unlink("MANIFEST")
TOPDIR = os.path.dirname(__file__) or "."
VERSION = re.search('__version__ = "([^"]+)"',
codecs.open(TOPDIR + "/dateutil/__init__.py",
encoding='utf-8').read()).group(1)
setup(name="python-dateutil",
version=VERSION,
description="Extensions to the standard Python datetime module",
author="Yaron de Leeuw",
author_email="me@jarondl.net",
url="https://dateutil.readthedocs.org",
license="Simplified BSD",
long_description="""
The dateutil module provides powerful extensions to the
datetime module available in the Python standard library.
""",
packages=["dateutil", "dateutil.zoneinfo"],
package_data={"dateutil.zoneinfo": ["dateutil-zoneinfo.tar.gz"]},
zip_safe=True,
requires=["six"],
install_requires=["six >=1.5"], # XXX fix when packaging is sane again
classifiers=[
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Topic :: Software Development :: Libraries',
],
test_suite="dateutil.test.test"
)

View file

@ -0,0 +1,33 @@
#!/usr/bin/env python
import os
import hashlib
import json
import io
from six.moves.urllib import request
from dateutil.zoneinfo import rebuild
METADATA_FILE = "zonefile_metadata.json"
def main():
with io.open(METADATA_FILE, 'r') as f:
metadata = json.load(f)
if not os.path.isfile(metadata['tzdata_file']):
print("Downloading tz file from iana")
request.urlretrieve(os.path.join(metadata['releases_url'],
metadata['tzdata_file']),
metadata['tzdata_file'])
with open(metadata['tzdata_file'], 'rb') as tzfile:
sha_hasher = hashlib.sha512()
sha_hasher.update(tzfile.read())
sha_512_file = sha_hasher.hexdigest()
    assert metadata['tzdata_file_sha512'] == sha_512_file, \
        "SHA512 mismatch for %s" % metadata['tzdata_file']
print("Updating timezone information...")
rebuild(metadata['tzdata_file'], zonegroups=metadata['zonegroups'])
print("Done.")
if __name__ == "__main__":
main()

View file

@ -0,0 +1,21 @@
{
"metadata_version" : 0.1,
"releases_url" : "ftp://ftp.iana.org/tz/releases/",
"tzdata_file" : "tzdata2015b.tar.gz",
"tzdata_file_sha512" : "767782b87e62a8f7a4dbcae595d16a54197c9e04ca974d7016d11f90ebaf2537b804d111f204af9052c68d4670afe0af0af9e5b150867a357fc199bb541368d0",
"zonegroups" : [
"africa",
"antarctica",
"asia",
"australasia",
"europe",
"northamerica",
"southamerica",
"pacificnew",
"etcetera",
"systemv",
"factory",
"backzone",
"backward"]
}

View file

@ -0,0 +1,31 @@
Copyright (c) 2001, 2002 Enthought, Inc.
All rights reserved.
Copyright (c) 2003-2009 SciPy Developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
a. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
b. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
c. Neither the name of the Enthought nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.

View file

@ -0,0 +1,19 @@
# Use the .add_data_files and .add_data_dir methods in the appropriate
# setup.py files to include non-python files, such as documentation,
# data, etc., in the distribution. Avoid using MANIFEST.in for that.
#
include MANIFEST.in
include *.txt
include setupscons.py
include setupegg.py
include setup.py
include scipy/*.py
# Adding scons build related files not found by distutils
recursive-include scipy SConstruct SConscript
# Add documentation: we don't use add_data_dir since we do not want to include
# this at installation, only for sdist-generated tarballs
include doc/Makefile doc/postprocess.py
recursive-include doc/release *
recursive-include doc/source *
recursive-include doc/sphinxext *
prune scipy/special/tests/data/boost

View file

@ -0,0 +1,77 @@
SciPy is an open source library of routines for science and engineering
using Python. It is a community project sponsored by Enthought, Inc.
SciPy originated with code contributions by Travis Oliphant, Pearu
Peterson, and Eric Jones. Travis Oliphant and Eric Jones each contributed
about half the initial code. Pearu Peterson developed f2py, which is
integral to wrapping the many Fortran libraries used in SciPy.
Since then many people have contributed to SciPy through code development,
suggestions, and financial support. Below is a partial list. If you've
been left off, please email the "SciPy Developers List" <scipy-dev@scipy.org>.
Please add names as needed so that we can keep up with all the contributors.
Kumar Appaiah for Dolph Chebyshev window.
Nathan Bell for sparsetools, help with scipy.sparse and scipy.splinalg.
Robert Cimrman for UMFpack wrapper for sparse matrix module.
David M. Cooke for improvements to system_info, and LBFGSB wrapper.
Aric Hagberg for ARPACK wrappers, help with splinalg.eigen.
Chuck Harris for Zeros package in optimize (1d root-finding algorithms).
Prabhu Ramachandran for improvements to gui_thread.
Robert Kern for improvements to stats and bug-fixes.
Jean-Sebastien Roy for fmin_tnc code which he adapted from Stephen Nash's
original Fortran.
Ed Schofield for Maximum entropy and Monte Carlo modules, help with
sparse matrix module.
Travis Vaught for numerous contributions to annual conference and community
web-site and the initial work on stats module clean up.
Jeff Whitaker for Mac OS X support.
David Cournapeau for bug-fixes, refactoring of fftpack and cluster,
implementing the numscons build, building Windows binaries and
adding single precision FFT.
Damian Eads for hierarchical clustering, dendrogram plotting,
distance functions in spatial package, vq documentation.
Anne Archibald for kd-trees and nearest neighbor in scipy.spatial.
Pauli Virtanen for Sphinx documentation generation, online documentation
framework and interpolation bugfixes.
Josef Perktold for major improvements to scipy.stats and its test suite and
fixes and tests to optimize.curve_fit and leastsq.
David Morrill for getting the scoreboard test system up and running.
Louis Luangkesorn for providing multiple tests for the stats module.
Jochen Kupper for the zoom feature in the now-deprecated plt plotting module.
Tiffany Kamm for working on the community web-site.
Mark Koudritsky for maintaining the web-site.
Andrew Straw for help with the web-page, documentation, packaging,
testing and work on the linalg module.
Stefan van der Walt for numerous bug-fixes, testing and documentation.
Jarrod Millman for release management, community coordination, and code
clean up.
Pierre Gerard-Marchant for statistical masked array functionality.
Alan McIntyre for updating SciPy tests to use the new NumPy test framework.
Matthew Brett for work on the Matlab file IO, bug-fixes, and improvements
to the testing framework.
Gary Strangman for the scipy.stats package.
Tiziano Zito for generalized symmetric and hermitian eigenvalue problem
solver.
Chris Burns for bug-fixes.
Per Brodtkorb for improvements to stats distributions.
Neilen Marais for testing and bug-fixing in the ARPACK wrappers.
Johannes Loehnert and Bart Vandereycken for fixes in the linalg
module.
David Huard for improvements to the interpolation interface.
David Warde-Farley for converting the ndimage docs to ReST.
Uwe Schmitt for wrapping non-negative least-squares.
Ondrej Certik for Debian packaging.
Paul Ivanov for porting Numeric-style C code to the new NumPy API.
Ariel Rokem for contributions on percentileofscore fixes and tests.
Yosef Meller for tests in the optimization module.
Institutions
------------
Enthought for providing resources and finances for development of SciPy.
Brigham Young University for providing resources for students to work on SciPy.
Agilent, which gave a generous donation for support of SciPy.
UC Berkeley for providing travel money and hosting numerous sprints.
The University of Stellenbosch for funding the development of
the SciKits portal.

View file

@ -0,0 +1,163 @@
# Makefile for Sphinx documentation
#
PYVER =
PYTHON = python$(PYVER)
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = LANG=C sphinx-build
PAPER =
NEED_AUTOSUMMARY = $(shell $(PYTHON) -c 'import sphinx; print sphinx.__version__ < "0.7" and "1" or ""')
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html web pickle htmlhelp latex changes linkcheck \
dist dist-build
#------------------------------------------------------------------------------
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " pickle to make pickle files (usable by e.g. sphinx-web)"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " changes to make an overview over all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " dist PYVER=... to make a distribution-ready tree"
@echo " upload USER=... to upload results to docs.scipy.org"
clean:
-rm -rf build/* source/generated
#------------------------------------------------------------------------------
# Automated generation of all documents
#------------------------------------------------------------------------------
# Build the current scipy version, and extract docs from it.
# We have to be careful of some issues:
#
# - Everything must be done using the same Python version
# - We must use eggs (otherwise they might override PYTHONPATH on import).
# - Different versions of easy_install install to different directories (!)
#
INSTALL_DIR = $(CURDIR)/build/inst-dist/
INSTALL_PPH = $(INSTALL_DIR)/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/lib/python$(PYVER)/dist-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/dist-packages
DIST_VARS=PYTHON="PYTHONPATH=$(INSTALL_PPH):$$PYTHONPATH python$(PYVER)" SPHINXBUILD="LANG=C PYTHONPATH=$(INSTALL_PPH):$$PYTHONPATH python$(PYVER) `which sphinx-build`"
UPLOAD_TARGET = $(USER)@docs.scipy.org:/home/docserver/www-root/doc/scipy/
upload:
@test -e build/dist || { echo "make dist is required first"; exit 1; }
@test output-is-fine -nt build/dist || { \
echo "Review the output in build/dist, and do 'touch output-is-fine' before uploading."; exit 1; }
rsync -r -z --delete-after -p \
$(if $(shell test -f build/dist/scipy-ref.pdf && echo "y"),, \
--exclude '**-ref.pdf' --exclude '**-user.pdf') \
$(if $(shell test -f build/dist/scipy-chm.zip && echo "y"),, \
--exclude '**-chm.zip') \
build/dist/ $(UPLOAD_TARGET)
dist:
make $(DIST_VARS) real-dist
real-dist: dist-build html
test -d build/latex || make latex
make -C build/latex all-pdf
-test -d build/htmlhelp || make htmlhelp-build
-rm -rf build/dist
mkdir -p build/dist
cp -r build/html build/dist/reference
touch build/dist/index.html
perl -pi -e 's#^\s*(<li><a href=".*?">SciPy.*?Reference Guide.*?&raquo;</li>)\s*$$#<li><a href="/">Numpy and Scipy Documentation</a> &raquo;</li> $$1#;' build/dist/*.html build/dist/*/*.html build/dist/*/*/*.html
(cd build/html && zip -9qr ../dist/scipy-html.zip .)
cp build/latex/scipy*.pdf build/dist
-zip build/dist/scipy-chm.zip build/htmlhelp/scipy.chm
cd build/dist && tar czf ../dist.tar.gz *
chmod ug=rwX,o=rX -R build/dist
find build/dist -type d -print0 | xargs -0r chmod g+s
dist-build:
rm -f ../dist/*.egg
cd .. && $(PYTHON) setupegg.py bdist_egg
install -d $(subst :, ,$(INSTALL_PPH))
$(PYTHON) `which easy_install` --prefix=$(INSTALL_DIR) ../dist/*.egg
#------------------------------------------------------------------------------
# Basic Sphinx generation rules for different formats
#------------------------------------------------------------------------------
generate: build/generate-stamp
build/generate-stamp: $(wildcard source/*.rst)
mkdir -p build
ifeq ($(NEED_AUTOSUMMARY),1)
$(PYTHON) \
./sphinxext/autosummary_generate.py source/*.rst \
-p dump.xml -o source/generated
endif
touch build/generate-stamp
html: generate
mkdir -p build/html build/doctrees
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html
$(PYTHON) postprocess.py html build/html/*.html
@echo
@echo "Build finished. The HTML pages are in build/html."
pickle: generate
mkdir -p build/pickle build/doctrees
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) build/pickle
@echo
@echo "Build finished; now you can process the pickle files or run"
@echo " sphinx-web build/pickle"
@echo "to start the sphinx-web server."
web: pickle
htmlhelp: generate
mkdir -p build/htmlhelp build/doctrees
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in build/htmlhelp."
htmlhelp-build: htmlhelp build/htmlhelp/scipy.chm
%.chm: %.hhp
-hhc.exe $^
latex: generate
mkdir -p build/latex build/doctrees
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex
$(PYTHON) postprocess.py tex build/latex/*.tex
perl -pi -e 's/\t(latex.*|pdflatex) (.*)/\t-$$1 -interaction batchmode $$2/' build/latex/Makefile
@echo
@echo "Build finished; the LaTeX files are in build/latex."
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
"run these through (pdf)latex."
coverage: build
mkdir -p build/coverage build/doctrees
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) build/coverage
@echo "Coverage finished; see c.txt and python.txt in build/coverage"
changes: generate
mkdir -p build/changes build/doctrees
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes
@echo
@echo "The overview file is in build/changes."
linkcheck: generate
mkdir -p build/linkcheck build/doctrees
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in build/linkcheck/output.txt."

View file

@ -0,0 +1,55 @@
#!/usr/bin/env python
"""
%prog MODE FILES...
Post-processes HTML and Latex files output by Sphinx.
MODE is either 'html' or 'tex'.
"""
import re, optparse
def main():
p = optparse.OptionParser(__doc__)
options, args = p.parse_args()
if len(args) < 1:
p.error('no mode given')
mode = args.pop(0)
if mode not in ('html', 'tex'):
p.error('unknown mode %s' % mode)
for fn in args:
f = open(fn, 'r')
try:
if mode == 'html':
lines = process_html(fn, f.readlines())
elif mode == 'tex':
lines = process_tex(f.readlines())
finally:
f.close()
f = open(fn, 'w')
f.write("".join(lines))
f.close()
def process_html(fn, lines):
return lines
def process_tex(lines):
"""
Remove unnecessary section titles from the LaTeX file,
and convert UTF-8 non-breaking spaces to Latex nbsps.
"""
new_lines = []
for line in lines:
if re.match(r'^\\(section|subsection|subsubsection|paragraph|subparagraph){(numpy|scipy)\.', line):
pass # skip!
else:
new_lines.append(line)
return new_lines
if __name__ == "__main__":
main()

View file

@ -0,0 +1,348 @@
=========================
SciPy 0.7.0 Release Notes
=========================
.. contents::
SciPy 0.7.0 is the culmination of 16 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and
better documentation. There have been a number of deprecations and
API changes in this release, which are documented below. All users
are encouraged to upgrade to this release, as there are a large number
of bug-fixes and optimizations. Moreover, our development attention
will now shift to bug-fix releases on the 0.7.x branch, and on adding
new features on the development trunk. This release requires Python
2.4 or 2.5 and NumPy 1.2 or greater.
Please note that SciPy is still considered to have "Beta" status, as
we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a
major milestone in the development of SciPy, after which changing the
package structure or API will be much more difficult. Whilst these
pre-1.0 releases are considered to have "Beta" status, we are
committed to making them as bug-free as possible. For example, in
addition to fixing numerous bugs in this release, we have also doubled
the number of unit tests since the last release.
However, until the 1.0 release, we are aggressively reviewing and
refining the functionality, organization, and interface. This is being
done in an effort to make the package as coherent, intuitive, and
useful as possible. To achieve this, we need help from the community
of users. Specifically, we need feedback regarding all aspects of the
project - everything - from which algorithms we implement, to details
about our function's call signatures.
Over the last year, we have seen a rapid increase in community
involvement, and numerous infrastructure improvements to lower the
barrier to contributions (e.g., more explicit coding standards,
improved testing infrastructure, better documentation tools). Over
the next year, we hope to see this trend continue and invite everyone
to become more involved.
Python 2.6 and 3.0
------------------
A significant amount of work has gone into making SciPy compatible
with Python 2.6; however, there are still some issues in this regard.
The main issue with 2.6 support is NumPy. On UNIX (including Mac OS
X), NumPy 1.2.1 mostly works, with a few caveats. On Windows, there
are problems related to the compilation process. The upcoming NumPy
1.3 release will fix these problems. Any remaining issues with 2.6
support for SciPy 0.7 will be addressed in a bug-fix release.
Python 3.0 is not supported at all; it requires NumPy to be ported to
Python 3.0. This requires immense effort, since a lot of C code has
to be ported. The transition to 3.0 is still under consideration;
currently, we don't have any timeline or roadmap for this transition.
Major documentation improvements
--------------------------------
SciPy documentation is greatly improved; you can view an HTML reference
manual `online <http://docs.scipy.org/>`__ or download it as a PDF
file. The new reference guide was built using the popular `Sphinx tool
<http://sphinx.pocoo.org/>`__.
This release also includes an updated tutorial, which hadn't been
available since SciPy was ported to NumPy in 2005. Though not
comprehensive, the tutorial shows how to use several essential parts
of Scipy. It also includes the ``ndimage`` documentation from the
``numarray`` manual.
Nevertheless, more effort is needed on the documentation front.
Luckily, contributing to Scipy documentation is now easier than
before: if you find that a part of it requires improvements, and want
to help us out, please register a user name in our web-based
documentation editor at http://docs.scipy.org/ and correct the issues.
Running Tests
-------------
NumPy 1.2 introduced a new testing framework based on `nose
<http://somethingaboutorange.com/mrl/projects/nose/>`__. Starting with
this release, SciPy now uses the new NumPy test framework as well.
Taking advantage of the new testing framework requires ``nose``
version 0.10, or later. One major advantage of the new framework is
that it greatly simplifies writing unit tests - which has already
paid off, given the rapid increase in tests. To run the full test
suite::
    >>> import scipy
    >>> scipy.test('full')
For more information, please see `The NumPy/SciPy Testing Guide
<http://projects.scipy.org/scipy/numpy/wiki/TestingGuidelines>`__.
We have also greatly improved our test coverage. There were just over
2,000 unit tests in the 0.6.0 release; this release nearly doubles
that number, with just over 4,000 unit tests.
Building SciPy
--------------
Support for NumScons has been added. NumScons is a tentative new build
system for NumPy/SciPy, using `SCons <http://www.scons.org/>`__ at its
core.
SCons is a next-generation build system, intended to replace the
venerable ``Make`` with the integrated functionality of
``autoconf``/``automake`` and ``ccache``. Scons is written in Python
and its configuration files are Python scripts. NumScons is meant to
replace NumPy's custom version of ``distutils``, providing more
advanced functionality, such as ``autoconf``, improved fortran
support, more tools, and support for ``numpy.distutils``/``scons``
cooperation.
Sandbox Removed
---------------
While porting SciPy to NumPy in 2005, several packages and modules
were moved into ``scipy.sandbox``. The sandbox was a staging ground
for packages that were undergoing rapid development and whose APIs
were in flux. It was also a place where broken code could live. The
sandbox has served its purpose well, but was starting to create
confusion. Thus ``scipy.sandbox`` was removed. Most of the code was
moved into ``scipy``, some code was made into a ``scikit``, and the
remaining code was just deleted, as the functionality had been
replaced by other code.
Sparse Matrices
---------------
Sparse matrices have seen extensive improvements. There is now
support for integer dtypes such as ``int8``, ``uint32``, etc. Two new
sparse formats were added:
* new class ``dia_matrix`` : the sparse DIAgonal format
* new class ``bsr_matrix`` : the Block CSR format
Several new sparse matrix construction functions were added:
* ``sparse.kron`` : sparse Kronecker product
* ``sparse.bmat`` : sparse version of ``numpy.bmat``
* ``sparse.vstack`` : sparse version of ``numpy.vstack``
* ``sparse.hstack`` : sparse version of ``numpy.hstack``
Extraction of submatrices and nonzero values has been added:
* ``sparse.tril`` : extract lower triangle
* ``sparse.triu`` : extract upper triangle
* ``sparse.find`` : nonzero values and their indices
``csr_matrix`` and ``csc_matrix`` now support slicing and fancy
indexing (e.g., ``A[1:3, 4:7]`` and ``A[[3,2,6,8],:]``). Conversions
among all sparse formats are now possible:
* using member functions such as ``.tocsr()`` and ``.tolil()``
* using the ``.asformat()`` member function, e.g. ``A.asformat('csr')``
* using constructors ``A = lil_matrix([[1,2]]); B = csr_matrix(A)``
All sparse constructors now accept dense matrices and lists of lists.
For example:
* ``A = csr_matrix( rand(3,3) )`` and ``B = lil_matrix( [[1,2],[3,4]] )``
The handling of diagonals in the ``spdiags`` function has been changed.
It now agrees with the MATLAB(TM) function of the same name.
Numerous efficiency improvements to format conversions and sparse
matrix arithmetic have been made. Finally, this release contains
numerous bugfixes.
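
As a quick sketch of the constructors and extraction functions described
above (the matrices are made-up toy data, not from the original notes)::

    >>> import numpy as np
    >>> from scipy import sparse
    >>> A = sparse.lil_matrix([[1, 2], [3, 4]])
    >>> B = A.tocsr()                      # format conversion
    >>> K = sparse.kron(B, np.eye(2))      # sparse Kronecker product
    >>> L = sparse.tril(K)                 # lower triangle
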
Statistics package
------------------
Statistical functions for masked arrays have been added, and are
accessible through ``scipy.stats.mstats``. The functions are similar
to their counterparts in ``scipy.stats`` but they have not yet been
verified for identical interfaces and algorithms.
Several bugs were fixed for statistical functions; of those,
``kstest`` and ``percentileofscore`` gained new keyword arguments.
Added deprecation warning for ``mean``, ``median``, ``var``, ``std``,
``cov``, and ``corrcoef``. These functions should be replaced by their
numpy counterparts. Note, however, that some of the default options
differ between the ``scipy.stats`` and numpy versions of these
functions.
Numerous bug fixes to ``stats.distributions``: all generic methods now
work correctly, several methods in individual distributions were
corrected. However, a few issues remain with higher moments (``skew``,
``kurtosis``) and entropy. The maximum likelihood estimator, ``fit``,
does not work out-of-the-box for some distributions - in some cases,
starting values have to be carefully chosen, in other cases, the
generic implementation of the maximum likelihood method might not be
the numerically appropriate estimation method.
We expect more bugfixes, increases in numerical precision and
enhancements in the next release of scipy.
Reworking of IO package
-----------------------
The IO code in both NumPy and SciPy is being extensively
reworked. NumPy will be where basic code for reading and writing NumPy
arrays is located, while SciPy will house file readers and writers for
various data formats (data, audio, video, images, matlab, etc.).
Several functions in ``scipy.io`` have been deprecated and will be
removed in the 0.8.0 release including ``npfile``, ``save``, ``load``,
``create_module``, ``create_shelf``, ``objload``, ``objsave``,
``fopen``, ``read_array``, ``write_array``, ``fread``, ``fwrite``,
``bswap``, ``packbits``, ``unpackbits``, and ``convert_objectarray``.
Some of these functions have been replaced by NumPy's raw reading and
writing capabilities, memory-mapping capabilities, or array methods.
Others have been moved from SciPy to NumPy, since basic array reading
and writing capability is now handled by NumPy.
The Matlab (TM) file readers/writers have a number of improvements:
* default version 5
* v5 writers for structures, cell arrays, and objects
* v5 readers/writers for function handles and 64-bit integers
* new struct_as_record keyword argument to ``loadmat``, which loads
struct arrays in matlab as record arrays in numpy
* string arrays have ``dtype='U...'`` instead of ``dtype=object``
* ``loadmat`` no longer squeezes singleton dimensions, i.e.
``squeeze_me=False`` by default
New Hierarchical Clustering module
----------------------------------
This module adds new hierarchical clustering functionality to the
``scipy.cluster`` package. The function interfaces are similar to the
functions provided by MATLAB(TM)'s Statistics Toolbox, to ease
migration to the NumPy/SciPy framework. Linkage methods
implemented include single, complete, average, weighted, centroid,
median, and ward.
In addition, several functions are provided for computing
inconsistency statistics, cophenetic distance, and maximum distance
between descendants. The ``fcluster`` and ``fclusterdata`` functions
transform a hierarchical clustering into a set of flat clusters. Since
these flat clusters are generated by cutting the tree into a forest of
trees, the ``leaders`` function takes a linkage and a flat clustering,
and finds the root of each tree in the forest. The ``ClusterNode``
class represents a hierarchical clustering as a field-navigable tree
object. ``to_tree`` converts a matrix-encoded hierarchical clustering
to a ``ClusterNode`` object. Routines for converting between MATLAB
and SciPy linkage encodings are provided. Finally, a ``dendrogram``
function plots hierarchical clusterings as a dendrogram, using
matplotlib.
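
For illustration, a minimal sketch of the new interfaces (random toy data,
not from the original notes)::

    >>> import numpy as np
    >>> from scipy.cluster.hierarchy import linkage, fcluster
    >>> X = np.random.rand(10, 2)            # 10 observations in 2-D
    >>> Z = linkage(X, method='single')      # single-linkage clustering
    >>> labels = fcluster(Z, t=0.5, criterion='distance')
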
New Spatial package
-------------------
The new spatial package contains a collection of spatial algorithms
and data structures, useful for spatial statistics and clustering
applications. It includes rapidly compiled code for computing exact
and approximate nearest neighbors, as well as a pure-python kd-tree
with the same interface, but that supports annotation and a variety of
other algorithms. The API for both modules may change somewhat, as
user requirements become clearer.
It also includes a ``distance`` module, containing a collection of
distance and dissimilarity functions for computing distances between
vectors, which is useful for spatial statistics, clustering, and
kd-trees. Distance and dissimilarity functions provided include
Bray-Curtis, Canberra, Chebyshev, City Block, Cosine, Dice, Euclidean,
Hamming, Jaccard, Kulsinski, Mahalanobis, Matching, Minkowski,
Rogers-Tanimoto, Russell-Rao, Squared Euclidean, Standardized
Euclidean, Sokal-Michener, Sokal-Sneath, and Yule.
The ``pdist`` function computes pairwise distance between all
unordered pairs of vectors in a set of vectors. The ``cdist`` function
computes the distance between all pairs of vectors in the Cartesian product of two
sets of vectors. Pairwise distance matrices are stored in condensed
form; only the upper triangle is stored. ``squareform`` converts
distance matrices between square and condensed forms.
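
A minimal sketch of the distance interface (hand-picked toy points)::

    >>> import numpy as np
    >>> from scipy.spatial.distance import pdist, squareform
    >>> X = np.array([[0., 0.], [3., 4.], [6., 8.]])
    >>> d = pdist(X)        # condensed form: array([  5.,  10.,   5.])
    >>> D = squareform(d)   # full 3x3 symmetric distance matrix
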
Reworked fftpack package
------------------------
FFTW2, FFTW3, MKL and DJBFFT wrappers have been removed. Only (NETLIB)
fftpack remains. By focusing on one backend, we hope to add new
features - like float32 support - more easily.
New Constants package
---------------------
``scipy.constants`` provides a collection of physical constants and
conversion factors. These constants are taken from CODATA Recommended
Values of the Fundamental Physical Constants: 2002. They may be found
at physics.nist.gov/constants. The values are stored in the dictionary
physical_constants as a tuple containing the value, the units, and the
relative precision - in that order. All constants are in SI units,
unless otherwise stated. Several helper functions are provided.
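
For example, a constant can be looked up together with its units::

    >>> from scipy import constants
    >>> constants.c                                    # speed of light, m/s
    299792458.0
    >>> constants.physical_constants['electron mass'][1]
    'kg'
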
New Radial Basis Function module
--------------------------------
``scipy.interpolate`` now contains a Radial Basis Function module.
Radial basis functions can be used for smoothing/interpolating
scattered data in n-dimensions, but should be used with caution for
extrapolation outside of the observed data range.
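
A minimal sketch of smoothing scattered 1-D data (made-up sample points)::

    >>> import numpy as np
    >>> from scipy.interpolate import Rbf
    >>> x = np.linspace(0, 10, 9)
    >>> y = np.sin(x)
    >>> rbf = Rbf(x, y)                      # default multiquadric basis
    >>> yi = rbf(np.linspace(0, 10, 101))    # evaluate on a finer grid
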
New complex ODE integrator
--------------------------
``scipy.integrate.ode`` now contains a wrapper for the ZVODE
complex-valued ordinary differential equation solver (by Peter
N. Brown, Alan C. Hindmarsh, and George D. Byrne).
New generalized symmetric and hermitian eigenvalue problem solver
-----------------------------------------------------------------
``scipy.linalg.eigh`` now contains wrappers for more LAPACK symmetric
and hermitian eigenvalue problem solvers. Users can now solve
generalized problems, select a range of eigenvalues only, and choose
to use a faster algorithm at the expense of increased memory
usage. The signature of ``scipy.linalg.eigh`` changed accordingly.
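
For instance, a generalized problem A x = lambda B x can now be solved
directly (toy matrices, chosen so the eigenvalues are 1 and 3)::

    >>> import numpy as np
    >>> from scipy.linalg import eigh
    >>> A = np.array([[2., 1.], [1., 2.]])
    >>> B = np.eye(2)
    >>> w = eigh(A, B, eigvals_only=True)    # array([ 1.,  3.])
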
Bug fixes in the interpolation package
--------------------------------------
The shape of return values from ``scipy.interpolate.interp1d`` used to
be incorrect, if interpolated data had more than 2 dimensions and the
axis keyword was set to a non-default value. This has been fixed.
Moreover, ``interp1d`` returns now a scalar (0D-array) if the input
is a scalar. Users of ``scipy.interpolate.interp1d`` may need to
revise their code if it relies on the previous behavior.
Weave clean up
--------------
There were numerous improvements to ``scipy.weave``. ``blitz++`` was
relicensed by the author to be compatible with the SciPy license.
``wx_spec.py`` was removed.
Known problems
--------------
Here are known problems with scipy 0.7.0:
* weave test failures on windows: those are known, and are being revised.
* weave test failure with gcc 4.3 (std::labs): this is a gcc 4.3 bug. A
workaround is to add #include <cstdlib> in
scipy/weave/blitz/blitz/funcs.h (line 27). You can make the change in
the installed scipy (in site-packages).

View file

@ -0,0 +1,263 @@
=========================
SciPy 0.8.0 Release Notes
=========================
.. contents::
SciPy 0.8.0 is the culmination of 17 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and
better documentation. There have been a number of deprecations and
API changes in this release, which are documented below. All users
are encouraged to upgrade to this release, as there are a large number
of bug-fixes and optimizations. Moreover, our development attention
will now shift to bug-fix releases on the 0.8.x branch, and on adding
new features on the development trunk. This release requires Python
2.4 - 2.6 and NumPy 1.4.1 or greater.
Please note that SciPy is still considered to have "Beta" status, as
we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a
major milestone in the development of SciPy, after which changing the
package structure or API will be much more difficult. Whilst these
pre-1.0 releases are considered to have "Beta" status, we are
committed to making them as bug-free as possible.
However, until the 1.0 release, we are aggressively reviewing and
refining the functionality, organization, and interface. This is being
done in an effort to make the package as coherent, intuitive, and
useful as possible. To achieve this, we need help from the community
of users. Specifically, we need feedback regarding all aspects of the
project - everything - from which algorithms we implement, to details
about our function's call signatures.
Python 3
========
Python 3 compatibility is planned and is currently technically
feasible, since Numpy has been ported. However, since the Python 3
compatible Numpy 1.5 has not been released yet, support for Python 3
in Scipy is not yet included in Scipy 0.8. SciPy 0.9, planned for fall
2010, will very likely include experimental support for Python 3.
Major documentation improvements
================================
SciPy documentation is greatly improved.
Deprecated features
===================
Swapping inputs for correlation functions (scipy.signal)
--------------------------------------------------------
This concerns correlate, correlate2d, convolve, and convolve2d. If the second input is
larger than the first input, the inputs are swapped before calling the
underlying computation routine. This behavior is deprecated, and will be
removed in scipy 0.9.0.
Obsolete code deprecated (scipy.misc)
-------------------------------------
The modules `helpmod`, `ppimport` and `pexec` from `scipy.misc` are deprecated.
They will be removed from SciPy in version 0.9.
Additional deprecations
-----------------------
* linalg: The function `solveh_banded` currently returns a tuple containing
the Cholesky factorization and the solution to the linear system. In
SciPy 0.9, the return value will be just the solution.
* The function `constants.codata.find` will generate a DeprecationWarning.
In Scipy version 0.8.0, the keyword argument 'disp' was added to the
function, with the default value 'True'. In 0.9.0, the default will be
'False'.
* The `qshape` keyword argument of `signal.chirp` is deprecated. Use
the argument `vertex_zero` instead.
* Passing the coefficients of a polynomial as the argument `f0` to
`signal.chirp` is deprecated. Use the function `signal.sweep_poly`
instead.
* The `io.recaster` module has been deprecated and will be removed in 0.9.0.
New features
============
DCT support (scipy.fftpack)
---------------------------
New real transforms have been added, namely dct and idct for the Discrete Cosine
Transform; type I, II and III are available.
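
For example (a made-up three-point signal; with the default unnormalized
transforms, the DCT-II/DCT-III round trip scales the input by 2*N)::

    >>> from scipy.fftpack import dct, idct
    >>> x = [4.0, 3.0, 5.0]
    >>> y = dct(x, type=2)
    >>> idct(y, type=2) / (2 * len(x))       # recovers x
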
Single precision support for fft functions (scipy.fftpack)
----------------------------------------------------------
fft functions can now handle single precision inputs as well: fft(x) will
return a single precision array if x is single precision.
At the moment, for FFT sizes that are not composites of 2, 3, and 5, the
transform is computed internally in double precision to avoid rounding error in
FFTPACK.
Correlation functions now implement the usual definition (scipy.signal)
-----------------------------------------------------------------------
The outputs should now correspond to their matlab and R counterparts, and do
what most people expect if the old_behavior=False argument is passed:
* correlate, convolve and their 2d counterparts do not swap their inputs
depending on their relative shape anymore;
* correlation functions now conjugate their second argument while computing
the sliding sum-products, which corresponds to the usual definition of
correlation.
Additions and modification to LTI functions (scipy.signal)
----------------------------------------------------------
* The functions `impulse2` and `step2` were added to `scipy.signal`.
They use the function `scipy.signal.lsim2` to compute the impulse and
step response of a system, respectively.
* The function `scipy.signal.lsim2` was changed to pass any additional
keyword arguments to the ODE solver.
Improved waveform generators (scipy.signal)
-------------------------------------------
Several improvements to the `chirp` function in `scipy.signal` were made:
* The waveform generated when `method="logarithmic"` was corrected; it
now generates a waveform that is also known as an "exponential" or
"geometric" chirp. (See http://en.wikipedia.org/wiki/Chirp.)
* A new `chirp` method, "hyperbolic", was added.
* Instead of the keyword `qshape`, `chirp` now uses the keyword
`vertex_zero`, a boolean.
* `chirp` no longer handles an arbitrary polynomial. This functionality
has been moved to a new function, `sweep_poly`.
A new function, `sweep_poly`, was added.
New functions and other changes in scipy.linalg
-----------------------------------------------
The functions `cho_solve_banded`, `circulant`, `companion`, `hadamard` and
`leslie` were added to `scipy.linalg`.
The function `block_diag` was enhanced to accept scalar and 1D arguments,
along with the usual 2D arguments.
New function and changes in scipy.optimize
------------------------------------------
The `curve_fit` function has been added; it takes a function and uses
non-linear least squares to fit that to the provided data.
The `leastsq` and `fsolve` functions now return an array of size one instead of
a scalar when solving for a single parameter.
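
A minimal sketch of `curve_fit` (synthetic, noise-free data)::

    >>> import numpy as np
    >>> from scipy.optimize import curve_fit
    >>> def f(x, a, b):
    ...     return a * x + b
    >>> xdata = np.array([0., 1., 2., 3.])
    >>> ydata = 2.0 * xdata + 1.0
    >>> popt, pcov = curve_fit(f, xdata, ydata)   # popt is close to [2., 1.]
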
New sparse least squares solver
-------------------------------
The `lsqr` function was added to `scipy.sparse`. `This routine
<http://www.stanford.edu/group/SOL/software/lsqr.html>`_ finds a
least-squares solution to a large, sparse, linear system of equations.
ARPACK-based sparse SVD
-----------------------
A naive implementation of SVD for sparse matrices is available in
scipy.sparse.linalg.eigen.arpack. It is based on using a symmetric solver on
<A, A>, and as such may not be very precise.
Alternative behavior available for `scipy.constants.find`
---------------------------------------------------------
The keyword argument `disp` was added to the function `scipy.constants.find`,
with the default value `True`. When `disp` is `True`, the behavior is the
same as in Scipy version 0.7. When `False`, the function returns the list of
keys instead of printing them. (In SciPy version 0.9, the default will be
reversed.)
Incomplete sparse LU decompositions
-----------------------------------
Scipy now wraps SuperLU version 4.0, which supports incomplete sparse LU
decompositions. These can be accessed via `scipy.sparse.linalg.spilu`.
Upgrade to SuperLU 4.0 also fixes some known bugs.
Faster matlab file reader and default behavior change
------------------------------------------------------
We've rewritten the matlab file reader in Cython and it should now read
matlab files at around the same speed that Matlab does.
The reader reads matlab named and anonymous functions, but it can't
write them.
Until scipy 0.8.0 we have returned arrays of matlab structs as numpy
object arrays, where the objects have attributes named for the struct
fields. As of 0.8.0, we return matlab structs as numpy structured
arrays. You can get the older behavior by using the optional
``struct_as_record=False`` keyword argument to `scipy.io.loadmat` and
friends.
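
For example (``results.mat`` is a hypothetical file containing a struct
array)::

    >>> from scipy.io import loadmat
    >>> new = loadmat('results.mat')                          # structured arrays
    >>> old = loadmat('results.mat', struct_as_record=False)  # object arrays
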
There is an inconsistency in the matlab file writer, in that it writes
numpy 1D arrays as column vectors in matlab 5 files, and row vectors in
matlab 4 files. We will change this in the next version, so both write
row vectors. There is a `FutureWarning` when calling the writer to warn
of this change; for now we suggest using the ``oned_as='row'`` keyword
argument to `scipy.io.savemat` and friends.
Faster evaluation of orthogonal polynomials
-------------------------------------------
Values of orthogonal polynomials can be evaluated with new vectorized functions
in `scipy.special`: `eval_legendre`, `eval_chebyt`, `eval_chebyu`,
`eval_chebyc`, `eval_chebys`, `eval_jacobi`, `eval_laguerre`,
`eval_genlaguerre`, `eval_hermite`, `eval_hermitenorm`,
`eval_gegenbauer`, `eval_sh_legendre`, `eval_sh_chebyt`,
`eval_sh_chebyu`, `eval_sh_jacobi`. This is faster than constructing the
full coefficient representation of the polynomials, which was previously the
only available way.
Note that the previous orthogonal polynomial routines will now also invoke this
feature, when possible.
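
For example, evaluating a Chebyshev polynomial directly (value chosen so it
is easy to check by hand)::

    >>> from scipy.special import eval_chebyt
    >>> eval_chebyt(3, 0.5)    # T_3(x) = 4*x**3 - 3*x, so T_3(0.5) == -1.0
    -1.0
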
Lambert W function
------------------
`scipy.special.lambertw` can now be used for evaluating the Lambert W
function.
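
For example::

    >>> from scipy.special import lambertw
    >>> w = lambertw(1)    # the omega constant; W(1) is about 0.567143
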
Improved hypergeometric 2F1 function
------------------------------------
Implementation of `scipy.special.hyp2f1` for real parameters was revised.
The new version should produce accurate values for all real parameters.
More flexible interface for Radial basis function interpolation
---------------------------------------------------------------
The `scipy.interpolate.Rbf` class now accepts a callable as input for the
"function" argument, in addition to the built-in radial basis functions which
can be selected with a string argument.
Removed features
================
scipy.stsci: the package was removed
The module `scipy.misc.limits` was removed.
scipy.io
--------
The IO code in both NumPy and SciPy is being extensively
reworked. NumPy will be where basic code for reading and writing NumPy
arrays is located, while SciPy will house file readers and writers for
various data formats (data, audio, video, images, matlab, etc.).
Several functions in `scipy.io` are removed in the 0.8.0 release, including:
`npfile`, `save`, `load`, `create_module`, `create_shelf`,
`objload`, `objsave`, `fopen`, `read_array`, `write_array`,
`fread`, `fwrite`, `bswap`, `packbits`, `unpackbits`, and
`convert_objectarray`. Some of these functions have been replaced by NumPy's
raw reading and writing capabilities, memory-mapping capabilities, or array
methods. Others have been moved from SciPy to NumPy, since basic array reading
and writing capability is now handled by NumPy.

Binary file not shown.


View file

@ -0,0 +1,23 @@
{% extends "!autosummary/class.rst" %}
{% block methods %}
{% if methods %}
   .. HACK
   .. autosummary::
      :toctree:
   {% for item in methods %}
      {{ name }}.{{ item }}
   {%- endfor %}
{% endif %}
{% endblock %}
{% block attributes %}
{% if attributes %}
   .. HACK
   .. autosummary::
      :toctree:
   {% for item in attributes %}
      {{ name }}.{{ item }}
   {%- endfor %}
{% endif %}
{% endblock %}

View file

@ -0,0 +1,5 @@
<h3>Resources</h3>
<ul>
<li><a href="http://scipy.org/">Scipy.org website</a></li>
<li>&nbsp;</li>
</ul>

View file

@ -0,0 +1,14 @@
{% extends "!layout.html" %}
{% block sidebarsearch %}
{%- if sourcename %}
<ul class="this-page-menu">
{%- if 'generated/' in sourcename %}
<li><a href="/scipy/docs/{{ sourcename.replace('generated/', '').replace('.txt', '') |e }}">{{_('Edit page')}}</a></li>
{%- else %}
<li><a href="/scipy/docs/scipy-docs/{{ sourcename.replace('.txt', '.rst') |e }}">{{_('Edit page')}}</a></li>
{%- endif %}
</ul>
{%- endif %}
{{ super() }}
{% endblock %}

View file

@ -0,0 +1,10 @@
========================================================
Hierarchical clustering (:mod:`scipy.cluster.hierarchy`)
========================================================
.. warning::

   This documentation is work-in-progress and unorganized.

.. automodule:: scipy.cluster.hierarchy
   :members:

View file

@ -0,0 +1,10 @@
=========================================
Clustering package (:mod:`scipy.cluster`)
=========================================
.. toctree::

   cluster.hierarchy
   cluster.vq

.. automodule:: scipy.cluster

View file

@ -0,0 +1,6 @@
====================================================================
K-means clustering and vector quantization (:mod:`scipy.cluster.vq`)
====================================================================
.. automodule:: scipy.cluster.vq
   :members:

View file

@ -0,0 +1,286 @@
# -*- coding: utf-8 -*-
import sys, os, re
# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
sys.path.insert(0, os.path.abspath('../sphinxext'))
# Check Sphinx version
import sphinx
if sphinx.__version__ < "0.5":
raise RuntimeError("Sphinx 0.5.dev or newer required")
# -----------------------------------------------------------------------------
# General configuration
# -----------------------------------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.pngmath', 'numpydoc',
'sphinx.ext.intersphinx', 'sphinx.ext.coverage', 'plot_directive']
if sphinx.__version__ >= "0.7":
extensions.append('sphinx.ext.autosummary')
else:
extensions.append('autosummary')
extensions.append('only_directives')
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General substitutions.
project = 'SciPy'
copyright = '2008-2009, The Scipy community'
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
import scipy
# The short X.Y version (including the .devXXXX suffix if present)
version = re.sub(r'^(\d+\.\d+)\.\d+(.*)', r'\1\2', scipy.__version__)
if 'dev' in version:
# retain the .dev suffix, but clean it up
version = re.sub(r'(\.dev\d*).*?$', r'\1', version)
else:
# strip all other suffixes
version = re.sub(r'^(\d+\.\d+).*?$', r'\1', version)
# The full version, including alpha/beta/rc tags.
release = scipy.__version__
print "Scipy (VERSION %s) (RELEASE %s)" % (version, release)
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# The reST default role (used for this markup: `text`) to use for all documents.
default_role = "autolink"
# List of directories, relative to source directories, that shouldn't be searched
# for source files.
exclude_dirs = []
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = False
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -----------------------------------------------------------------------------
# HTML output
# -----------------------------------------------------------------------------
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
html_style = 'scipy.css'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = "%s v%s Reference Guide (DRAFT)" % (project, version)
# The name of an image file (within the static path) to place at the top of
# the sidebar.
html_logo = '_static/scipyshiny_small.png'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'
# Correct index page
#html_index = "index"
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'index': 'indexsidebar.html'
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
html_additional_pages = {}
# If false, no module index is generated.
html_use_modindex = True
# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".html").
html_file_suffix = '.html'
# Output file base name for HTML help builder.
htmlhelp_basename = 'scipy'
# Pngmath should try to align formulas properly
pngmath_use_preview = True
# -----------------------------------------------------------------------------
# LaTeX output
# -----------------------------------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
_stdauthor = 'Written by the SciPy community'
latex_documents = [
('index', 'scipy-ref.tex', 'SciPy Reference Guide', _stdauthor, 'manual'),
# ('user/index', 'scipy-user.tex', 'SciPy User Guide',
# _stdauthor, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
latex_preamble = r'''
\usepackage{amsmath}
\DeclareUnicodeCharacter{00A0}{\nobreakspace}
% In the parameters section, place a newline after the Parameters
% header
\usepackage{expdlist}
\let\latexdescription=\description
\def\description{\latexdescription{}{} \breaklabel}
% Make Examples/etc section headers smaller and more compact
\makeatletter
\titleformat{\paragraph}{\normalsize\py@HeaderFamily}%
{\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor}
\titlespacing*{\paragraph}{0pt}{1ex}{0pt}
\makeatother
% Fix footer/header
\renewcommand{\chaptermark}[1]{\markboth{\MakeUppercase{\thechapter.\ #1}}{}}
\renewcommand{\sectionmark}[1]{\markright{\MakeUppercase{\thesection.\ #1}}}
'''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
latex_use_modindex = False
# -----------------------------------------------------------------------------
# Intersphinx configuration
# -----------------------------------------------------------------------------
intersphinx_mapping = {
'http://docs.python.org/dev': None,
'http://docs.scipy.org/doc/numpy': None,
}
# -----------------------------------------------------------------------------
# Numpy extensions
# -----------------------------------------------------------------------------
# If we want to do a phantom import from an XML file for all autodocs
phantom_import_file = 'dump.xml'
# Edit links
#numpydoc_edit_link = '`Edit </pydocweb/doc/%(full_name)s/>`__'
# -----------------------------------------------------------------------------
# Autosummary
# -----------------------------------------------------------------------------
if sphinx.__version__ >= "0.7":
import glob
autosummary_generate = glob.glob("*.rst")
# -----------------------------------------------------------------------------
# Coverage checker
# -----------------------------------------------------------------------------
coverage_ignore_modules = r"""
""".split()
coverage_ignore_functions = r"""
test($|_) (some|all)true bitwise_not cumproduct pkgload
generic\.
""".split()
coverage_ignore_classes = r"""
""".split()
coverage_c_path = []
coverage_c_regexes = {}
coverage_ignore_c_items = {}
#------------------------------------------------------------------------------
# Plot
#------------------------------------------------------------------------------
plot_pre_code = """
import numpy as np
import scipy as sp
np.random.seed(123)
"""
plot_include_source = True
plot_formats = [('png', 100), 'pdf']
import math
phi = (math.sqrt(5) + 1)/2
import matplotlib
matplotlib.rcParams.update({
'font.size': 8,
'axes.titlesize': 8,
'axes.labelsize': 8,
'xtick.labelsize': 8,
'ytick.labelsize': 8,
'legend.fontsize': 8,
'figure.figsize': (3*phi, 3),
'figure.subplot.bottom': 0.2,
'figure.subplot.left': 0.2,
'figure.subplot.right': 0.9,
'figure.subplot.top': 0.85,
'figure.subplot.wspace': 0.4,
'text.usetex': False,
})


@ -0,0 +1,582 @@
==================================
Constants (:mod:`scipy.constants`)
==================================
.. module:: scipy.constants
Physical and mathematical constants and units.
Mathematical constants
======================
============ =================================================================
``pi`` Pi
``golden`` Golden ratio
============ =================================================================
Physical constants
==================
============= =================================================================
``c`` speed of light in vacuum
``mu_0`` the magnetic constant :math:`\mu_0`
``epsilon_0`` the electric constant (vacuum permittivity), :math:`\epsilon_0`
``h`` the Planck constant :math:`h`
``hbar`` :math:`\hbar = h/(2\pi)`
``G`` Newtonian constant of gravitation
``g`` standard acceleration of gravity
``e`` elementary charge
``R`` molar gas constant
``alpha`` fine-structure constant
``N_A`` Avogadro constant
``k`` Boltzmann constant
``sigma`` Stefan-Boltzmann constant :math:`\sigma`
``Wien`` Wien displacement law constant
``Rydberg`` Rydberg constant
``m_e`` electron mass
``m_p`` proton mass
``m_n`` neutron mass
============= =================================================================
Constants database
==================
In addition to the above variables containing physical constants,
:mod:`scipy.constants` also contains a database of additional physical
constants.
.. autosummary::
:toctree: generated/
value
unit
precision
find
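For illustration, a minimal lookup sketch using the accessors above (a sketch assuming a standard SciPy installation; the exact digits come from the bundled CODATA release and differ between versions):
>>> from scipy import constants
>>> constants.unit('electron mass')
'kg'
>>> constants.value('electron mass') < 1e-30    # roughly 9.109e-31 kg
True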
.. data:: physical_constants
Dictionary of physical constants, of the format
``physical_constants[name] = (value, unit, uncertainty)``.
Available constants:
====================================================================== ====
``alpha particle mass``
``alpha particle mass energy equivalent``
``alpha particle mass energy equivalent in MeV``
``alpha particle mass in u``
``alpha particle molar mass``
``alpha particle-electron mass ratio``
``alpha particle-proton mass ratio``
``Angstrom star``
``atomic mass constant``
``atomic mass constant energy equivalent``
``atomic mass constant energy equivalent in MeV``
``atomic mass unit-electron volt relationship``
``atomic mass unit-hartree relationship``
``atomic mass unit-hertz relationship``
``atomic mass unit-inverse meter relationship``
``atomic mass unit-joule relationship``
``atomic mass unit-kelvin relationship``
``atomic mass unit-kilogram relationship``
``atomic unit of 1st hyperpolarizablity``
``atomic unit of 2nd hyperpolarizablity``
``atomic unit of action``
``atomic unit of charge``
``atomic unit of charge density``
``atomic unit of current``
``atomic unit of electric dipole moment``
``atomic unit of electric field``
``atomic unit of electric field gradient``
``atomic unit of electric polarizablity``
``atomic unit of electric potential``
``atomic unit of electric quadrupole moment``
``atomic unit of energy``
``atomic unit of force``
``atomic unit of length``
``atomic unit of magnetic dipole moment``
``atomic unit of magnetic flux density``
``atomic unit of magnetizability``
``atomic unit of mass``
``atomic unit of momentum``
``atomic unit of permittivity``
``atomic unit of time``
``atomic unit of velocity``
``Avogadro constant``
``Bohr magneton``
``Bohr magneton in eV/T``
``Bohr magneton in Hz/T``
``Bohr magneton in inverse meters per tesla``
``Bohr magneton in K/T``
``Bohr radius``
``Boltzmann constant``
``Boltzmann constant in eV/K``
``Boltzmann constant in Hz/K``
``Boltzmann constant in inverse meters per kelvin``
``characteristic impedance of vacuum``
``classical electron radius``
``Compton wavelength``
``Compton wavelength over 2 pi``
``conductance quantum``
``conventional value of Josephson constant``
``conventional value of von Klitzing constant``
``Cu x unit``
``deuteron magnetic moment``
``deuteron magnetic moment to Bohr magneton ratio``
``deuteron magnetic moment to nuclear magneton ratio``
``deuteron mass``
``deuteron mass energy equivalent``
``deuteron mass energy equivalent in MeV``
``deuteron mass in u``
``deuteron molar mass``
``deuteron rms charge radius``
``deuteron-electron magnetic moment ratio``
``deuteron-electron mass ratio``
``deuteron-neutron magnetic moment ratio``
``deuteron-proton magnetic moment ratio``
``deuteron-proton mass ratio``
``electric constant``
``electron charge to mass quotient``
``electron g factor``
``electron gyromagnetic ratio``
``electron gyromagnetic ratio over 2 pi``
``electron magnetic moment``
``electron magnetic moment anomaly``
``electron magnetic moment to Bohr magneton ratio``
``electron magnetic moment to nuclear magneton ratio``
``electron mass``
``electron mass energy equivalent``
``electron mass energy equivalent in MeV``
``electron mass in u``
``electron molar mass``
``electron to alpha particle mass ratio``
``electron to shielded helion magnetic moment ratio``
``electron to shielded proton magnetic moment ratio``
``electron volt``
``electron volt-atomic mass unit relationship``
``electron volt-hartree relationship``
``electron volt-hertz relationship``
``electron volt-inverse meter relationship``
``electron volt-joule relationship``
``electron volt-kelvin relationship``
``electron volt-kilogram relationship``
``electron-deuteron magnetic moment ratio``
``electron-deuteron mass ratio``
``electron-muon magnetic moment ratio``
``electron-muon mass ratio``
``electron-neutron magnetic moment ratio``
``electron-neutron mass ratio``
``electron-proton magnetic moment ratio``
``electron-proton mass ratio``
``electron-tau mass ratio``
``elementary charge``
``elementary charge over h``
``Faraday constant``
``Faraday constant for conventional electric current``
``Fermi coupling constant``
``fine-structure constant``
``first radiation constant``
``first radiation constant for spectral radiance``
``Hartree energy``
``Hartree energy in eV``
``hartree-atomic mass unit relationship``
``hartree-electron volt relationship``
``hartree-hertz relationship``
``hartree-inverse meter relationship``
``hartree-joule relationship``
``hartree-kelvin relationship``
``hartree-kilogram relationship``
``helion mass``
``helion mass energy equivalent``
``helion mass energy equivalent in MeV``
``helion mass in u``
``helion molar mass``
``helion-electron mass ratio``
``helion-proton mass ratio``
``hertz-atomic mass unit relationship``
``hertz-electron volt relationship``
``hertz-hartree relationship``
``hertz-inverse meter relationship``
``hertz-joule relationship``
``hertz-kelvin relationship``
``hertz-kilogram relationship``
``inverse fine-structure constant``
``inverse meter-atomic mass unit relationship``
``inverse meter-electron volt relationship``
``inverse meter-hartree relationship``
``inverse meter-hertz relationship``
``inverse meter-joule relationship``
``inverse meter-kelvin relationship``
``inverse meter-kilogram relationship``
``inverse of conductance quantum``
``Josephson constant``
``joule-atomic mass unit relationship``
``joule-electron volt relationship``
``joule-hartree relationship``
``joule-hertz relationship``
``joule-inverse meter relationship``
``joule-kelvin relationship``
``joule-kilogram relationship``
``kelvin-atomic mass unit relationship``
``kelvin-electron volt relationship``
``kelvin-hartree relationship``
``kelvin-hertz relationship``
``kelvin-inverse meter relationship``
``kelvin-joule relationship``
``kelvin-kilogram relationship``
``kilogram-atomic mass unit relationship``
``kilogram-electron volt relationship``
``kilogram-hartree relationship``
``kilogram-hertz relationship``
``kilogram-inverse meter relationship``
``kilogram-joule relationship``
``kilogram-kelvin relationship``
``lattice parameter of silicon``
``Loschmidt constant (273.15 K, 101.325 kPa)``
``magnetic constant``
``magnetic flux quantum``
``Mo x unit``
``molar gas constant``
``molar mass constant``
``molar mass of carbon-12``
``molar Planck constant``
``molar Planck constant times c``
``molar volume of ideal gas (273.15 K, 100 kPa)``
``molar volume of ideal gas (273.15 K, 101.325 kPa)``
``molar volume of silicon``
``muon Compton wavelength``
``muon Compton wavelength over 2 pi``
``muon g factor``
``muon magnetic moment``
``muon magnetic moment anomaly``
``muon magnetic moment to Bohr magneton ratio``
``muon magnetic moment to nuclear magneton ratio``
``muon mass``
``muon mass energy equivalent``
``muon mass energy equivalent in MeV``
``muon mass in u``
``muon molar mass``
``muon-electron mass ratio``
``muon-neutron mass ratio``
``muon-proton magnetic moment ratio``
``muon-proton mass ratio``
``muon-tau mass ratio``
``natural unit of action``
``natural unit of action in eV s``
``natural unit of energy``
``natural unit of energy in MeV``
``natural unit of length``
``natural unit of mass``
``natural unit of momentum``
``natural unit of momentum in MeV/c``
``natural unit of time``
``natural unit of velocity``
``neutron Compton wavelength``
``neutron Compton wavelength over 2 pi``
``neutron g factor``
``neutron gyromagnetic ratio``
``neutron gyromagnetic ratio over 2 pi``
``neutron magnetic moment``
``neutron magnetic moment to Bohr magneton ratio``
``neutron magnetic moment to nuclear magneton ratio``
``neutron mass``
``neutron mass energy equivalent``
``neutron mass energy equivalent in MeV``
``neutron mass in u``
``neutron molar mass``
``neutron to shielded proton magnetic moment ratio``
``neutron-electron magnetic moment ratio``
``neutron-electron mass ratio``
``neutron-muon mass ratio``
``neutron-proton magnetic moment ratio``
``neutron-proton mass ratio``
``neutron-tau mass ratio``
``Newtonian constant of gravitation``
``Newtonian constant of gravitation over h-bar c``
``nuclear magneton``
``nuclear magneton in eV/T``
``nuclear magneton in inverse meters per tesla``
``nuclear magneton in K/T``
``nuclear magneton in MHz/T``
``Planck constant``
``Planck constant in eV s``
``Planck constant over 2 pi``
``Planck constant over 2 pi in eV s``
``Planck constant over 2 pi times c in MeV fm``
``Planck length``
``Planck mass``
``Planck temperature``
``Planck time``
``proton charge to mass quotient``
``proton Compton wavelength``
``proton Compton wavelength over 2 pi``
``proton g factor``
``proton gyromagnetic ratio``
``proton gyromagnetic ratio over 2 pi``
``proton magnetic moment``
``proton magnetic moment to Bohr magneton ratio``
``proton magnetic moment to nuclear magneton ratio``
``proton magnetic shielding correction``
``proton mass``
``proton mass energy equivalent``
``proton mass energy equivalent in MeV``
``proton mass in u``
``proton molar mass``
``proton rms charge radius``
``proton-electron mass ratio``
``proton-muon mass ratio``
``proton-neutron magnetic moment ratio``
``proton-neutron mass ratio``
``proton-tau mass ratio``
``quantum of circulation``
``quantum of circulation times 2``
``Rydberg constant``
``Rydberg constant times c in Hz``
``Rydberg constant times hc in eV``
``Rydberg constant times hc in J``
``Sackur-Tetrode constant (1 K, 100 kPa)``
``Sackur-Tetrode constant (1 K, 101.325 kPa)``
``second radiation constant``
``shielded helion gyromagnetic ratio``
``shielded helion gyromagnetic ratio over 2 pi``
``shielded helion magnetic moment``
``shielded helion magnetic moment to Bohr magneton ratio``
``shielded helion magnetic moment to nuclear magneton ratio``
``shielded helion to proton magnetic moment ratio``
``shielded helion to shielded proton magnetic moment ratio``
``shielded proton gyromagnetic ratio``
``shielded proton gyromagnetic ratio over 2 pi``
``shielded proton magnetic moment``
``shielded proton magnetic moment to Bohr magneton ratio``
``shielded proton magnetic moment to nuclear magneton ratio``
``speed of light in vacuum``
``standard acceleration of gravity``
``standard atmosphere``
``Stefan-Boltzmann constant``
``tau Compton wavelength``
``tau Compton wavelength over 2 pi``
``tau mass``
``tau mass energy equivalent``
``tau mass energy equivalent in MeV``
``tau mass in u``
``tau molar mass``
``tau-electron mass ratio``
``tau-muon mass ratio``
``tau-neutron mass ratio``
``tau-proton mass ratio``
``Thomson cross section``
``unified atomic mass unit``
``von Klitzing constant``
``weak mixing angle``
``Wien displacement law constant``
``{220} lattice spacing of silicon``
====================================================================== ====
Unit prefixes
=============
SI
--
============ =================================================================
``yotta`` :math:`10^{24}`
``zetta`` :math:`10^{21}`
``exa`` :math:`10^{18}`
``peta`` :math:`10^{15}`
``tera`` :math:`10^{12}`
``giga`` :math:`10^{9}`
``mega`` :math:`10^{6}`
``kilo`` :math:`10^{3}`
``hecto`` :math:`10^{2}`
``deka`` :math:`10^{1}`
``deci`` :math:`10^{-1}`
``centi`` :math:`10^{-2}`
``milli`` :math:`10^{-3}`
``micro`` :math:`10^{-6}`
``nano`` :math:`10^{-9}`
``pico`` :math:`10^{-12}`
``femto`` :math:`10^{-15}`
``atto`` :math:`10^{-18}`
``zepto`` :math:`10^{-21}`
============ =================================================================
Binary
------
============ =================================================================
``kibi`` :math:`2^{10}`
``mebi`` :math:`2^{20}`
``gibi`` :math:`2^{30}`
``tebi`` :math:`2^{40}`
``pebi`` :math:`2^{50}`
``exbi`` :math:`2^{60}`
``zebi`` :math:`2^{70}`
``yobi`` :math:`2^{80}`
============ =================================================================
Units
=====
Weight
------
================= ============================================================
``gram`` :math:`10^{-3}` kg
``metric_ton`` :math:`10^{3}` kg
``grain`` one grain in kg
``lb`` one pound (avoirdupois) in kg
``oz`` one ounce in kg
``stone`` one stone in kg
``long_ton`` one long ton in kg
``short_ton`` one short ton in kg
``troy_ounce`` one Troy ounce in kg
``troy_pound`` one Troy pound in kg
``carat`` one carat in kg
``m_u`` atomic mass constant (in kg)
================= ============================================================
Angle
-----
================= ============================================================
``degree`` degree in radians
``arcmin`` arc minute in radians
``arcsec`` arc second in radians
================= ============================================================
Time
----
================= ============================================================
``minute`` one minute in seconds
``hour`` one hour in seconds
``day`` one day in seconds
``week`` one week in seconds
``year`` one year (365 days) in seconds
``Julian_year`` one Julian year (365.25 days) in seconds
================= ============================================================
Length
------
================= ============================================================
``inch`` one inch in meters
``foot`` one foot in meters
``yard`` one yard in meters
``mile`` one mile in meters
``mil`` one mil in meters
``pt`` one point in meters
``survey_foot`` one survey foot in meters
``survey_mile`` one survey mile in meters
``nautical_mile`` one nautical mile in meters
``fermi`` one Fermi in meters
``angstrom`` one Ångström in meters
``micron`` one micron in meters
``au`` one astronomical unit in meters
``light_year`` one light year in meters
``parsec`` one parsec in meters
================= ============================================================
Pressure
--------
================= ============================================================
``atm`` standard atmosphere in pascals
``bar`` one bar in pascals
``torr`` one torr (mmHg) in pascals
``psi`` one psi in pascals
================= ============================================================
Area
----
================= ============================================================
``hectare`` one hectare in square meters
``acre`` one acre in square meters
================= ============================================================
Volume
------
=================== ========================================================
``liter`` one liter in cubic meters
``gallon`` one gallon (US) in cubic meters
``gallon_imp`` one gallon (UK) in cubic meters
``fluid_ounce`` one fluid ounce (US) in cubic meters
``fluid_ounce_imp`` one fluid ounce (UK) in cubic meters
``bbl`` one barrel in cubic meters
=================== ========================================================
Speed
-----
================= ==========================================================
``kmh`` kilometers per hour in meters per second
``mph`` miles per hour in meters per second
``mach`` one Mach (approx., at 15 °C, 1 atm) in meters per second
``knot`` one knot in meters per second
================= ==========================================================
Temperature
-----------
===================== =======================================================
``zero_Celsius`` zero of Celsius scale in Kelvin
``degree_Fahrenheit`` one Fahrenheit (only differences) in Kelvins
===================== =======================================================
.. autosummary::
:toctree: generated/
C2K
K2C
F2C
C2F
F2K
K2F
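A minimal sketch of the converters (assuming ``scipy.constants`` is importable; the Celsius conversions are offsets by ``zero_Celsius`` = 273.15 K):
>>> from scipy.constants import C2K, F2C
>>> abs(C2K(25.0) - 298.15) < 1e-12    # 25 degrees Celsius in kelvins
True
>>> abs(F2C(212.0) - 100.0) < 1e-12    # boiling point of water in Celsius
True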
Energy
------
==================== =======================================================
``eV`` one electron volt in Joules
``calorie`` one calorie (thermochemical) in Joules
``calorie_IT`` one calorie (International Steam Table calorie, 1956) in Joules
``erg`` one erg in Joules
``Btu`` one British thermal unit (International Steam Table) in Joules
``Btu_th`` one British thermal unit (thermochemical) in Joules
``ton_TNT`` one ton of TNT in Joules
==================== =======================================================
Power
-----
==================== =======================================================
``hp`` one horsepower in watts
==================== =======================================================
Force
-----
==================== =======================================================
``dyn`` one dyne in newtons
``lbf`` one pound force in newtons
``kgf`` one kilogram force in newtons
==================== =======================================================
Optics
------
.. autosummary::
:toctree: generated/
lambda2nu
nu2lambda
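These two helpers are mutual inverses, with nu = c / lambda; a short sketch (assuming ``scipy.constants`` is importable):
>>> from scipy.constants import lambda2nu, nu2lambda, c
>>> float(lambda2nu(1.0)) == c     # a 1 m wavelength has frequency c Hz
True
>>> float(nu2lambda(lambda2nu(0.5)))
0.5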


@ -0,0 +1,77 @@
Fourier transforms (:mod:`scipy.fftpack`)
=========================================
.. module:: scipy.fftpack
Fast Fourier transforms
-----------------------
.. autosummary::
:toctree: generated/
fft
ifft
fftn
ifftn
fft2
ifft2
rfft
irfft
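As a quick sketch of the round-trip property (``ifft`` undoes ``fft`` up to floating-point error; assumes NumPy and SciPy are installed):
>>> import numpy as np
>>> from scipy.fftpack import fft, ifft
>>> x = np.array([1.0, 2.0, 1.0, -1.0])
>>> np.allclose(ifft(fft(x)), x)    # the inverse transform recovers x
True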
Differential and pseudo-differential operators
----------------------------------------------
.. autosummary::
:toctree: generated/
diff
tilbert
itilbert
hilbert
ihilbert
cs_diff
sc_diff
ss_diff
cc_diff
shift
Helper functions
----------------
.. autosummary::
:toctree: generated/
fftshift
ifftshift
fftfreq
rfftfreq
Convolutions (:mod:`scipy.fftpack.convolve`)
--------------------------------------------
.. module:: scipy.fftpack.convolve
.. autosummary::
:toctree: generated/
convolve
convolve_z
init_convolution_kernel
destroy_convolve_cache
Other (:mod:`scipy.fftpack._fftpack`)
-------------------------------------
.. module:: scipy.fftpack._fftpack
.. autosummary::
:toctree: generated/
drfft
zfft
zrfft
zfftnd
destroy_drfft_cache
destroy_zfft_cache
destroy_zfftnd_cache


@ -0,0 +1,44 @@
SciPy
=====
:Release: |version|
:Date: |today|
SciPy (pronounced "Sigh Pie") is open-source software for mathematics,
science, and engineering.
.. toctree::
:maxdepth: 2
tutorial/index
.. toctree::
:maxdepth: 1
release
Reference
---------
.. toctree::
:maxdepth: 1
cluster
constants
fftpack
integrate
interpolate
io
linalg
maxentropy
misc
ndimage
odr
optimize
signal
sparse
sparse.linalg
spatial
special
stats
weave


@ -0,0 +1,44 @@
=============================================
Integration and ODEs (:mod:`scipy.integrate`)
=============================================
.. module:: scipy.integrate
Integrating functions, given function object
============================================
.. autosummary::
:toctree: generated/
quad
dblquad
tplquad
fixed_quad
quadrature
romberg
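A minimal usage sketch for :func:`quad` (the integrand here is an arbitrary illustration; ``quad`` returns the estimate and an absolute-error bound):
>>> from scipy.integrate import quad
>>> result, abserr = quad(lambda x: x**2, 0.0, 1.0)  # integrate x^2 over [0, 1]
>>> abs(result - 1.0/3.0) < 1e-10                    # analytic answer is 1/3
True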
Integrating functions, given fixed samples
==========================================
.. autosummary::
:toctree: generated/
trapz
cumtrapz
simps
romb
.. seealso::
:mod:`scipy.special` for orthogonal polynomials, which provide Gaussian
quadrature roots and weights for other weighting factors and regions.
Integrators of ODE systems
==========================
.. autosummary::
:toctree: generated/
odeint
ode
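For illustration, a sketch of solving the decay equation dy/dt = -y with :func:`odeint` (the right-hand side takes ``(y, t)``; the exact solution is ``exp(-t)``):
>>> import numpy as np
>>> from scipy.integrate import odeint
>>> def decay(y, t):            # dy/dt = -y
...     return -y
>>> t = np.linspace(0.0, 1.0, 5)
>>> y = odeint(decay, 1.0, t)   # initial condition y(0) = 1
>>> np.allclose(y[:, 0], np.exp(-t), atol=1e-6)
True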


@ -0,0 +1,100 @@
========================================
Interpolation (:mod:`scipy.interpolate`)
========================================
.. module:: scipy.interpolate
Univariate interpolation
========================
.. autosummary::
:toctree: generated/
interp1d
BarycentricInterpolator
KroghInterpolator
PiecewisePolynomial
barycentric_interpolate
krogh_interpolate
piecewise_polynomial_interpolate
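A minimal :class:`interp1d` sketch (the default interpolant is piecewise linear, so the value midway between two samples is their average):
>>> import numpy as np
>>> from scipy.interpolate import interp1d
>>> x = np.arange(11.0)           # samples at 0, 1, ..., 10
>>> f = interp1d(x, np.sin(x))    # linear interpolation by default
>>> abs(f(2.5) - 0.5*(np.sin(2.0) + np.sin(3.0))) < 1e-12
True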
Multivariate interpolation
==========================
.. autosummary::
:toctree: generated/
interp2d
Rbf
1-D Splines
===========
.. autosummary::
:toctree: generated/
UnivariateSpline
InterpolatedUnivariateSpline
LSQUnivariateSpline
The above univariate spline classes have the following methods:
.. autosummary::
:toctree: generated/
UnivariateSpline.__call__
UnivariateSpline.derivatives
UnivariateSpline.integral
UnivariateSpline.roots
UnivariateSpline.get_coeffs
UnivariateSpline.get_knots
UnivariateSpline.get_residual
UnivariateSpline.set_smoothing_factor
Low-level interface to FITPACK functions:
.. autosummary::
:toctree: generated/
splrep
splprep
splev
splint
sproot
spalde
bisplrep
bisplev
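A short sketch of the FITPACK interface (with no weights, ``splrep`` fits an interpolating cubic spline by default; ``splev`` evaluates the returned knot/coefficient tuple):
>>> import numpy as np
>>> from scipy.interpolate import splrep, splev
>>> x = np.linspace(0.0, 2.0*np.pi, 50)
>>> tck = splrep(x, np.sin(x))              # knots, coefficients, degree
>>> abs(splev(np.pi/2, tck) - 1.0) < 1e-4   # sin peaks at pi/2
True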
2-D Splines
===========
.. seealso:: scipy.ndimage.map_coordinates
.. autosummary::
:toctree: generated/
BivariateSpline
SmoothBivariateSpline
LSQBivariateSpline
Low-level interface to FITPACK functions:
.. autosummary::
:toctree: generated/
bisplrep
bisplev
Additional tools
================
.. autosummary::
:toctree: generated/
lagrange
approximate_taylor_polynomial


@ -0,0 +1,67 @@
==================================
Input and output (:mod:`scipy.io`)
==================================
.. seealso:: :ref:`numpy-reference.routines.io` (in Numpy)
.. module:: scipy.io
MATLAB® files
=============
.. autosummary::
:toctree: generated/
loadmat
savemat
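A round-trip sketch (the file name ``demo.mat`` is arbitrary; ``loadmat`` returns a dict that also carries header entries):
>>> import numpy as np
>>> from scipy.io import savemat, loadmat
>>> savemat('demo.mat', {'a': np.eye(3)})   # write a MATLAB-format file
>>> loadmat('demo.mat')['a'].shape
(3, 3)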
Matrix Market files
===================
.. autosummary::
:toctree: generated/
mminfo
mmread
mmwrite
Other
=====
.. autosummary::
:toctree: generated/
save_as_module
npfile
Wav sound files (:mod:`scipy.io.wavfile`)
=========================================
.. module:: scipy.io.wavfile
.. autosummary::
:toctree: generated/
read
write
Arff files (:mod:`scipy.io.arff`)
=================================
.. automodule:: scipy.io.arff
.. autosummary::
:toctree: generated/
loadarff
Netcdf (:mod:`scipy.io.netcdf`)
===============================
.. module:: scipy.io.netcdf
.. autosummary::
:toctree: generated/
netcdf_file
netcdf_variable


@ -0,0 +1,95 @@
====================================
Linear algebra (:mod:`scipy.linalg`)
====================================
.. module:: scipy.linalg
Basics
======
.. autosummary::
:toctree: generated/
inv
solve
solve_banded
solveh_banded
det
norm
lstsq
pinv
pinv2
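For illustration, a small linear solve (a sketch only; calling ``solve`` is preferable to forming ``inv(A)`` explicitly):
>>> import numpy as np
>>> from scipy.linalg import solve
>>> A = np.array([[3.0, 1.0], [1.0, 2.0]])
>>> b = np.array([9.0, 8.0])
>>> x = solve(A, b)               # solve A x = b
>>> np.allclose(np.dot(A, x), b)
True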
Eigenvalue Problem
==================
.. autosummary::
:toctree: generated/
eig
eigvals
eigh
eigvalsh
eig_banded
eigvals_banded
Decompositions
==============
.. autosummary::
:toctree: generated/
lu
lu_factor
lu_solve
svd
svdvals
diagsvd
orth
cholesky
cholesky_banded
cho_factor
cho_solve
cho_solve_banded
qr
schur
rsf2csf
hessenberg
Matrix Functions
================
.. autosummary::
:toctree: generated/
expm
expm2
expm3
logm
cosm
sinm
tanm
coshm
sinhm
tanhm
signm
sqrtm
funm
Special Matrices
================
.. autosummary::
:toctree: generated/
block_diag
circulant
companion
hadamard
hankel
kron
leslie
toeplitz
tri
tril
triu


@ -0,0 +1,10 @@
==========================================
Miscellaneous routines (:mod:`scipy.misc`)
==========================================
.. warning::
This documentation is work-in-progress and unorganized.
.. automodule:: scipy.misc
:members:


@ -0,0 +1,122 @@
=========================================================
Multi-dimensional image processing (:mod:`scipy.ndimage`)
=========================================================
.. module:: scipy.ndimage
Functions for multi-dimensional image processing.
Filters :mod:`scipy.ndimage.filters`
====================================
.. module:: scipy.ndimage.filters
.. autosummary::
:toctree: generated/
convolve
convolve1d
correlate
correlate1d
gaussian_filter
gaussian_filter1d
gaussian_gradient_magnitude
gaussian_laplace
generic_filter
generic_filter1d
generic_gradient_magnitude
generic_laplace
laplace
maximum_filter
maximum_filter1d
median_filter
minimum_filter
minimum_filter1d
percentile_filter
prewitt
rank_filter
sobel
uniform_filter
uniform_filter1d
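A minimal filtering sketch (smoothing a unit impulse spreads it over the neighbourhood):
>>> import numpy as np
>>> from scipy.ndimage.filters import gaussian_filter
>>> a = np.zeros((5, 5))
>>> a[2, 2] = 1.0
>>> blurred = gaussian_filter(a, sigma=1.0)
>>> blurred[2, 2] < 1.0           # the peak has been spread out
True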
Fourier filters :mod:`scipy.ndimage.fourier`
============================================
.. module:: scipy.ndimage.fourier
.. autosummary::
:toctree: generated/
fourier_ellipsoid
fourier_gaussian
fourier_shift
fourier_uniform
Interpolation :mod:`scipy.ndimage.interpolation`
================================================
.. module:: scipy.ndimage.interpolation
.. autosummary::
:toctree: generated/
affine_transform
geometric_transform
map_coordinates
rotate
shift
spline_filter
spline_filter1d
zoom
Measurements :mod:`scipy.ndimage.measurements`
==============================================
.. module:: scipy.ndimage.measurements
.. autosummary::
:toctree: generated/
center_of_mass
extrema
find_objects
histogram
label
maximum
maximum_position
mean
minimum
minimum_position
standard_deviation
sum
variance
watershed_ift
Morphology :mod:`scipy.ndimage.morphology`
==========================================
.. module:: scipy.ndimage.morphology
.. autosummary::
:toctree: generated/
binary_closing
binary_dilation
binary_erosion
binary_fill_holes
binary_hit_or_miss
binary_opening
binary_propagation
black_tophat
distance_transform_bf
distance_transform_cdt
distance_transform_edt
generate_binary_structure
grey_closing
grey_dilation
grey_erosion
grey_opening
iterate_structure
morphological_gradient
morphological_laplace
white_tophat


@ -0,0 +1,33 @@
=================================================
Orthogonal distance regression (:mod:`scipy.odr`)
=================================================
.. automodule:: scipy.odr
.. autoclass:: Data
.. automethod:: set_meta
.. autoclass:: Model
.. automethod:: set_meta
.. autoclass:: ODR
.. automethod:: restart
.. automethod:: run
.. automethod:: set_iprint
.. automethod:: set_job
.. autoclass:: Output
.. automethod:: pprint
.. autoexception:: odr_error
.. autoexception:: odr_stop
.. autofunction:: odr


@ -0,0 +1,111 @@
=====================================================
Optimization and root finding (:mod:`scipy.optimize`)
=====================================================
.. module:: scipy.optimize
Optimization
============
General-purpose
---------------
.. autosummary::
:toctree: generated/
fmin
fmin_powell
fmin_cg
fmin_bfgs
fmin_ncg
leastsq
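A quick :func:`fmin` sketch (Nelder-Mead on a smooth quadratic; ``disp=0`` silences the convergence message):
>>> from scipy.optimize import fmin
>>> f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2
>>> xopt = fmin(f, [0.0, 0.0], disp=0)
>>> [round(v, 3) for v in xopt]   # the minimum is at (1, -2)
[1.0, -2.0]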
Constrained (multivariate)
--------------------------
.. autosummary::
:toctree: generated/
fmin_l_bfgs_b
fmin_tnc
fmin_cobyla
fmin_slsqp
nnls
Global
------
.. autosummary::
:toctree: generated/
anneal
brute
Scalar function minimizers
--------------------------
.. autosummary::
:toctree: generated/
fminbound
golden
bracket
brent
Fitting
=======
.. autosummary::
:toctree: generated/
curve_fit
Root finding
============
.. autosummary::
:toctree: generated/
fsolve
Scalar function solvers
-----------------------
.. autosummary::
:toctree: generated/
brentq
brenth
ridder
bisect
newton
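For example, bracketing the first positive zero of the cosine with :func:`brentq` (the function must change sign over the given interval):
>>> import math
>>> from scipy.optimize import brentq
>>> root = brentq(math.cos, 0.0, 3.0)   # cos changes sign on [0, 3]
>>> abs(root - math.pi/2) < 1e-9
True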
Fixed point finding:
.. autosummary::
:toctree: generated/
fixed_point
General-purpose nonlinear (multidimensional)
--------------------------------------------
.. autosummary::
:toctree: generated/
broyden1
broyden2
broyden3
broyden_generalized
anderson
anderson2
Utility Functions
=================
.. autosummary::
:toctree: generated/
line_search
check_grad


@ -0,0 +1,5 @@
*************
Release Notes
*************
.. include:: ../release/0.8.0-notes.rst


@ -0,0 +1,168 @@
=======================================
Signal processing (:mod:`scipy.signal`)
=======================================
.. module:: scipy.signal
Convolution
===========
.. autosummary::
:toctree: generated/
convolve
correlate
fftconvolve
convolve2d
correlate2d
sepfir2d
B-splines
=========
.. autosummary::
:toctree: generated/
bspline
gauss_spline
cspline1d
qspline1d
cspline2d
qspline2d
spline_filter
Filtering
=========
.. autosummary::
:toctree: generated/
order_filter
medfilt
medfilt2d
wiener
symiirorder1
symiirorder2
lfilter
lfiltic
deconvolve
hilbert
get_window
decimate
detrend
resample
Filter design
=============
.. autosummary::
:toctree: generated/
bilinear
firwin
freqs
freqz
iirdesign
iirfilter
kaiserord
remez
unique_roots
residue
residuez
invres
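A design-and-apply sketch combining :func:`firwin` and :func:`lfilter` (the cutoff is a normalized frequency, with 1.0 at the Nyquist rate):
>>> import numpy as np
>>> from scipy.signal import firwin, lfilter
>>> taps = firwin(31, 0.2)        # 31-tap low-pass FIR filter
>>> x = np.random.randn(200)      # noisy test signal
>>> y = lfilter(taps, 1.0, x)     # FIR filtering: denominator is 1
>>> y.shape
(200,)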
Matlab-style IIR filter design
==============================
.. autosummary::
:toctree: generated/
butter
buttord
cheby1
cheb1ord
cheby2
cheb2ord
ellip
ellipord
bessel
Linear Systems
==============
.. autosummary::
:toctree: generated/
lti
lsim
lsim2
impulse
impulse2
step
step2
LTI Representations
===================
.. autosummary::
:toctree: generated/
tf2zpk
zpk2tf
tf2ss
ss2tf
zpk2ss
ss2zpk
Waveforms
=========
.. autosummary::
:toctree: generated/
chirp
gausspulse
sawtooth
square
sweep_poly
Window functions
================
.. autosummary::
:toctree: generated/
get_window
barthann
bartlett
blackman
blackmanharris
bohman
boxcar
chebwin
flattop
gaussian
general_gaussian
hamming
hann
kaiser
nuttall
parzen
slepian
triang
Wavelets
========
.. autosummary::
:toctree: generated/
cascade
daub
morlet
qmf


@ -0,0 +1,10 @@
==================================================
Sparse linear algebra (:mod:`scipy.sparse.linalg`)
==================================================
.. warning::
This documentation is work-in-progress and unorganized.
.. automodule:: scipy.sparse.linalg
:members:


@ -0,0 +1,64 @@
=====================================
Sparse matrices (:mod:`scipy.sparse`)
=====================================
.. automodule:: scipy.sparse
Sparse matrix classes
=====================
.. autosummary::
:toctree: generated/
csc_matrix
csr_matrix
bsr_matrix
lil_matrix
dok_matrix
coo_matrix
dia_matrix
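A small construction sketch (a CSR matrix built from a dense array stores only the nonzeros):
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> A = csr_matrix(np.array([[0, 1], [2, 0]]))
>>> A.nnz                         # number of stored nonzero entries
2
>>> A.toarray()
array([[0, 1],
       [2, 0]])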
Functions
=========
Building sparse matrices:
.. autosummary::
:toctree: generated/
eye
identity
kron
kronsum
lil_eye
lil_diags
spdiags
tril
triu
bmat
hstack
vstack
Identifying sparse matrices:
.. autosummary::
:toctree: generated/
issparse
isspmatrix
isspmatrix_csc
isspmatrix_csr
isspmatrix_bsr
isspmatrix_lil
isspmatrix_dok
isspmatrix_coo
isspmatrix_dia
Exceptions
==========
.. autoexception:: SparseEfficiencyWarning
.. autoexception:: SparseWarning


@ -0,0 +1,6 @@
=====================================================
Distance computations (:mod:`scipy.spatial.distance`)
=====================================================
.. automodule:: scipy.spatial.distance
:members:


@ -0,0 +1,14 @@
=============================================================
Spatial algorithms and data structures (:mod:`scipy.spatial`)
=============================================================
.. warning::
This documentation is work-in-progress and unorganized.
.. toctree::
spatial.distance
.. automodule:: scipy.spatial
:members:


@ -0,0 +1,512 @@
========================================
Special functions (:mod:`scipy.special`)
========================================
.. module:: scipy.special
Nearly all of the functions below are universal functions and follow
broadcasting and automatic array-looping rules. Exceptions are noted.
Error handling
==============
Errors are handled by returning NaNs or other appropriate values.
Some of the special function routines will print an error message
when an error occurs. By default this printing is disabled; to enable
such messages use ``errprint(1)``, and to disable them again use
``errprint(0)``.
Example:
>>> print scipy.special.bdtr(-1,10,0.3)
>>> scipy.special.errprint(1)
>>> print scipy.special.bdtr(-1,10,0.3)
.. autosummary::
:toctree: generated/
errprint
errstate
Available functions
===================
Airy functions
--------------
.. autosummary::
:toctree: generated/
airy
airye
ai_zeros
bi_zeros
Elliptic Functions and Integrals
--------------------------------
.. autosummary::
:toctree: generated/
ellipj
ellipk
ellipkinc
ellipe
ellipeinc
Bessel Functions
----------------
.. autosummary::
:toctree: generated/
jn
jv
jve
yn
yv
yve
kn
kv
kve
iv
ive
hankel1
hankel1e
hankel2
hankel2e
The following is not a universal function:
.. autosummary::
:toctree: generated/
lmbda
Zeros of Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^
These are not universal functions:
.. autosummary::
:toctree: generated/
jnjnp_zeros
jnyn_zeros
jn_zeros
jnp_zeros
yn_zeros
ynp_zeros
y0_zeros
y1_zeros
y1p_zeros
Faster versions of common Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
j0
j1
y0
y1
i0
i0e
i1
i1e
k0
k0e
k1
k1e
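These agree with the general routines to floating-point accuracy, for example (a sketch):
>>> from scipy.special import j0, jv
>>> abs(j0(2.5) - jv(0, 2.5)) < 1e-12   # j0 is a fast special case of jv
True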
Integrals of Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
itj0y0
it2j0y0
iti0k0
it2i0k0
besselpoly
Derivatives of Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
jvp
yvp
kvp
ivp
h1vp
h2vp
Spherical Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^
These are not universal functions:
.. autosummary::
:toctree: generated/
sph_jn
sph_yn
sph_jnyn
sph_in
sph_kn
sph_inkn
Riccati-Bessel Functions
^^^^^^^^^^^^^^^^^^^^^^^^
These are not universal functions:
.. autosummary::
:toctree: generated/
riccati_jn
riccati_yn
Struve Functions
----------------
.. autosummary::
:toctree: generated/
struve
modstruve
itstruve0
it2struve0
itmodstruve0
Raw Statistical Functions
-------------------------
.. seealso:: :mod:`scipy.stats`: Friendly versions of these functions.
.. autosummary::
:toctree: generated/
bdtr
bdtrc
bdtri
btdtr
btdtri
fdtr
fdtrc
fdtri
gdtr
gdtrc
gdtria
gdtrib
gdtrix
nbdtr
nbdtrc
nbdtri
pdtr
pdtrc
pdtri
stdtr
stdtridf
stdtrit
chdtr
chdtrc
chdtri
ndtr
ndtri
smirnov
smirnovi
kolmogorov
kolmogi
tklmbda
Gamma and Related Functions
---------------------------
.. autosummary::
:toctree: generated/
gamma
gammaln
gammainc
gammaincinv
gammaincc
gammainccinv
beta
betaln
betainc
betaincinv
psi
rgamma
polygamma
multigammaln
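For example (``gamma(n)`` equals ``(n-1)!`` at positive integers, and ``gammaln`` returns its logarithm without intermediate overflow):
>>> import math
>>> from scipy.special import gamma, gammaln
>>> gamma(5.0)                          # 4! = 24
24.0
>>> abs(gammaln(5.0) - math.log(24.0)) < 1e-12
True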
Error Function and Fresnel Integrals
------------------------------------
.. autosummary::
:toctree: generated/
erf
erfc
erfinv
erfcinv
erf_zeros
fresnel
fresnel_zeros
modfresnelp
modfresnelm
These are not universal functions:
.. autosummary::
:toctree: generated/
fresnelc_zeros
fresnels_zeros
Legendre Functions
------------------
.. autosummary::
:toctree: generated/
lpmv
sph_harm
These are not universal functions:
.. autosummary::
:toctree: generated/
lpn
lqn
lpmn
lqmn
Orthogonal polynomials
----------------------
The following functions evaluate values of orthogonal polynomials:
.. autosummary::
:toctree: generated/
eval_legendre
eval_chebyt
eval_chebyu
eval_chebyc
eval_chebys
eval_jacobi
eval_laguerre
eval_genlaguerre
eval_hermite
eval_hermitenorm
eval_gegenbauer
eval_sh_legendre
eval_sh_chebyt
eval_sh_chebyu
eval_sh_jacobi
The functions below, in turn, return :ref:`orthopoly1d` objects, which
function similarly to :ref:`numpy.poly1d`. The :ref:`orthopoly1d`
class also has an attribute ``weights`` which returns the roots, weights,
and total weights for the appropriate form of Gaussian quadrature.
These are returned in an ``n x 3`` array with roots in the first column,
weights in the second column, and total weights in the final column.
.. autosummary::
:toctree: generated/
legendre
chebyt
chebyu
chebyc
chebys
jacobi
laguerre
genlaguerre
hermite
hermitenorm
gegenbauer
sh_legendre
sh_chebyt
sh_chebyu
sh_jacobi
.. warning::
Large-order polynomials obtained from these functions
are numerically unstable.
``orthopoly1d`` objects are converted to ``poly1d`` when doing
arithmetic. ``numpy.poly1d`` works in the power basis and cannot
represent high-order polynomials accurately, which can cause
significant inaccuracy.
Hypergeometric Functions
------------------------
.. autosummary::
:toctree: generated/
hyp2f1
hyp1f1
hyperu
hyp0f1
hyp2f0
hyp1f2
hyp3f0
Parabolic Cylinder Functions
----------------------------
.. autosummary::
:toctree: generated/
pbdv
pbvv
pbwa
These are not universal functions:
.. autosummary::
:toctree: generated/
pbdv_seq
pbvv_seq
pbdn_seq
Mathieu and Related Functions
-----------------------------
.. autosummary::
:toctree: generated/
mathieu_a
mathieu_b
These are not universal functions:
.. autosummary::
:toctree: generated/
mathieu_even_coef
mathieu_odd_coef
The following return both function and first derivative:
.. autosummary::
:toctree: generated/
mathieu_cem
mathieu_sem
mathieu_modcem1
mathieu_modcem2
mathieu_modsem1
mathieu_modsem2
Spheroidal Wave Functions
-------------------------
.. autosummary::
:toctree: generated/
pro_ang1
pro_rad1
pro_rad2
obl_ang1
obl_rad1
obl_rad2
pro_cv
obl_cv
pro_cv_seq
obl_cv_seq
The following functions require a pre-computed characteristic value:
.. autosummary::
:toctree: generated/
pro_ang1_cv
pro_rad1_cv
pro_rad2_cv
obl_ang1_cv
obl_rad1_cv
obl_rad2_cv
Kelvin Functions
----------------
.. autosummary::
:toctree: generated/
kelvin
kelvin_zeros
ber
bei
berp
beip
ker
kei
kerp
keip
These are not universal functions:
.. autosummary::
:toctree: generated/
ber_zeros
bei_zeros
berp_zeros
beip_zeros
ker_zeros
kei_zeros
kerp_zeros
keip_zeros
Other Special Functions
-----------------------
.. autosummary::
:toctree: generated/
expn
exp1
expi
wofz
dawsn
shichi
sici
spence
lambertw
zeta
zetac
Convenience Functions
---------------------
.. autosummary::
:toctree: generated/
cbrt
exp10
exp2
radian
cosdg
sindg
tandg
cotdg
log1p
expm1
cosm1
round


@ -0,0 +1,81 @@
.. module:: scipy.stats.mstats
===================================================================
Statistical functions for masked arrays (:mod:`scipy.stats.mstats`)
===================================================================
This module contains a large number of statistical functions that can
be used with masked arrays.
Most of these functions are similar to those in scipy.stats but might
have small differences in the API or in the algorithm used. Since this
is a relatively new package, some API changes are still possible.
.. autosummary::
:toctree: generated/
argstoarray
betai
chisquare
count_tied_groups
describe
f_oneway
f_value_wilks_lambda
find_repeats
friedmanchisquare
gmean
hmean
kendalltau
kendalltau_seasonal
kruskalwallis
ks_twosamp
kurtosis
kurtosistest
linregress
mannwhitneyu
mode
moment
mquantiles
msign
normaltest
obrientransform
pearsonr
plotting_positions
pointbiserialr
rankdata
samplestd
samplevar
scoreatpercentile
sem
signaltonoise
skew
skewtest
spearmanr
std
stderr
theilslopes
threshold
tmax
tmean
tmin
trim
trima
trimboth
trimmed_stde
trimr
trimtail
tsem
ttest_onesamp
ttest_ind
ttest_rel
tvar
var
variation
winsorize
z
zmap
zs


@ -0,0 +1,284 @@
.. module:: scipy.stats
==========================================
Statistical functions (:mod:`scipy.stats`)
==========================================
This module contains a large number of probability distributions as
well as a growing library of statistical functions.
Each included continuous distribution is an instance of the class rv_continuous:
.. autosummary::
:toctree: generated/
rv_continuous
rv_continuous.pdf
rv_continuous.cdf
rv_continuous.sf
rv_continuous.ppf
rv_continuous.isf
rv_continuous.stats
Each discrete distribution is an instance of the class rv_discrete:
.. autosummary::
:toctree: generated/
rv_discrete
rv_discrete.pmf
rv_discrete.cdf
rv_discrete.sf
rv_discrete.ppf
rv_discrete.isf
rv_discrete.stats
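A minimal sketch using the standard normal instance (the methods above exist on every distribution instance; ``float()`` guards against versions that return 0-d arrays):
>>> from scipy.stats import norm
>>> float(norm.cdf(0.0))     # the standard normal is symmetric about 0
0.5
>>> float(norm.ppf(0.5))     # ppf inverts cdf
0.0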
Continuous distributions
========================
.. autosummary::
:toctree: generated/
norm
alpha
anglit
arcsine
beta
betaprime
bradford
burr
fisk
cauchy
chi
chi2
cosine
dgamma
dweibull
erlang
expon
exponweib
exponpow
fatiguelife
foldcauchy
f
foldnorm
frechet_r
frechet_l
genlogistic
genpareto
genexpon
genextreme
gausshyper
gamma
gengamma
genhalflogistic
gompertz
gumbel_r
gumbel_l
halfcauchy
halflogistic
halfnorm
hypsecant
invgamma
invnorm
invweibull
johnsonsb
johnsonsu
laplace
logistic
loggamma
loglaplace
lognorm
gilbrat
lomax
maxwell
mielke
nakagami
ncx2
ncf
t
nct
pareto
powerlaw
powerlognorm
powernorm
rdist
reciprocal
rayleigh
rice
recipinvgauss
semicircular
triang
truncexpon
truncnorm
tukeylambda
uniform
vonmises
wald
weibull_min
weibull_max
wrapcauchy
ksone
kstwobign
Discrete distributions
======================
.. autosummary::
:toctree: generated/
binom
bernoulli
nbinom
geom
hypergeom
logser
poisson
planck
boltzmann
randint
zipf
dlaplace
Statistical functions
=====================
Several of these functions have similar versions in scipy.stats.mstats
which work on masked arrays.
.. autosummary::
:toctree: generated/
gmean
hmean
mean
cmedian
median
mode
tmean
tvar
tmin
tmax
tstd
tsem
moment
variation
skew
kurtosis
describe
skewtest
kurtosistest
normaltest
.. autosummary::
:toctree: generated/
itemfreq
scoreatpercentile
percentileofscore
histogram2
histogram
cumfreq
relfreq
.. autosummary::
:toctree: generated/
obrientransform
samplevar
samplestd
signaltonoise
bayes_mvs
var
std
stderr
sem
z
zs
zmap
.. autosummary::
:toctree: generated/
threshold
trimboth
trim1
cov
corrcoef
.. autosummary::
:toctree: generated/
f_oneway
pearsonr
spearmanr
pointbiserialr
kendalltau
linregress
.. autosummary::
:toctree: generated/
ttest_1samp
ttest_ind
ttest_rel
kstest
chisquare
ks_2samp
mannwhitneyu
tiecorrect
ranksums
wilcoxon
kruskal
friedmanchisquare
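For example, a one-sample t-test sketch (the sample below has mean exactly 2.0, so the null ``popmean=2.0`` cannot be rejected):
>>> from scipy.stats import ttest_1samp
>>> t, p = ttest_1samp([2.1, 1.9, 2.0, 2.2, 1.8], 2.0)
>>> p > 0.05                 # no evidence the mean differs from 2.0
True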
.. autosummary::
:toctree: generated/
ansari
bartlett
levene
shapiro
anderson
binom_test
fligner
mood
oneway
.. autosummary::
:toctree: generated/
glm
anova
Plot-tests
==========
.. autosummary::
:toctree: generated/
probplot
ppcc_max
ppcc_plot
Masked statistics functions
===========================
.. toctree::
stats.mstats
Univariate and multivariate kernel density estimation (:mod:`scipy.stats.kde`)
==============================================================================
.. autosummary::
:toctree: generated/
gaussian_kde
For many more statistics-related functions, install the software R and
the interface package rpy.


@ -0,0 +1,302 @@
Basic functions in Numpy (and top-level scipy)
==============================================
.. sectionauthor:: Travis E. Oliphant
.. currentmodule:: numpy
.. contents::
Interaction with Numpy
------------------------
To begin with, all of the Numpy functions have been subsumed into the
:mod:`scipy` namespace so that all of those functions are available
without additionally importing Numpy. In addition, the universal
functions (addition, subtraction, division) have been altered to not
raise exceptions if floating-point errors are encountered; instead,
NaN's and Inf's are returned in the arrays. To assist in detection of
these events, several functions (:func:`sp.isnan`, :func:`sp.isfinite`,
:func:`sp.isinf`) are available.
Finally, some of the basic functions like log, sqrt, and inverse trig
functions have been modified to return complex numbers instead of
NaN's where appropriate (*i.e.* ``sp.sqrt(-1)`` returns ``1j``).
Top-level scipy routines
------------------------
The purpose of the top level of scipy is to collect general-purpose
routines that the other sub-packages can use and to provide a simple
replacement for Numpy. Anytime you might think to import Numpy, you
can import scipy instead and remove yourself from direct dependence on
Numpy. These routines are divided into several files for
organizational purposes, but they are all available under the numpy
namespace (and the scipy namespace). There are routines for type
handling and type checking, shape and matrix manipulation, polynomial
processing, and other useful functions. Rather than giving a detailed
description of each of these functions (which is available in the
Numpy Reference Guide or by using the :func:`help`, :func:`info` and
:func:`source` commands), this tutorial will discuss some of the more
useful commands, which require a little introduction to be used to their
full potential.
Type handling
^^^^^^^^^^^^^
Note the difference between :func:`sp.iscomplex`/:func:`sp.isreal` and
:func:`sp.iscomplexobj`/:func:`sp.isrealobj`. The former command is
array based and returns byte arrays of ones and zeros providing the
result of the element-wise test. The latter command is object based
and returns a scalar describing the result of the test on the entire
object.
Often it is required to get just the real and/or imaginary part of a
complex number. While complex numbers and arrays have attributes that
return those values, if one is not sure whether or not the object will
be complex-valued, it is better to use the functional forms
:func:`sp.real` and :func:`sp.imag` . These functions succeed for anything
that can be turned into a Numpy array. Consider also the function
:func:`sp.real_if_close` which transforms a complex-valued number with
tiny imaginary part into a real number.
Occasionally the need to check whether or not a number is a scalar
(Python (long)int, Python float, Python complex, or rank-0 array)
occurs in coding. This functionality is provided in the convenient
function :func:`sp.isscalar` which returns a 1 or a 0.
Finally, ensuring that objects are a certain Numpy type occurs often
enough that it has been given a convenient interface in SciPy through
the use of the :obj:`sp.cast` dictionary. The dictionary is keyed by the
type it is desired to cast to and the dictionary stores functions to
perform the casting. Thus, ``sp.cast['f'](d)`` returns an array
of :class:`sp.float32` from *d*. This function is also useful as an easy
way to get a scalar of a certain type::
>>> sp.cast['f'](sp.pi)
array(3.1415927410125732, dtype=float32)
Index Tricks
^^^^^^^^^^^^
There are some class instances that make special use of the slicing
functionality to provide efficient means for array construction. This
part will discuss the operation of :obj:`sp.mgrid` , :obj:`sp.ogrid` ,
:obj:`sp.r_` , and :obj:`sp.c_` for quickly constructing arrays.
One familiar with Matlab may complain that it is difficult to
construct arrays from the interactive session with Python. Suppose,
for example, that one wants to construct an array that begins with 3
followed by 5 zeros and then contains 10 numbers spanning the range -1
to 1 (inclusive on both ends). Before SciPy, you would need to enter
something like the following
>>> concatenate(([3],[0]*5,arange(-1,1.002,2/9.0)))
With the :obj:`r_` command one can enter this as
>>> r_[3,[0]*5,-1:1:10j]
which can ease typing and make for more readable code. Notice how
objects are concatenated, and the slicing syntax is (ab)used to
construct ranges. The other term that deserves a little explanation is
the use of the complex number 10j as the step size in the slicing
syntax. This non-standard use allows the number to be interpreted as
the number of points to produce in the range rather than as a step
size (note we would have used the long integer notation, 10L, but this
notation may go away in Python as the integers become unified). This
non-standard usage may be unsightly to some, but it gives the user the
ability to quickly construct complicated vectors in a very readable
fashion. When the number of points is specified in this way, the end-
point is inclusive.
The "r" stands for row concatenation because if the objects between
commas are 2 dimensional arrays, they are stacked by rows (and thus
must have commensurate columns). There is an equivalent command
:obj:`c_` that stacks 2d arrays by columns but works identically to
:obj:`r_` for 1d arrays.
Another very useful class instance which makes use of extended slicing
notation is the function :obj:`mgrid`. In the simplest case, this
function can be used to construct 1d ranges as a convenient substitute
for arange. It also allows the use of complex-numbers in the step-size
to indicate the number of points to place between the (inclusive)
end-points. The real purpose of this function however is to produce N,
N-d arrays which provide coordinate arrays for an N-dimensional
volume. The easiest way to understand this is with an example of its
usage:
>>> mgrid[0:5,0:5]
array([[[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1],
        [2, 2, 2, 2, 2],
        [3, 3, 3, 3, 3],
        [4, 4, 4, 4, 4]],

       [[0, 1, 2, 3, 4],
        [0, 1, 2, 3, 4],
        [0, 1, 2, 3, 4],
        [0, 1, 2, 3, 4],
        [0, 1, 2, 3, 4]]])
>>> mgrid[0:5:4j,0:5:4j]
array([[[ 0.    ,  0.    ,  0.    ,  0.    ],
        [ 1.6667,  1.6667,  1.6667,  1.6667],
        [ 3.3333,  3.3333,  3.3333,  3.3333],
        [ 5.    ,  5.    ,  5.    ,  5.    ]],

       [[ 0.    ,  1.6667,  3.3333,  5.    ],
        [ 0.    ,  1.6667,  3.3333,  5.    ],
        [ 0.    ,  1.6667,  3.3333,  5.    ],
        [ 0.    ,  1.6667,  3.3333,  5.    ]]])
Having meshed arrays like this is sometimes very useful. However, it
is not always needed just to evaluate some N-dimensional function over
a grid due to the array-broadcasting rules of Numpy and SciPy. If this
is the only purpose for generating a meshgrid, you should instead use
the function :obj:`ogrid` which generates an "open" grid using NewAxis
judiciously to create N, N-d arrays where only one dimension in each
array has length greater than 1. This will save memory and create the
same result if the only purpose for the meshgrid is to generate sample
points for evaluation of an N-d function.
Shape manipulation
^^^^^^^^^^^^^^^^^^
In this category of functions are routines for squeezing out length-
one dimensions from N-dimensional arrays, ensuring that an array is at
least 1-, 2-, or 3-dimensional, and stacking (concatenating) arrays by
rows, columns, and "pages" (in the third dimension). Routines for
splitting arrays (roughly the opposite of stacking arrays) are also
available.
Polynomials
^^^^^^^^^^^
There are two (interchangeable) ways to deal with 1-d polynomials in
SciPy. The first is to use the :class:`poly1d` class from Numpy. This
class accepts coefficients or polynomial roots to initialize a
polynomial. The polynomial object can then be manipulated in algebraic
expressions, integrated, differentiated, and evaluated. It even prints
like a polynomial:
>>> p = poly1d([3,4,5])
>>> print p
   2
3 x + 4 x + 5
>>> print p*p
   4      3      2
9 x + 24 x + 46 x + 40 x + 25
>>> print p.integ(k=6)
 3     2
x + 2 x + 5 x + 6
>>> print p.deriv()
6 x + 4
>>> p([4,5])
array([ 69, 100])
The other way to handle polynomials is as an array of coefficients
with the first element of the array giving the coefficient of the
highest power. There are explicit functions to add, subtract,
multiply, divide, integrate, differentiate, and evaluate polynomials
represented as sequences of coefficients.
Vectorizing functions (vectorize)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
One of the features that NumPy provides is a class :obj:`vectorize` to
convert an ordinary Python function which accepts scalars and returns
scalars into a "vectorized-function" with the same broadcasting rules
as other Numpy functions (*i.e.* the Universal functions, or
ufuncs). For example, suppose you have a Python function named
:obj:`addsubtract` defined as:
>>> def addsubtract(a,b):
... if a > b:
... return a - b
... else:
... return a + b
which defines a function of two scalar variables and returns a scalar
result. The class vectorize can be used to "vectorize "this function so that ::
>>> vec_addsubtract = vectorize(addsubtract)
returns a function which takes array arguments and returns an array
result:
>>> vec_addsubtract([0,3,6,9],[1,3,5,7])
array([1, 6, 1, 2])
result. The class vectorize can be used to "vectorize" this function so that ::
>>> vec_addsubtract = vectorize(addsubtract)
returns a function which takes array arguments and returns an array
result:
>>> vec_addsubtract([0,3,6,9],[1,3,5,7])
array([1, 6, 1, 2])
This particular function could have been written in vector form
without the use of :obj:`vectorize`. But what if the function you have
written is the result of some optimization or integration routine?
Such functions can likely only be vectorized using ``vectorize``.
Other useful functions
^^^^^^^^^^^^^^^^^^^^^^
There are several other functions in the scipy_base package including
most of the other functions that are also in the Numpy package. The
reason for duplicating these functions is to allow SciPy to
potentially alter their original interface and to make it easier for
users to access them all with a single import:
>>> from scipy import *
Functions which should be mentioned are :obj:`mod(x,y)`, which can
replace ``x % y`` when it is desired that the result take the sign of
*y* instead of *x*. Also included is :obj:`fix`, which always rounds
to the nearest integer towards zero. For doing phase processing, the
functions :func:`angle` and :obj:`unwrap` are also useful. Also, the
:obj:`linspace` and :obj:`logspace` functions return equally spaced samples
in a linear or log scale. Finally, it's useful to be aware of the indexing
capabilities of Numpy. Mention should be made of the new
function :obj:`select` which extends the functionality of :obj:`where` to
include multiple conditions and multiple choices. The calling
convention is ``select(condlist, choicelist, default=0)``. :obj:`select` is
a vectorized form of the multiple if-statement. It allows rapid
construction of a function which returns an array of results based on
a list of conditions. Each element of the return array is taken from
the array in ``choicelist`` corresponding to the first condition in
``condlist`` that is true. For example
>>> x = r_[-2:3]
>>> x
array([-2, -1, 0, 1, 2])
>>> select([x > 3, x >= 0],[0,x+2])
array([0, 0, 2, 3, 4])
Common functions
----------------
Some functions depend on sub-packages of SciPy but should be available
from the top-level of SciPy due to their common use. These are
functions that might have been placed in scipy_base except for their
dependence on other sub-packages of SciPy. For example the
:obj:`factorial` and :obj:`comb` functions compute :math:`n!` and
:math:`n!/k!(n-k)!` using either exact integer arithmetic (thanks to
Python's Long integer object), or by using floating-point precision
and the gamma function. The functions :obj:`rand` and :obj:`randn`
are used so often that they warranted a place at the top level. There
are convenience functions for interactive use: :obj:`disp`
(similar to print) and :obj:`who` (returns a list of defined
variables along with an upper bound on their memory consumption). Another
function returns a common image used in image processing: :obj:`lena`.
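For example (a small sketch; the ``exact`` keyword switches between exact integer arithmetic and the floating-point gamma-function evaluation described above):
>>> from scipy import comb, factorial
>>> n_fact = factorial(5, exact=1)    # 120, computed with exact integer arithmetic
>>> n_choose_k = comb(5, 2, exact=1)  # 10, i.e. 5!/(2!*3!)
>>> approx = comb(5, 2)               # floating-point result via the gamma function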
Finally, two functions are provided that are useful for approximating
derivatives of functions using discrete differences. The function
:obj:`central_diff_weights` returns weighting coefficients for an
equally-spaced :math:`N`-point approximation to the derivative of
order *o*. These weights must be multiplied by the function values at
the sample points and the results summed to obtain the
derivative approximation. This function is intended for use when only
samples of the function are available. When the function is an object
that can be handed to a routine and evaluated, the function
:obj:`derivative` can be used to automatically evaluate the object at
the correct points to obtain an N-point approximation to the *o*-th
derivative at a given point.
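For instance, a minimal sketch of the function-object case (assuming :obj:`derivative` accepts the point of evaluation and a sample spacing ``dx``, defaulting to a 3-point first derivative):
>>> from scipy import derivative
>>> def f(x):
...     return x**3 + x**2
>>> d = derivative(f, 1.0, dx=1e-6)   # 3-point estimate of f'(1); the exact value is 5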

@@ -0,0 +1,55 @@
>>> sp.info(optimize.fmin)
fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None)
Minimize a function using the downhill simplex algorithm.
:Parameters:
func : callable func(x,*args)
The objective function to be minimized.
x0 : ndarray
Initial guess.
args : tuple
Extra arguments passed to func, i.e. ``f(x,*args)``.
callback : callable
Called after each iteration, as callback(xk), where xk is the
current parameter vector.
:Returns: (xopt, {fopt, iter, funcalls, warnflag})
xopt : ndarray
Parameter that minimizes function.
fopt : float
Value of function at minimum: ``fopt = func(xopt)``.
iter : int
Number of iterations performed.
funcalls : int
Number of function calls made.
warnflag : int
1 : Maximum number of function evaluations made.
2 : Maximum number of iterations reached.
allvecs : list
Solution at each iteration.
*Other Parameters*:
xtol : float
Relative error in xopt acceptable for convergence.
ftol : number
Relative error in func(xopt) acceptable for convergence.
maxiter : int
Maximum number of iterations to perform.
maxfun : number
Maximum number of function evaluations to make.
full_output : bool
Set to True if fval and warnflag outputs are desired.
disp : bool
Set to True to print convergence messages.
retall : bool
Set to True to return list of solutions at each iteration.
:Notes:
Uses a Nelder-Mead simplex algorithm to find the minimum of a
function of one or more variables.

@@ -0,0 +1,25 @@
>>> help(integrate)
Methods for Integrating Functions given function object.
quad -- General purpose integration.
dblquad -- General purpose double integration.
tplquad -- General purpose triple integration.
fixed_quad -- Integrate func(x) using Gaussian quadrature of order n.
quadrature -- Integrate with given tolerance using Gaussian quadrature.
romberg -- Integrate func using Romberg integration.
Methods for Integrating Functions given fixed samples.
trapz -- Use trapezoidal rule to compute integral from samples.
cumtrapz -- Use trapezoidal rule to cumulatively compute integral.
simps -- Use Simpson's rule to compute integral from samples.
romb -- Use Romberg Integration to compute integral from
(2**k + 1) evenly-spaced samples.
See the special module's orthogonal polynomials (special) for Gaussian
quadrature roots and weights for other weighting factors and regions.
Interface to numerical integrators of ODE systems.
odeint -- General integration of ordinary differential equations.
ode -- Integrate ODE using VODE and ZVODE routines.

@@ -0,0 +1,91 @@
from scipy import optimize
>>> info(optimize)
Optimization Tools
==================
A collection of general-purpose optimization routines.
fmin -- Nelder-Mead Simplex algorithm
(uses only function calls)
fmin_powell -- Powell's (modified) direction set method (uses only
function calls)
fmin_cg -- Non-linear (Polak-Ribiere) conjugate gradient algorithm
(can use function and gradient).
fmin_bfgs -- Quasi-Newton method (Broyden-Fletcher-Goldfarb-Shanno);
(can use function and gradient)
fmin_ncg -- Line-search Newton Conjugate Gradient (can use
function, gradient and Hessian).
leastsq -- Minimize the sum of squares of M equations in
N unknowns given a starting estimate.
Constrained Optimizers (multivariate)
fmin_l_bfgs_b -- Zhu, Byrd, and Nocedal's L-BFGS-B constrained optimizer
(if you use this please quote their papers -- see help)
fmin_tnc -- Truncated Newton Code originally written by Stephen Nash and
adapted to C by Jean-Sebastien Roy.
fmin_cobyla -- Constrained Optimization BY Linear Approximation
Global Optimizers
anneal -- Simulated Annealing
brute -- Brute force searching optimizer
Scalar function minimizers
fminbound -- Bounded minimization of a scalar function.
brent -- 1-D function minimization using Brent method.
golden -- 1-D function minimization using Golden Section method
bracket -- Bracket a minimum (given two starting points)
Also a collection of general-purpose root-finding routines.
fsolve -- Non-linear multi-variable equation solver.
Scalar function solvers
brentq -- quadratic interpolation Brent method
brenth -- Brent method (modified by Harris with hyperbolic
extrapolation)
ridder -- Ridder's method
bisect -- Bisection method
newton -- Secant method or Newton's method
fixed_point -- Single-variable fixed-point solver.
A collection of general-purpose nonlinear multidimensional solvers.
broyden1 -- Broyden's first method - is a quasi-Newton-Raphson
method for updating an approximate Jacobian and then
inverting it
broyden2 -- Broyden's second method - the same as broyden1, but
updates the inverse Jacobian directly
broyden3 -- Broyden's second method - the same as broyden2, but
instead of directly computing the inverse Jacobian,
it remembers how to construct it using vectors, and
when computing inv(J)*F, it uses those vectors to
compute this product, thus avoiding the expensive NxN
matrix multiplication.
broyden_generalized -- Generalized Broyden's method, the same as broyden2,
but instead of approximating the full NxN Jacobian,
it constructs it at every iteration in a way that
avoids the NxN matrix multiplication. This is not
as precise as broyden3.
anderson -- extended Anderson method, the same as
broyden_generalized, but adds w_0^2*I before
taking the inverse to improve stability
anderson2 -- the Anderson method, the same as anderson, but
formulated differently
Utility Functions
line_search -- Return a step that satisfies the strong Wolfe conditions.
check_grad -- Check the supplied derivative using finite difference
techniques.

@@ -0,0 +1,45 @@
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
npoints = 20 # number of integer support points of the distribution minus 1
npointsh = npoints / 2
npointsf = float(npoints)
nbound = 4 #bounds for the truncated normal
normbound = (1 + 1 / npointsf) * nbound #actual bounds of truncated normal
grid = np.arange(-npointsh, npointsh+2, 1) #integer grid
gridlimitsnorm = (grid-0.5) / npointsh * nbound #bin limits for the truncnorm
gridlimits = grid - 0.5
grid = grid[:-1]
probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound))
gridint = grid
normdiscrete = stats.rv_discrete(
values=(gridint, np.round(probs, decimals=7)),
name='normdiscrete')
n_sample = 500
np.random.seed(87655678) #fix the seed for replicability
rvs = normdiscrete.rvs(size=n_sample)
rvsnd = rvs
f, l = np.histogram(rvs, bins=gridlimits)
sfreq = np.vstack([gridint, f, probs*n_sample]).T
fs = sfreq[:,1] / float(n_sample)
ft = sfreq[:,2] / float(n_sample)
nd_std = np.sqrt(normdiscrete.stats(moments='v'))
ind = gridint # the x locations for the groups
width = 0.35 # the width of the bars
plt.subplot(111)
rects1 = plt.bar(ind, ft, width, color='b')
rects2 = plt.bar(ind+width, fs, width, color='r')
normline = plt.plot(ind+width/2.0, stats.norm.pdf(ind, scale=nd_std),
color='b')
plt.ylabel('Frequency')
plt.title('Frequency and Probability of normdiscrete')
plt.xticks(ind+width, ind)
plt.legend((rects1[0], rects2[0]), ('true', 'sample'))
plt.show()

@@ -0,0 +1,48 @@
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
npoints = 20 # number of integer support points of the distribution minus 1
npointsh = npoints / 2
npointsf = float(npoints)
nbound = 4 #bounds for the truncated normal
normbound = (1 + 1 / npointsf) * nbound #actual bounds of truncated normal
grid = np.arange(-npointsh, npointsh+2,1) #integer grid
gridlimitsnorm = (grid - 0.5) / npointsh * nbound #bin limits for the truncnorm
gridlimits = grid - 0.5
grid = grid[:-1]
probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound))
gridint = grid
normdiscrete = stats.rv_discrete(
values=(gridint, np.round(probs, decimals=7)),
name='normdiscrete')
n_sample = 500
np.random.seed(87655678) #fix the seed for replicability
rvs = normdiscrete.rvs(size=n_sample)
rvsnd = rvs
f, l = np.histogram(rvs, bins=gridlimits)
sfreq = np.vstack([gridint, f, probs*n_sample]).T
fs = sfreq[:,1].cumsum() / float(n_sample)   # cumulative sample frequencies
ft = sfreq[:,2].cumsum() / float(n_sample)   # cumulative theoretical probabilities
nd_std = np.sqrt(normdiscrete.stats(moments='v'))
ind = gridint # the x locations for the groups
width = 0.35 # the width of the bars
plt.figure()
plt.subplot(111)
rects1 = plt.bar(ind, ft, width, color='b')
rects2 = plt.bar(ind+width, fs, width, color='r')
normline = plt.plot(ind+width/2.0, stats.norm.cdf(ind+0.5,scale=nd_std),
color='b')
plt.ylabel('cdf')
plt.title('Cumulative Frequency and CDF of normdiscrete')
plt.xticks(ind+width, ind)
plt.legend((rects1[0], rects2[0]), ('true', 'sample'))
plt.show()

@@ -0,0 +1,145 @@
Fourier Transforms (:mod:`scipy.fftpack`)
=========================================
.. sectionauthor:: Scipy Developers
.. currentmodule:: scipy.fftpack
.. warning::
This is currently a stub page
.. contents::
Fourier analysis is fundamentally a method for expressing a function as a
sum of periodic components, and for recovering the signal from those
components. When both the function and its Fourier transform are
replaced with discretized counterparts, it is called the discrete Fourier
transform (DFT). The DFT has become a mainstay of numerical computing in
part because of a very fast algorithm for computing it, called the Fast
Fourier Transform (FFT), which was known to Gauss (1805) and was brought
to light in its current form by Cooley and Tukey [CT]_. Press et al. [NR]_
provide an accessible introduction to Fourier analysis and its
applications.
Fast Fourier transforms
-----------------------
One dimensional discrete Fourier transforms
-------------------------------------------
fft, ifft, rfft, irfft
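As a minimal sketch while this section remains a stub (only the basic :func:`fft`/:func:`ifft` round trip is assumed):
>>> import numpy as np
>>> from scipy.fftpack import fft, ifft
>>> x = np.array([1.0, 2.0, 1.0, -1.0, 1.5])
>>> X = fft(x)                  # forward transform
>>> np.allclose(ifft(X), x)     # the inverse transform recovers the signal
True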
Two and n dimensional discrete Fourier transforms
-------------------------------------------------
fft in more than one dimension
Discrete Cosine Transforms
--------------------------
Return the Discrete Cosine Transform [Mak]_ of an arbitrary type sequence ``x``.
For a one-dimensional array ``x``, ``dct(x, norm='ortho')`` is equal to
MATLAB ``dct(x)``.
There are theoretically 8 types of the DCT [WP]_, of which only the first
3 types are implemented in scipy. 'The' DCT generally refers to DCT type 2,
and 'the' Inverse DCT generally refers to DCT type 3.
type I
~~~~~~
There are several definitions of the DCT-I; we use the following
(for ``norm=None``):
.. math::
:nowrap:
\[ y_k = x_0 + (-1)^k x_{N-1} + 2\sum_{n=1}^{N-2} x_n
\cos\left({\pi nk\over N-1}\right),
\qquad 0 \le k < N. \]
Only ``None`` is supported as the normalization mode for DCT-I. Note also
that the DCT-I is only supported for input size > 1.
type II
~~~~~~~
There are several definitions of the DCT-II; we use the following
(for ``norm=None``):
.. math::
:nowrap:
\[ y_k = 2 \sum_{n=0}^{N-1} x_n
\cos \left({\pi(2n+1)k \over 2N} \right)
\qquad 0 \le k < N.\]
If ``norm='ortho'``, :math:`y_k` is multiplied by a scaling factor `f`:
.. math::
:nowrap:
\[f = \begin{cases} \sqrt{1/(4N)}, & \text{if $k = 0$} \\
\sqrt{1/(2N)}, & \text{otherwise} \end{cases} \]
This scaling makes the corresponding matrix of coefficients orthonormal
(`OO' = Id`).
type III
~~~~~~~~
There are several definitions; we use the following
(for ``norm=None``):
.. math::
:nowrap:
\[ y_k = x_0 + 2 \sum_{n=1}^{N-1} x_n
\cos\left({\pi n(2k+1) \over 2N}\right)
\qquad 0 \le k < N,\]
or, for ``norm='ortho'``:
.. math::
:nowrap:
\[ y_k = {x_0\over\sqrt{N}} + {1\over\sqrt{N}} \sum_{n=1}^{N-1}
x_n \cos\left({\pi n(2k+1) \over 2N}\right)
\qquad 0 \le k < N.\]
The (unnormalized) DCT-III is the inverse of the (unnormalized) DCT-II, up
to a factor `2N`. The orthonormalized DCT-III is exactly the inverse of the
orthonormalized DCT-II.
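A brief check of this inverse relationship (a sketch, assuming the ``norm`` keyword of :func:`dct` and :func:`idct` as described above):
>>> import numpy as np
>>> from scipy.fftpack import dct, idct
>>> x = np.array([1.0, 2.0, 1.0, -1.0, 1.5])
>>> y = dct(x, norm='ortho')               # orthonormalized DCT-II
>>> np.allclose(idct(y, norm='ortho'), x)  # orthonormalized DCT-III inverts it
True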
References
~~~~~~~~~~
.. [CT] Cooley, James W., and John W. Tukey, 1965, "An algorithm for the
machine calculation of complex Fourier series," *Math. Comput.*
19: 297-301.
.. [NR] Press, W., Teukolsky, S., Vetterline, W.T., and Flannery, B.P.,
2007, *Numerical Recipes: The Art of Scientific Computing*, ch.
12-13. Cambridge Univ. Press, Cambridge, UK.
.. [Mak] J. Makhoul, 1980, 'A Fast Cosine Transform in One and Two Dimensions',
`IEEE Transactions on acoustics, speech and signal processing`
vol. 28(1), pp. 27-34, http://dx.doi.org/10.1109/TASSP.1980.1163351
.. [WP] http://en.wikipedia.org/wiki/Discrete_cosine_transform
FFT convolution
---------------
scipy.fftpack.convolve performs a convolution of two one-dimensional
arrays in the frequency domain.

@@ -0,0 +1,129 @@
============
Introduction
============
.. contents::
SciPy is a collection of mathematical algorithms and convenience
functions built on the Numpy extension for Python. It adds
significant power to the interactive Python session by exposing the
user to high-level commands and classes for the manipulation and
visualization of data. With SciPy, an interactive Python session
becomes a data-processing and system-prototyping environment rivaling
systems such as Matlab, IDL, Octave, R-Lab, and SciLab.
The additional power of using SciPy within Python, however, is that a
powerful programming language is also available for use in developing
sophisticated programs and specialized applications. Scientific
applications written in SciPy benefit from the development of
additional modules in numerous niches of the software landscape by
developers across the world. Everything from parallel programming to
web and database subroutines and classes has been made available to
the Python programmer. All of this power is available in addition to
the mathematical libraries in SciPy.
This document provides a tutorial for the first-time user of SciPy to
help get started with some of the features available in this powerful
package. It is assumed that the user has already installed the
package. Some general familiarity with Python is also assumed, such as
could be acquired by working through the Tutorial in the Python distribution.
For further introductory help the user is directed to the Numpy
documentation.
For brevity and convenience, we will often assume that the main
packages (numpy, scipy, and matplotlib) have been imported as::
>>> import numpy as np
>>> import scipy as sp
>>> import matplotlib as mpl
>>> import matplotlib.pyplot as plt
These are the import conventions that our community has adopted
after discussion on public mailing lists. You will see these
conventions used throughout NumPy and SciPy source code and
documentation. While we obviously don't require you to follow
these conventions in your own code, it is highly recommended.
SciPy Organization
------------------
SciPy is organized into subpackages covering different scientific
computing domains. These are summarized in the following table:
.. currentmodule:: scipy
================== ======================================================
Subpackage Description
================== ======================================================
:mod:`cluster` Clustering algorithms
:mod:`constants` Physical and mathematical constants
:mod:`fftpack` Fast Fourier Transform routines
:mod:`integrate` Integration and ordinary differential equation solvers
:mod:`interpolate` Interpolation and smoothing splines
:mod:`io` Input and Output
:mod:`linalg` Linear algebra
:mod:`maxentropy` Maximum entropy methods
:mod:`ndimage` N-dimensional image processing
:mod:`odr` Orthogonal distance regression
:mod:`optimize` Optimization and root-finding routines
:mod:`signal` Signal processing
:mod:`sparse` Sparse matrices and associated routines
:mod:`spatial` Spatial data structures and algorithms
:mod:`special` Special functions
:mod:`stats` Statistical distributions and functions
:mod:`weave` C/C++ integration
================== ======================================================
Scipy sub-packages need to be imported separately, for example::
>>> from scipy import linalg, optimize
Because of their ubiquity, some of the functions in these
subpackages are also made available in the scipy namespace to ease
their use in interactive sessions and programs. In addition, many
basic array functions from :mod:`numpy` are also available at the
top-level of the :mod:`scipy` package. Before looking at the
sub-packages individually, we will first look at some of these common
functions.
Finding Documentation
---------------------
Scipy and Numpy have HTML and PDF versions of their documentation
available at http://docs.scipy.org/, which currently details nearly
all available functionality. However, this documentation is still
work-in-progress, and some parts may be incomplete or sparse. As
we are a volunteer organization and depend on the community for
growth, your participation - everything from providing feedback to
improving the documentation and code - is welcome and actively
encouraged.
Python also provides the facility of documentation strings. The
functions and classes available in SciPy use this method for on-line
documentation. There are two methods for reading these messages and
getting help. Python provides the command :func:`help` in the pydoc
module. Entering this command with no arguments (i.e. ``>>> help``)
launches an interactive help session that allows searching through the
keywords and modules available to all of Python. Running the command
help with an object as the argument displays the calling signature
and the documentation string of the object.
The pydoc method of help is sophisticated but uses a pager to display
the text. Sometimes this can interfere with the terminal within which you
are running the interactive session. A scipy-specific help system
is also available under the command ``sp.info``. The signature and
documentation string for the object passed to ``sp.info`` are
printed to standard output (or to a writeable object passed as the
third argument). The second keyword argument of ``sp.info`` defines
the maximum width of the line for printing. If a module is passed as
the argument, then a list of the functions and classes defined in
that module is printed. For example:
.. literalinclude:: examples/1-1
Another useful command is :func:`source`. When given a function
written in Python as an argument, it prints out a listing of the
source code for that function. This can be helpful in learning about
an algorithm or understanding exactly what a function is doing with
its arguments. Also don't forget about the Python command ``dir``
which can be used to look at the namespace of a module or package.
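A quick sketch of these helpers in an interactive session (``sp.info`` and ``source`` are assumed to be exposed as described above):
>>> import scipy as sp
>>> from scipy import linalg
>>> sp.info(linalg.det)      # docstring printed to standard output, no pager
>>> sp.source(linalg.det)    # listing of the Python source, when available
>>> dir(linalg)              # names defined in the subpackage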

@@ -0,0 +1,22 @@
**************
SciPy Tutorial
**************
.. sectionauthor:: Travis E. Oliphant
.. toctree::
:maxdepth: 1
general
basic
special
integrate
optimize
interpolate
fftpack
signal
linalg
stats
ndimage
io
weave

@@ -0,0 +1,280 @@
Integration (:mod:`scipy.integrate`)
====================================
.. sectionauthor:: Travis E. Oliphant
.. currentmodule:: scipy.integrate
The :mod:`scipy.integrate` sub-package provides several integration
techniques including an ordinary differential equation integrator. An
overview of the module is provided by the help command:
.. literalinclude:: examples/4-1
General integration (:func:`quad`)
----------------------------------
The function :obj:`quad` is provided to integrate a function of one
variable between two points. The points can be :math:`\pm\infty`
(:math:`\pm` ``inf``) to indicate infinite limits. For example,
suppose you wish to integrate a bessel function ``jv(2.5,x)`` along
the interval :math:`[0,4.5].`
.. math::
:nowrap:
\[ I=\int_{0}^{4.5}J_{2.5}\left(x\right)\, dx.\]
This could be computed using :obj:`quad`:
>>> result = integrate.quad(lambda x: special.jv(2.5,x), 0, 4.5)
>>> print result
(1.1178179380783249, 7.8663172481899801e-09)
>>> I = sqrt(2/pi)*(18.0/27*sqrt(2)*cos(4.5)-4.0/27*sqrt(2)*sin(4.5)+
sqrt(2*pi)*special.fresnel(3/sqrt(pi))[0])
>>> print I
1.117817938088701
>>> print abs(result[0]-I)
1.03761443881e-11
The first argument to quad is a "callable" Python object (*i.e.*, a
function, method, or class instance). Notice the use of a lambda
function in this case as the argument. The next two arguments are the
limits of integration. The return value is a tuple, with the first
element holding the estimated value of the integral and the second
element holding an upper bound on the error. Notice that in this
case, the true value of the integral is
.. math::
:nowrap:
\[ I=\sqrt{\frac{2}{\pi}}\left(\frac{18}{27}\sqrt{2}\cos\left(4.5\right)-\frac{4}{27}\sqrt{2}\sin\left(4.5\right)+\sqrt{2\pi}\textrm{Si}\left(\frac{3}{\sqrt{\pi}}\right)\right),\]
where
.. math::
:nowrap:
\[ \textrm{Si}\left(x\right)=\int_{0}^{x}\sin\left(\frac{\pi}{2}t^{2}\right)\, dt\]
is the Fresnel sine integral. Note that the numerically computed
integral is within :math:`1.04\times10^{-11}` of the exact result --- well below the reported error bound.
Infinite inputs are also allowed in :obj:`quad` by using :math:`\pm`
``inf`` as one of the arguments. For example, suppose that a numerical
value for the exponential integral:
.. math::
:nowrap:
\[ E_{n}\left(x\right)=\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\, dt\]
is desired (and the fact that this integral can be computed as
``special.expn(n,x)`` is forgotten). The functionality of the function
:obj:`special.expn` can be replicated by defining a new function
:obj:`vec_expint` based on the routine :obj:`quad`:
>>> from scipy.integrate import quad
>>> def integrand(t,n,x):
... return exp(-x*t) / t**n
>>> def expint(n,x):
... return quad(integrand, 1, Inf, args=(n, x))[0]
>>> vec_expint = vectorize(expint)
>>> vec_expint(3,arange(1.0,4.0,0.5))
array([ 0.1097, 0.0567, 0.0301, 0.0163, 0.0089, 0.0049])
>>> special.expn(3,arange(1.0,4.0,0.5))
array([ 0.1097, 0.0567, 0.0301, 0.0163, 0.0089, 0.0049])
The function being integrated can even use :obj:`quad` itself
(though the reported error bound may then underestimate the true error,
due to the numerical error that the inner :obj:`quad` introduces into
the integrand). The integral in this case is
.. math::
:nowrap:
\[ I_{n}=\int_{0}^{\infty}\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\, dt\, dx=\frac{1}{n}.\]
>>> result = quad(lambda x: expint(3, x), 0, inf)
>>> print result
(0.33333333324560266, 2.8548934485373678e-09)
>>> I3 = 1.0/3.0
>>> print I3
0.333333333333
>>> print I3 - result[0]
8.77306560731e-11
This last example shows that multiple integration can be handled using
repeated calls to :func:`quad`. The mechanics of this for double and
triple integration have been wrapped up into the functions
:obj:`dblquad` and :obj:`tplquad`. The function :obj:`dblquad`
performs double integration. Use the help function to be sure that the
arguments are defined in the correct order. In addition, the limits on
all inner integrals are actually functions which can be constant
functions. An example of using double integration to compute several
values of :math:`I_{n}` is shown below:
>>> from scipy.integrate import quad, dblquad
>>> def I(n):
... return dblquad(lambda t, x: exp(-x*t)/t**n, 0, Inf, lambda x: 1, lambda x: Inf)
>>> print I(4)
(0.25000000000435768, 1.0518245707751597e-09)
>>> print I(3)
(0.33333333325010883, 2.8604069919261191e-09)
>>> print I(2)
(0.49999999999857514, 1.8855523253868967e-09)
Gaussian quadrature (:obj:`fixed_quad`, :obj:`quadrature`)
----------------------------------------------------------
A few functions are also provided in order to perform simple Gaussian
quadrature over a fixed interval. The first is :obj:`fixed_quad` which
performs fixed-order Gaussian quadrature. The second function is
:obj:`quadrature` which performs Gaussian quadrature of multiple
orders until the difference in the integral estimate is beneath some
tolerance supplied by the user. These functions both use the module
:mod:`special.orthogonal` which can calculate the roots and quadrature
weights of a large variety of orthogonal polynomials (the polynomials
themselves are available as special functions returning instances of
the polynomial class --- e.g. :obj:`special.legendre <scipy.special.legendre>`).
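A minimal sketch of the two interfaces (each is assumed to return the estimate paired with ``None`` or an error estimate):
>>> from scipy.integrate import fixed_quad, quadrature
>>> from numpy import cos
>>> val, _ = fixed_quad(cos, 0, 1, n=4)   # fixed 4-point Gaussian rule
>>> val2, err = quadrature(cos, 0, 1)     # increases the order until tolerances are met
>>> # both estimates should be very close to the exact value sin(1) ~= 0.8414709848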
Integrating using samples
-------------------------
There are three functions for computing integrals given only samples:
:obj:`trapz`, :obj:`simps`, and :obj:`romb`. The first two
functions use Newton-Cotes formulas of order 1 and 2 respectively to
perform integration. These two functions can handle
non-equally-spaced samples. The trapezoidal rule approximates the
function as a straight line between adjacent points, while Simpson's
rule approximates the function between three adjacent points as a
parabola.
If the samples are equally-spaced and the number of samples available
is :math:`2^{k}+1` for some integer :math:`k`, then Romberg
integration can be used to obtain high-precision estimates of the
integral using the available samples. Romberg integration uses the
trapezoid rule at step sizes related by a power of two and then
performs Richardson extrapolation on these estimates to approximate
the integral with a higher degree of accuracy. (A different interface
to Romberg integration, useful when the function itself can be provided,
is also available as :func:`romberg`.)
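The following sketch contrasts the three routines on samples of :math:`x^{2}`, with the sample count chosen as :math:`2^{3}+1` so that :obj:`romb` applies:
>>> import numpy as np
>>> from scipy.integrate import trapz, simps, romb
>>> x = np.linspace(0, 2, 9)          # 2**3 + 1 equally spaced samples
>>> y = x**2
>>> t = trapz(y, x)                   # 2.6875, slightly above the exact 8/3
>>> s = simps(y, x)                   # Simpson's rule is exact for a parabola
>>> r = romb(y, dx=x[1] - x[0])       # Romberg also recovers 8/3 here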
Ordinary differential equations (:func:`odeint`)
------------------------------------------------
Integrating a set of ordinary differential equations (ODEs) given
initial conditions is another useful example. The function
:obj:`odeint` is available in SciPy for integrating a first-order
vector differential equation:
.. math::
:nowrap:
\[ \frac{d\mathbf{y}}{dt}=\mathbf{f}\left(\mathbf{y},t\right),\]
given initial conditions :math:`\mathbf{y}\left(0\right)=y_{0}`, where
:math:`\mathbf{y}` is a length :math:`N` vector and :math:`\mathbf{f}`
is a mapping from :math:`\mathcal{R}^{N}` to :math:`\mathcal{R}^{N}.`
A higher-order ordinary differential equation can always be reduced to
a differential equation of this type by introducing intermediate
derivatives into the :math:`\mathbf{y}` vector.
For example, suppose it is desired to find the solution to the
following second-order differential equation:
.. math::
:nowrap:
\[ \frac{d^{2}w}{dz^{2}}-zw(z)=0\]
with initial conditions :math:`w\left(0\right)=\frac{1}{\sqrt[3]{3^{2}}\Gamma\left(\frac{2}{3}\right)}` and :math:`\left.\frac{dw}{dz}\right|_{z=0}=-\frac{1}{\sqrt[3]{3}\Gamma\left(\frac{1}{3}\right)}.` It is known that the solution to this differential equation with these
boundary conditions is the Airy function
.. math::
:nowrap:
\[ w=\textrm{Ai}\left(z\right),\]
which gives a means to check the integrator using :func:`special.airy <scipy.special.airy>`.
First, convert this ODE into standard form by setting
:math:`\mathbf{y}=\left[\frac{dw}{dz},w\right]` and :math:`t=z`. Thus,
the differential equation becomes
.. math::
:nowrap:
\[ \frac{d\mathbf{y}}{dt}=\left[\begin{array}{c} ty_{1}\\ y_{0}\end{array}\right]=\left[\begin{array}{cc} 0 & t\\ 1 & 0\end{array}\right]\left[\begin{array}{c} y_{0}\\ y_{1}\end{array}\right]=\left[\begin{array}{cc} 0 & t\\ 1 & 0\end{array}\right]\mathbf{y}.\]
In other words,
.. math::
:nowrap:
\[ \mathbf{f}\left(\mathbf{y},t\right)=\mathbf{A}\left(t\right)\mathbf{y}.\]
As an interesting reminder, if :math:`\mathbf{A}\left(t\right)`
commutes with :math:`\int_{0}^{t}\mathbf{A}\left(\tau\right)\, d\tau`
under matrix multiplication, then this linear differential equation
has an exact solution using the matrix exponential:
.. math::
:nowrap:
\[ \mathbf{y}\left(t\right)=\exp\left(\int_{0}^{t}\mathbf{A}\left(\tau\right)d\tau\right)\mathbf{y}\left(0\right),\]
However, in this case, :math:`\mathbf{A}\left(t\right)` and its integral do not commute.
There are many optional inputs and outputs available when using odeint
which can help tune the solver. These additional inputs and outputs
are not needed much of the time, however, and the three required input
arguments and the output solution suffice. The required inputs are the
function defining the derivative, *fprime*, the initial conditions
vector, *y0*, and the time points to obtain a solution, *t*, (with
the initial value point as the first element of this sequence). The
output to :obj:`odeint` is a matrix where each row contains the
solution vector at each requested time point (thus, the initial
conditions are given in the first output row).
The following example illustrates the use of odeint including the
usage of the *Dfun* option which allows the user to specify a gradient
(with respect to :math:`\mathbf{y}` ) of the function,
:math:`\mathbf{f}\left(\mathbf{y},t\right)`.
>>> from scipy.integrate import odeint
>>> from scipy.special import gamma, airy
>>> y1_0 = 1.0/3**(2.0/3.0)/gamma(2.0/3.0)
>>> y0_0 = -1.0/3**(1.0/3.0)/gamma(1.0/3.0)
>>> y0 = [y0_0, y1_0]
>>> def func(y, t):
... return [t*y[1],y[0]]
>>> def gradient(y,t):
... return [[0,t],[1,0]]
>>> x = arange(0,4.0, 0.01)
>>> t = x
>>> ychk = airy(x)[0]
>>> y = odeint(func, y0, t)
>>> y2 = odeint(func, y0, t, Dfun=gradient)
>>> print ychk[:36:6]
[ 0.355028 0.339511 0.324068 0.308763 0.293658 0.278806]
>>> print y[:36:6,1]
[ 0.355028 0.339511 0.324067 0.308763 0.293658 0.278806]
>>> print y2[:36:6,1]
[ 0.355028 0.339511 0.324067 0.308763 0.293658 0.278806]

@@ -0,0 +1,399 @@
Interpolation (:mod:`scipy.interpolate`)
========================================
.. sectionauthor:: Travis E. Oliphant
.. currentmodule:: scipy.interpolate
.. contents::
There are two general interpolation facilities available in SciPy. The
first facility is an interpolation class which performs linear
1-dimensional interpolation. The second facility is based on the
FORTRAN library FITPACK and provides functions for 1- and
2-dimensional (smoothed) cubic-spline interpolation. There are both
procedural and object-oriented interfaces for the FITPACK library.
Linear 1-d interpolation (:class:`interp1d`)
--------------------------------------------
The interp1d class in scipy.interpolate is a convenient method to
create a function based on fixed data points which can be evaluated
anywhere within the domain defined by the given data using linear
interpolation. An instance of this class is created by passing the 1-d
vectors comprising the data. The instance of this class defines a
__call__ method and can therefore be treated like a function which
interpolates between known data values to obtain unknown values (it
also has a docstring for help). Behavior at the boundary can be
specified at instantiation time. The following example demonstrates
its use.
.. plot::
>>> import numpy as np
>>> from scipy import interpolate
>>> x = np.arange(0,10)
>>> y = np.exp(-x/3.0)
>>> f = interpolate.interp1d(x, y)
>>> xnew = np.arange(0,9,0.1)
>>> import matplotlib.pyplot as plt
>>> plt.plot(x,y,'o',xnew,f(xnew),'-')
.. :caption: One-dimensional interpolation using the
.. class :obj:`interpolate.interp1d`
Spline interpolation in 1-d: Procedural (interpolate.splXXX)
------------------------------------------------------------
Spline interpolation requires two essential steps: (1) a spline
representation of the curve is computed, and (2) the spline is
evaluated at the desired points. In order to find the spline
representation, there are two different ways to represent a curve and
obtain (smoothing) spline coefficients: directly and parametrically.
The direct method finds the spline representation of a curve in a two-
dimensional plane using the function :obj:`splrep`. The
first two arguments are the only ones required, and these provide the
:math:`x` and :math:`y` components of the curve. The normal output is
a 3-tuple, :math:`\left(t,c,k\right)` , containing the knot-points,
:math:`t` , the coefficients :math:`c` and the order :math:`k` of the
spline. The default spline order is cubic, but this can be changed
with the input keyword, *k.*
For curves in :math:`N` -dimensional space the function
:obj:`splprep` allows defining the curve
parametrically. For this function only 1 input argument is
required. This input is a list of :math:`N` -arrays representing the
curve in :math:`N` -dimensional space. The length of each array is the
number of curve points, and each array provides one component of the
:math:`N` -dimensional data point. The parameter variable is given
with the keyword argument, *u,* which defaults to an equally-spaced
monotonic sequence between :math:`0` and :math:`1`. The default
output consists of two objects: a 3-tuple, :math:`\left(t,c,k\right)`
, containing the spline representation and the parameter variable
:math:`u.`
The keyword argument, *s* , is used to specify the amount of smoothing
to perform during the spline fit. The default value of :math:`s` is
:math:`s=m-\sqrt{2m}` where :math:`m` is the number of data-points
being fit. Therefore, **if no smoothing is desired a value of**
:math:`\mathbf{s}=0` **should be passed to the routines.**
Once the spline representation of the data has been determined,
functions are available for evaluating the spline
(:func:`splev`) and its derivatives
(:func:`splev`, :func:`spalde`) at any point
and the integral of the spline between any two points (
:func:`splint`). In addition, for cubic splines ( :math:`k=3`
) with 8 or more knots, the roots of the spline can be estimated (
:func:`sproot`). These functions are demonstrated in the
example that follows.
.. plot::
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy import interpolate
Cubic-spline
>>> x = np.arange(0,2*np.pi+np.pi/4,2*np.pi/8)
>>> y = np.sin(x)
>>> tck = interpolate.splrep(x,y,s=0)
>>> xnew = np.arange(0,2*np.pi,np.pi/50)
>>> ynew = interpolate.splev(xnew,tck,der=0)
>>> plt.figure()
>>> plt.plot(x,y,'x',xnew,ynew,xnew,np.sin(xnew),x,y,'b')
>>> plt.legend(['Linear','Cubic Spline', 'True'])
>>> plt.axis([-0.05,6.33,-1.05,1.05])
>>> plt.title('Cubic-spline interpolation')
>>> plt.show()
Derivative of spline
>>> yder = interpolate.splev(xnew,tck,der=1)
>>> plt.figure()
>>> plt.plot(xnew,yder,xnew,np.cos(xnew),'--')
>>> plt.legend(['Cubic Spline', 'True'])
>>> plt.axis([-0.05,6.33,-1.05,1.05])
>>> plt.title('Derivative estimation from spline')
>>> plt.show()
Integral of spline
>>> def integ(x, tck, constant=-1):
...     x = np.atleast_1d(x)
...     out = np.zeros(x.shape, dtype=x.dtype)
...     for n in xrange(len(out)):
...         out[n] = interpolate.splint(0, x[n], tck)
...     out += constant
...     return out
>>> yint = integ(xnew,tck)
>>> plt.figure()
>>> plt.plot(xnew,yint,xnew,-np.cos(xnew),'--')
>>> plt.legend(['Cubic Spline', 'True'])
>>> plt.axis([-0.05,6.33,-1.05,1.05])
>>> plt.title('Integral estimation from spline')
>>> plt.show()
Roots of spline
>>> print interpolate.sproot(tck)
[ 0. 3.1416]
Parametric spline
>>> t = np.arange(0,1.1,.1)
>>> x = np.sin(2*np.pi*t)
>>> y = np.cos(2*np.pi*t)
>>> tck,u = interpolate.splprep([x,y],s=0)
>>> unew = np.arange(0,1.01,0.01)
>>> out = interpolate.splev(unew,tck)
>>> plt.figure()
>>> plt.plot(x,y,'x',out[0],out[1],np.sin(2*np.pi*unew),np.cos(2*np.pi*unew),x,y,'b')
>>> plt.legend(['Linear','Cubic Spline', 'True'])
>>> plt.axis([-1.05,1.05,-1.05,1.05])
>>> plt.title('Spline of parametrically-defined curve')
>>> plt.show()
Spline interpolation in 1-d: Object-oriented (:class:`UnivariateSpline`)
-----------------------------------------------------------------------------
The spline-fitting capabilities described above are also available via
an object-oriented interface. The one-dimensional splines are
objects of the `UnivariateSpline` class, and are created with the
:math:`x` and :math:`y` components of the curve provided as arguments
to the constructor. The class defines __call__, allowing the object
to be called with the x-axis values at which the spline should be
evaluated, returning the interpolated y-values. This is shown in
the example below for the subclass `InterpolatedUnivariateSpline`.
The methods :meth:`integral <UnivariateSpline.integral>`,
:meth:`derivatives <UnivariateSpline.derivatives>`, and
:meth:`roots <UnivariateSpline.roots>` are also available
on `UnivariateSpline` objects, allowing definite integrals,
derivatives, and roots to be computed for the spline.
The UnivariateSpline class can also be used to smooth data by
providing a non-zero value of the smoothing parameter `s`, with the
same meaning as the `s` keyword of the :obj:`splrep` function
described above. This results in a spline that has fewer knots
than the number of data points, and hence is no longer strictly
an interpolating spline, but rather a smoothing spline. If this
is not desired, the `InterpolatedUnivariateSpline` class is available.
It is a subclass of `UnivariateSpline` that always passes through all
points (equivalent to forcing the smoothing parameter to 0). This
class is demonstrated in the example below.
The `LSQUnivariateSpline` is the other subclass of `UnivariateSpline`.
It allows the user to specify the number and location of internal
knots explicitly with the parameter `t`. This allows creation
of customized splines with non-linear spacing, to interpolate in
some domains and smooth in others, or change the character of the
spline.
.. plot::
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy import interpolate
InterpolatedUnivariateSpline
>>> x = np.arange(0,2*np.pi+np.pi/4,2*np.pi/8)
>>> y = np.sin(x)
>>> s = interpolate.InterpolatedUnivariateSpline(x,y)
>>> xnew = np.arange(0,2*np.pi,np.pi/50)
>>> ynew = s(xnew)
>>> plt.figure()
>>> plt.plot(x,y,'x',xnew,ynew,xnew,np.sin(xnew),x,y,'b')
>>> plt.legend(['Linear','InterpolatedUnivariateSpline', 'True'])
>>> plt.axis([-0.05,6.33,-1.05,1.05])
>>> plt.title('InterpolatedUnivariateSpline')
>>> plt.show()
LSQUnivariateSpline with non-uniform knots
>>> t = [np.pi/2-.1,np.pi/2-.1,3*np.pi/2-.1,3*np.pi/2+.1]
>>> s = interpolate.LSQUnivariateSpline(x,y,t)
>>> ynew = s(xnew)
>>> plt.figure()
>>> plt.plot(x,y,'x',xnew,ynew,xnew,np.sin(xnew),x,y,'b')
>>> plt.legend(['Linear','LSQUnivariateSpline', 'True'])
>>> plt.axis([-0.05,6.33,-1.05,1.05])
>>> plt.title('Spline with Specified Interior Knots')
>>> plt.show()
Two-dimensional spline representation: Procedural (:func:`bisplrep`)
--------------------------------------------------------------------
For (smooth) spline-fitting to a two dimensional surface, the function
:func:`bisplrep` is available. This function takes as required inputs
the **1-D** arrays *x*, *y*, and *z* which represent points on the
surface :math:`z=f\left(x,y\right).` The default output is a list
:math:`\left[tx,ty,c,kx,ky\right]` whose entries represent
respectively, the components of the knot positions, the coefficients
of the spline, and the order of the spline in each coordinate. It is
convenient to hold this list in a single object, *tck,* so that it can
be passed easily to the function :obj:`bisplev`. The
keyword, *s* , can be used to change the amount of smoothing performed
on the data while determining the appropriate spline. The default
value is :math:`s=m-\sqrt{2m}` where :math:`m` is the number of data
points in the *x, y,* and *z* vectors. As a result, if no smoothing is
desired, then :math:`s=0` should be passed to
:obj:`bisplrep` .
To evaluate the two-dimensional spline and its partial derivatives
(up to the order of the spline), the function
:obj:`bisplev` is required. This function takes as the
first two arguments **two 1-D arrays** whose cross-product specifies
the domain over which to evaluate the spline. The third argument is
the *tck* list returned from :obj:`bisplrep`. If desired,
the fourth and fifth arguments provide the orders of the partial
derivative in the :math:`x` and :math:`y` direction respectively.
It is important to note that two dimensional interpolation should not
be used to find the spline representation of images. The algorithm
used is not amenable to large numbers of input points. The signal
processing toolbox contains more appropriate algorithms for finding
the spline representation of an image. The two dimensional
interpolation commands are intended for use when interpolating a two
dimensional function as shown in the example that follows. This
example uses the :obj:`mgrid <numpy.mgrid>` command in SciPy which is
useful for defining a "mesh-grid" in many dimensions. (See also the
:obj:`ogrid <numpy.ogrid>` command if the full-mesh is not
needed). The number of output arguments and the number of dimensions
of each argument is determined by the number of indexing objects
passed in :obj:`mgrid <numpy.mgrid>`.
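For instance, passing two indexing objects yields two 2-d arrays (stacked into one output here):
>>> import numpy as np
>>> np.mgrid[0:2, 0:3]
array([[[0, 0, 0],
        [1, 1, 1]],

       [[0, 1, 2],
        [0, 1, 2]]])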
.. plot::
>>> import numpy as np
>>> from scipy import interpolate
>>> import matplotlib.pyplot as plt
Define function over sparse 20x20 grid
>>> x,y = np.mgrid[-1:1:20j,-1:1:20j]
>>> z = (x+y)*np.exp(-6.0*(x*x+y*y))
>>> plt.figure()
>>> plt.pcolor(x,y,z)
>>> plt.colorbar()
>>> plt.title("Sparsely sampled function.")
>>> plt.show()
Interpolate function over new 70x70 grid
>>> xnew,ynew = np.mgrid[-1:1:70j,-1:1:70j]
>>> tck = interpolate.bisplrep(x,y,z,s=0)
>>> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck)
>>> plt.figure()
>>> plt.pcolor(xnew,ynew,znew)
>>> plt.colorbar()
>>> plt.title("Interpolated function.")
>>> plt.show()
.. :caption: Example of two-dimensional spline interpolation.
Two-dimensional spline representation: Object-oriented (:class:`BivariateSpline`)
---------------------------------------------------------------------------------
The :class:`BivariateSpline` class is the 2-dimensional analog of the
:class:`UnivariateSpline` class. It and its subclasses implement
the FITPACK functions described above in an object oriented fashion,
allowing objects to be instantiated that can be called to compute
the spline value by passing in the two coordinates as the two
arguments.
Using radial basis functions for smoothing/interpolation
---------------------------------------------------------
Radial basis functions can be used for smoothing/interpolating scattered
data in n-dimensions, but should be used with caution for extrapolation
outside of the observed data range.
1-d Example
^^^^^^^^^^^
This example compares the usage of the Rbf and UnivariateSpline classes
from the scipy.interpolate module.
.. plot::
>>> import numpy as np
>>> from scipy.interpolate import Rbf, InterpolatedUnivariateSpline
>>> import matplotlib.pyplot as plt
>>> # setup data
>>> x = np.linspace(0, 10, 9)
>>> y = np.sin(x)
>>> xi = np.linspace(0, 10, 101)
>>> # use fitpack2 method
>>> ius = InterpolatedUnivariateSpline(x, y)
>>> yi = ius(xi)
>>> plt.subplot(2, 1, 1)
>>> plt.plot(x, y, 'bo')
>>> plt.plot(xi, yi, 'g')
>>> plt.plot(xi, np.sin(xi), 'r')
>>> plt.title('Interpolation using univariate spline')
>>> # use RBF method
>>> rbf = Rbf(x, y)
>>> fi = rbf(xi)
>>> plt.subplot(2, 1, 2)
>>> plt.plot(x, y, 'bo')
>>> plt.plot(xi, fi, 'g')
>>> plt.plot(xi, np.sin(xi), 'r')
>>> plt.title('Interpolation using RBF - multiquadrics')
>>> plt.show()
.. :caption: Example of one-dimensional RBF interpolation.
2-d Example
^^^^^^^^^^^
This example shows how to interpolate scattered 2d data.
.. plot::
>>> import numpy as np
>>> from scipy.interpolate import Rbf
>>> import matplotlib.pyplot as plt
>>> from matplotlib import cm
>>> # 2-d tests - setup scattered data
>>> x = np.random.rand(100)*4.0-2.0
>>> y = np.random.rand(100)*4.0-2.0
>>> z = x*np.exp(-x**2-y**2)
>>> ti = np.linspace(-2.0, 2.0, 100)
>>> XI, YI = np.meshgrid(ti, ti)
>>> # use RBF
>>> rbf = Rbf(x, y, z, epsilon=2)
>>> ZI = rbf(XI, YI)
>>> # plot the result
>>> n = plt.normalize(-2., 2.)
>>> plt.subplot(1, 1, 1)
>>> plt.pcolor(XI, YI, ZI, cmap=cm.jet)
>>> plt.scatter(x, y, 100, z, cmap=cm.jet)
>>> plt.title('RBF interpolation - multiquadrics')
>>> plt.xlim(-2, 2)
>>> plt.ylim(-2, 2)
>>> plt.colorbar()

@@ -0,0 +1,376 @@
File IO (:mod:`scipy.io`)
=========================
.. sectionauthor:: Matthew Brett
.. currentmodule:: scipy.io
.. seealso:: :ref:`numpy-reference.routines.io` (in numpy)
Matlab files
------------
.. autosummary::
:toctree: generated/
loadmat
savemat
Getting started:
>>> import scipy.io as sio
If you are using IPython, try tab completing on ``sio``. You'll find::
sio.loadmat
sio.savemat
These are the high-level functions you will most likely use. You'll also find::
sio.matlab
This is the package from which ``loadmat`` and ``savemat`` are imported.
Within ``sio.matlab``, you will find the ``mio`` module - containing
the machinery that ``loadmat`` and ``savemat`` use. From time to time
you may find yourself re-using this machinery.
How do I start?
```````````````
You may have a ``.mat`` file that you want to read into Scipy. Or, you
want to pass some variables from Scipy / Numpy into Matlab.
To save us using a Matlab license, let's start in Octave_. Octave has
Matlab-compatible save / load functions. Start Octave (``octave`` at
the command line for me):
.. sourcecode:: octave
octave:1> a = 1:12
a =
1 2 3 4 5 6 7 8 9 10 11 12
octave:2> a = reshape(a, [1 3 4])
a =
ans(:,:,1) =
1 2 3
ans(:,:,2) =
4 5 6
ans(:,:,3) =
7 8 9
ans(:,:,4) =
10 11 12
octave:3> save -6 octave_a.mat a % Matlab 6 compatible
octave:4> ls octave_a.mat
octave_a.mat
Now, to Python:
>>> mat_contents = sio.loadmat('octave_a.mat')
>>> print mat_contents
{'a': array([[[ 1., 4., 7., 10.],
[ 2., 5., 8., 11.],
[ 3., 6., 9., 12.]]]), '__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, written by Octave 3.2.3, 2010-05-30 02:13:40 UTC', '__globals__': []}
>>> oct_a = mat_contents['a']
>>> print oct_a
[[[ 1. 4. 7. 10.]
[ 2. 5. 8. 11.]
[ 3. 6. 9. 12.]]]
>>> print oct_a.shape
(1, 3, 4)
Now let's try the other way round:
>>> import numpy as np
>>> vect = np.arange(10)
>>> print vect.shape
(10,)
>>> sio.savemat('np_vector.mat', {'vect':vect})
/Users/mb312/usr/local/lib/python2.6/site-packages/scipy/io/matlab/mio.py:196: FutureWarning: Using oned_as default value ('column') This will change to 'row' in future versions
oned_as=oned_as)
Then back to Octave:
.. sourcecode:: octave
octave:5> load np_vector.mat
octave:6> vect
vect =
0
1
2
3
4
5
6
7
8
9
octave:7> size(vect)
ans =
10 1
Note the deprecation warning. The ``oned_as`` keyword determines the way in
which one-dimensional vectors are stored. In the future, this will default
to ``row`` instead of ``column``:
>>> sio.savemat('np_vector.mat', {'vect':vect}, oned_as='row')
We can load this in Octave or Matlab:
.. sourcecode:: octave
octave:8> load np_vector.mat
octave:9> vect
vect =
0 1 2 3 4 5 6 7 8 9
octave:10> size(vect)
ans =
1 10
Matlab structs
``````````````
Matlab structs are a little bit like Python dicts, except the field
names must be strings. Any Matlab object can be a value of a field. As
for all objects in Matlab, structs are in fact arrays of structs, where
a single struct is an array of shape (1, 1).
.. sourcecode:: octave
octave:11> my_struct = struct('field1', 1, 'field2', 2)
my_struct =
{
field1 = 1
field2 = 2
}
octave:12> save -6 octave_struct.mat my_struct
We can load this in Python:
>>> mat_contents = sio.loadmat('octave_struct.mat')
>>> print mat_contents
{'my_struct': array([[([[1.0]], [[2.0]])]],
dtype=[('field1', '|O8'), ('field2', '|O8')]), '__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, written by Octave 3.2.3, 2010-05-30 02:00:26 UTC', '__globals__': []}
>>> oct_struct = mat_contents['my_struct']
>>> print oct_struct.shape
(1, 1)
>>> val = oct_struct[0,0]
>>> print val
([[1.0]], [[2.0]])
>>> print val['field1']
[[ 1.]]
>>> print val['field2']
[[ 2.]]
>>> print val.dtype
[('field1', '|O8'), ('field2', '|O8')]
In this version of Scipy (0.8.0), Matlab structs come back as numpy
structured arrays, with fields named for the struct fields. You can see
the field names in the ``dtype`` output above. Note also:
>>> val = oct_struct[0,0]
and:
.. sourcecode:: octave
octave:13> size(my_struct)
ans =
1 1
So, in Matlab, the struct array must be at least 2D, and we replicate
that when we read into Scipy. If you want all length 1 dimensions
squeezed out, try this:
>>> mat_contents = sio.loadmat('octave_struct.mat', squeeze_me=True)
>>> oct_struct = mat_contents['my_struct']
>>> oct_struct.shape
()
Sometimes it's more convenient to load the Matlab structs as Python
objects rather than numpy structured arrays - it can make the access
syntax in Python a bit more similar to that in Matlab. In order to do
this, use the ``struct_as_record=False`` parameter to ``loadmat``.
>>> mat_contents = sio.loadmat('octave_struct.mat', struct_as_record=False)
>>> oct_struct = mat_contents['my_struct']
>>> oct_struct[0,0].field1
array([[ 1.]])
``struct_as_record=False`` works nicely with ``squeeze_me``:
>>> mat_contents = sio.loadmat('octave_struct.mat', struct_as_record=False, squeeze_me=True)
>>> oct_struct = mat_contents['my_struct']
>>> oct_struct.shape # but no - it's a scalar
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'mat_struct' object has no attribute 'shape'
>>> print type(oct_struct)
<class 'scipy.io.matlab.mio5_params.mat_struct'>
>>> print oct_struct.field1
1.0
Saving struct arrays can be done in various ways. One simple method is
to use dicts:
>>> a_dict = {'field1': 0.5, 'field2': 'a string'}
>>> sio.savemat('saved_struct.mat', {'a_dict': a_dict})
loaded as:
.. sourcecode:: octave
octave:21> load saved_struct
octave:22> a_dict
a_dict =
{
field2 = a string
field1 = 0.50000
}
You can also save structs back again to Matlab (or Octave in our case)
like this:
>>> dt = [('f1', 'f8'), ('f2', 'S10')]
>>> arr = np.zeros((2,), dtype=dt)
>>> print arr
[(0.0, '') (0.0, '')]
>>> arr[0]['f1'] = 0.5
>>> arr[0]['f2'] = 'python'
>>> arr[1]['f1'] = 99
>>> arr[1]['f2'] = 'not perl'
>>> sio.savemat('np_struct_arr.mat', {'arr': arr})
Matlab cell arrays
``````````````````
Cell arrays in Matlab are rather like python lists, in the sense that
the elements in the arrays can contain any type of Matlab object. In
fact they are most similar to numpy object arrays, and that is how we
load them into numpy.
.. sourcecode:: octave
octave:14> my_cells = {1, [2, 3]}
my_cells =
{
[1,1] = 1
[1,2] =
2 3
}
octave:15> save -6 octave_cells.mat my_cells
Back to Python:
>>> mat_contents = sio.loadmat('octave_cells.mat')
>>> oct_cells = mat_contents['my_cells']
>>> print oct_cells.dtype
object
>>> val = oct_cells[0,0]
>>> print val
[[ 1.]]
>>> print val.dtype
float64
Saving to a Matlab cell array just involves making a numpy object array:
>>> obj_arr = np.zeros((2,), dtype=np.object)
>>> obj_arr[0] = 1
>>> obj_arr[1] = 'a string'
>>> print obj_arr
[1 a string]
>>> sio.savemat('np_cells.mat', {'obj_arr':obj_arr})
.. sourcecode:: octave
octave:16> load np_cells.mat
octave:17> obj_arr
obj_arr =
{
[1,1] = 1
[2,1] = a string
}
Matrix Market files
-------------------
.. autosummary::
:toctree: generated/
mminfo
mmread
mmwrite
Other
-----
.. autosummary::
:toctree: generated/
save_as_module
Wav sound files (:mod:`scipy.io.wavfile`)
-----------------------------------------
.. module:: scipy.io.wavfile
.. autosummary::
:toctree: generated/
read
write
Arff files (:mod:`scipy.io.arff`)
---------------------------------
.. automodule:: scipy.io.arff
.. autosummary::
:toctree: generated/
loadarff
Netcdf (:mod:`scipy.io.netcdf`)
-------------------------------
.. module:: scipy.io.netcdf
.. autosummary::
:toctree: generated/
netcdf_file
Allows reading of NetCDF files (a version of the pupynere_ package)
.. _pupynere: http://pypi.python.org/pypi/pupynere/
.. _octave: http://www.gnu.org/software/octave
.. _matlab: http://www.mathworks.com/

@@ -0,0 +1,825 @@
Linear Algebra
==============
.. sectionauthor:: Travis E. Oliphant
.. currentmodule:: scipy
When SciPy is built using the optimized ATLAS LAPACK and BLAS
libraries, it has very fast linear algebra capabilities. If you dig
deep enough, all of the raw LAPACK and BLAS routines are available
for your use for even more speed. In this section, some easier-to-use
interfaces to these routines are described.
All of these linear algebra routines expect an object that can be
converted into a 2-dimensional array. The output of these routines is
also a two-dimensional array. There is a matrix class defined in
Numpy, which you can initialize with an appropriate Numpy array in
order to get objects for which multiplication is matrix-multiplication
instead of the default, element-by-element multiplication.
Matrix Class
------------
The matrix class is initialized with the SciPy command :obj:`mat`
which is just convenient short-hand for :class:`matrix
<numpy.matrix>`. If you are going to be doing a lot of matrix math, it
is convenient to convert arrays into matrices using this command. One
advantage of using the :func:`mat` command is that you can enter
two-dimensional matrices using MATLAB-like syntax with commas or
spaces separating columns and semicolons separating rows, as long as the
matrix is placed in a string passed to :obj:`mat`.
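For example (a small sketch of the string syntax just described):
>>> from numpy import mat
>>> A = mat('[1 2; 3 4]')
>>> A * A                     # matrix multiplication, not element-by-element
matrix([[ 7, 10],
        [15, 22]])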
Basic routines
--------------
Finding Inverse
^^^^^^^^^^^^^^^
The inverse of a matrix :math:`\mathbf{A}` is the matrix
:math:`\mathbf{B}` such that :math:`\mathbf{AB}=\mathbf{I}` where
:math:`\mathbf{I}` is the identity matrix consisting of ones down the
main diagonal. Usually :math:`\mathbf{B}` is denoted
:math:`\mathbf{B}=\mathbf{A}^{-1}` . In SciPy, the matrix inverse of
the Numpy array, A, is obtained using :obj:`linalg.inv` ``(A)`` , or
using ``A.I`` if ``A`` is a Matrix. For example, let
.. math::
:nowrap:
\[ \mathbf{A=}\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]\]
then
.. math::
:nowrap:
\[ \mathbf{A^{-1}=\frac{1}{25}\left[\begin{array}{ccc} -37 & 9 & 22\\ 14 & 2 & -9\\ 4 & -3 & 1\end{array}\right]=\left[\begin{array}{ccc} -1.48 & 0.36 & 0.88\\ 0.56 & 0.08 & -0.36\\ 0.16 & -0.12 & 0.04\end{array}\right].}\]
The following example demonstrates this computation in SciPy
>>> A = mat('[1 3 5; 2 5 1; 2 3 8]')
>>> A
matrix([[1, 3, 5],
[2, 5, 1],
[2, 3, 8]])
>>> A.I
matrix([[-1.48, 0.36, 0.88],
[ 0.56, 0.08, -0.36],
[ 0.16, -0.12, 0.04]])
>>> from scipy import linalg
>>> linalg.inv(A)
array([[-1.48, 0.36, 0.88],
[ 0.56, 0.08, -0.36],
[ 0.16, -0.12, 0.04]])
Solving linear system
^^^^^^^^^^^^^^^^^^^^^
Solving linear systems of equations is straightforward using the scipy
command :obj:`linalg.solve`. This command expects an input matrix and
a right-hand-side vector. The solution vector is then computed. An
option for entering a symmetric matrix is offered, which can speed up
the processing when applicable. As an example, suppose it is desired
to solve the following simultaneous equations:
.. math::
:nowrap:
\begin{eqnarray*} x+3y+5z & = & 10\\ 2x+5y+z & = & 8\\ 2x+3y+8z & = & 3\end{eqnarray*}
We could find the solution vector using a matrix inverse:
.. math::
:nowrap:
\[ \left[\begin{array}{c} x\\ y\\ z\end{array}\right]=\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]^{-1}\left[\begin{array}{c} 10\\ 8\\ 3\end{array}\right]=\frac{1}{25}\left[\begin{array}{c} -232\\ 129\\ 19\end{array}\right]=\left[\begin{array}{c} -9.28\\ 5.16\\ 0.76\end{array}\right].\]
However, it is better to use the linalg.solve command, which can be
faster and more numerically stable. In this case, however, it gives the
same answer, as shown in the following example:
>>> A = mat('[1 3 5; 2 5 1; 2 3 8]')
>>> b = mat('[10;8;3]')
>>> A.I*b
matrix([[-9.28],
[ 5.16],
[ 0.76]])
>>> linalg.solve(A,b)
array([[-9.28],
[ 5.16],
[ 0.76]])
Finding Determinant
^^^^^^^^^^^^^^^^^^^
The determinant of a square matrix :math:`\mathbf{A}` is often denoted
:math:`\left|\mathbf{A}\right|` and is a quantity often used in linear
algebra. Suppose :math:`a_{ij}` are the elements of the matrix
:math:`\mathbf{A}` and let :math:`M_{ij}=\left|\mathbf{A}_{ij}\right|`
be the determinant of the matrix that remains after removing the
:math:`i^{\textrm{th}}` row and :math:`j^{\textrm{th}}` column from
:math:`\mathbf{A}` . Then for any row :math:`i,`
.. math::
:nowrap:
\[ \left|\mathbf{A}\right|=\sum_{j}\left(-1\right)^{i+j}a_{ij}M_{ij}.\]
This is a recursive way to define the determinant, where the base case
is that the determinant of a :math:`1\times1` matrix is its single element. In SciPy the determinant can be
calculated with :obj:`linalg.det` . For example, the determinant of
.. math::
:nowrap:
\[ \mathbf{A=}\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]\]
is
.. math::
:nowrap:
\begin{eqnarray*} \left|\mathbf{A}\right| & = & 1\left|\begin{array}{cc} 5 & 1\\ 3 & 8\end{array}\right|-3\left|\begin{array}{cc} 2 & 1\\ 2 & 8\end{array}\right|+5\left|\begin{array}{cc} 2 & 5\\ 2 & 3\end{array}\right|\\ & = & 1\left(5\cdot8-3\cdot1\right)-3\left(2\cdot8-2\cdot1\right)+5\left(2\cdot3-2\cdot5\right)=-25.\end{eqnarray*}
In SciPy this is computed as shown in this example:
>>> A = mat('[1 3 5; 2 5 1; 2 3 8]')
>>> linalg.det(A)
-25.000000000000004
Computing norms
^^^^^^^^^^^^^^^
Matrix and vector norms can also be computed with SciPy. A wide range
of norm definitions are available using different parameters to the
order argument of :obj:`linalg.norm` . This function takes a rank-1
(vectors) or a rank-2 (matrices) array and an optional order argument
(default is 2). Based on these inputs a vector or matrix norm of the
requested order is computed.
For vector *x* , the order parameter can be any real number including
``inf`` or ``-inf``. The computed norm is
.. math::
:nowrap:
\[ \left\Vert \mathbf{x}\right\Vert =\left\{ \begin{array}{cc} \max\left|x_{i}\right| & \textrm{ord}=\textrm{inf}\\ \min\left|x_{i}\right| & \textrm{ord}=-\textrm{inf}\\ \left(\sum_{i}\left|x_{i}\right|^{\textrm{ord}}\right)^{1/\textrm{ord}} & \left|\textrm{ord}\right|<\infty.\end{array}\right.\]
For matrix :math:`\mathbf{A}` the only valid values for the order argument are :math:`\pm2,\pm1,` :math:`\pm` inf, and 'fro' (or 'f'). Thus,
.. math::
:nowrap:
\[ \left\Vert \mathbf{A}\right\Vert =\left\{ \begin{array}{cc} \max_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=\textrm{inf}\\ \min_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=-\textrm{inf}\\ \max_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=1\\ \min_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=-1\\ \max\sigma_{i} & \textrm{ord}=2\\ \min\sigma_{i} & \textrm{ord}=-2\\ \sqrt{\textrm{trace}\left(\mathbf{A}^{H}\mathbf{A}\right)} & \textrm{ord}=\textrm{'fro'}\end{array}\right.\]
where :math:`\sigma_{i}` are the singular values of :math:`\mathbf{A}` .
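For example, a short illustration (the vector and matrix below are chosen only for demonstration):

>>> import numpy as np
>>> from scipy import linalg
>>> x = np.array([1., 2., 3.])
>>> linalg.norm(x)              # default ord=2: sqrt(1 + 4 + 9)
3.7416573867739413
>>> linalg.norm(x, 1)           # sum of absolute values
6.0
>>> linalg.norm(x, np.inf)      # largest absolute value
3.0
>>> A = np.array([[1., 2.], [3., 4.]])
>>> linalg.norm(A, 1)           # maximum column sum
6.0
>>> linalg.norm(A, np.inf)      # maximum row sum
7.0
>>> linalg.norm(A, 'fro')       # Frobenius norm: sqrt(1+4+9+16) = sqrt(30)
5.477225575051661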
Solving linear least-squares problems and pseudo-inverses
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Linear least-squares problems occur in many branches of applied
mathematics. In this problem a set of linear scaling coefficients is
sought that allow a model to fit data. In particular it is assumed
that data :math:`y_{i}` is related to data :math:`\mathbf{x}_{i}`
through a set of coefficients :math:`c_{j}` and model functions
:math:`f_{j}\left(\mathbf{x}_{i}\right)` via the model
.. math::
:nowrap:
\[ y_{i}=\sum_{j}c_{j}f_{j}\left(\mathbf{x}_{i}\right)+\epsilon_{i}\]
where :math:`\epsilon_{i}` represents uncertainty in the data. The
strategy of least squares is to pick the coefficients :math:`c_{j}` to
minimize
.. math::
:nowrap:
\[ J\left(\mathbf{c}\right)=\sum_{i}\left|y_{i}-\sum_{j}c_{j}f_{j}\left(x_{i}\right)\right|^{2}.\]
Theoretically, a global minimum will occur when
.. math::
:nowrap:
\[ \frac{\partial J}{\partial c_{n}^{*}}=0=\sum_{i}\left(y_{i}-\sum_{j}c_{j}f_{j}\left(x_{i}\right)\right)\left(-f_{n}^{*}\left(x_{i}\right)\right)\]
or
.. math::
:nowrap:
\begin{eqnarray*} \sum_{j}c_{j}\sum_{i}f_{j}\left(x_{i}\right)f_{n}^{*}\left(x_{i}\right) & = & \sum_{i}y_{i}f_{n}^{*}\left(x_{i}\right)\\ \mathbf{A}^{H}\mathbf{Ac} & = & \mathbf{A}^{H}\mathbf{y}\end{eqnarray*}
where
.. math::
:nowrap:
\[ \left\{ \mathbf{A}\right\} _{ij}=f_{j}\left(x_{i}\right).\]
When :math:`\mathbf{A^{H}A}` is invertible, then
.. math::
:nowrap:
\[ \mathbf{c}=\left(\mathbf{A}^{H}\mathbf{A}\right)^{-1}\mathbf{A}^{H}\mathbf{y}=\mathbf{A}^{\dagger}\mathbf{y}\]
where :math:`\mathbf{A}^{\dagger}` is called the pseudo-inverse of
:math:`\mathbf{A}.` Notice that using this definition of
:math:`\mathbf{A}` the model can be written
.. math::
:nowrap:
\[ \mathbf{y}=\mathbf{Ac}+\boldsymbol{\epsilon}.\]
The command :obj:`linalg.lstsq` will solve the linear least squares
problem for :math:`\mathbf{c}` given :math:`\mathbf{A}` and
:math:`\mathbf{y}` . In addition :obj:`linalg.pinv` or
:obj:`linalg.pinv2` (uses a different method based on singular value
decomposition) will find :math:`\mathbf{A}^{\dagger}` given
:math:`\mathbf{A}.`
The following example and figure demonstrate the use of
:obj:`linalg.lstsq` and :obj:`linalg.pinv` for solving a data-fitting
problem. The data shown below were generated using the model:
.. math::
:nowrap:
\[ y_{i}=c_{1}e^{-x_{i}}+c_{2}x_{i}\]
where :math:`x_{i}=0.1i` for :math:`i=1\ldots10` , :math:`c_{1}=5` ,
and :math:`c_{2}=2` (matching the code below). Noise is added to :math:`y_{i}` and the
coefficients :math:`c_{1}` and :math:`c_{2}` are estimated using
linear least squares.
.. plot::
>>> from numpy import *
>>> from scipy import linalg
>>> import matplotlib.pyplot as plt
>>> c1,c2= 5.0,2.0
>>> i = r_[1:11]
>>> xi = 0.1*i
>>> yi = c1*exp(-xi)+c2*xi
>>> zi = yi + 0.05*max(yi)*random.randn(len(yi))
>>> A = c_[exp(-xi)[:,newaxis],xi[:,newaxis]]
>>> c,resid,rank,sigma = linalg.lstsq(A,zi)
>>> xi2 = r_[0.1:1.0:100j]
>>> yi2 = c[0]*exp(-xi2) + c[1]*xi2
>>> plt.plot(xi,zi,'x',xi2,yi2)
>>> plt.axis([0,1.1,3.0,5.5])
>>> plt.xlabel('$x_i$')
>>> plt.title('Data fitting with linalg.lstsq')
>>> plt.show()
.. :caption: Example of linear least-squares fit
Generalized inverse
^^^^^^^^^^^^^^^^^^^
The generalized inverse is calculated using the command
:obj:`linalg.pinv` or :obj:`linalg.pinv2`. These two commands differ
in how they compute the generalized inverse. The first uses the
linalg.lstsq algorithm while the second uses singular value
decomposition. Let :math:`\mathbf{A}` be an :math:`M\times N` matrix,
then if :math:`M>N` the generalized inverse is
.. math::
:nowrap:
\[ \mathbf{A}^{\dagger}=\left(\mathbf{A}^{H}\mathbf{A}\right)^{-1}\mathbf{A}^{H}\]
while if :math:`M<N` matrix the generalized inverse is
.. math::
:nowrap:
\[ \mathbf{A}^{\#}=\mathbf{A}^{H}\left(\mathbf{A}\mathbf{A}^{H}\right)^{-1}.\]
In both cases for :math:`M=N` , then
.. math::
:nowrap:
\[ \mathbf{A}^{\dagger}=\mathbf{A}^{\#}=\mathbf{A}^{-1}\]
as long as :math:`\mathbf{A}` is invertible.
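A brief numerical check of the defining property :math:`\mathbf{A}\mathbf{A}^{\dagger}\mathbf{A}=\mathbf{A}` (the matrix below is illustrative only):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 2.], [3., 4.], [5., 6.]])  # M > N
>>> Ap = linalg.pinv(A)
>>> Ap.shape
(2, 3)
>>> np.allclose(A.dot(Ap).dot(A), A)
True
>>> np.allclose(Ap, linalg.pinv2(A))   # the SVD-based routine agrees here
True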
Decompositions
--------------
In many applications it is useful to decompose a matrix using other
representations. There are several decompositions supported by SciPy.
Eigenvalues and eigenvectors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The eigenvalue-eigenvector problem is one of the most commonly
employed linear algebra operations. In one popular form, the
eigenvalue-eigenvector problem is to find for some square matrix
:math:`\mathbf{A}` scalars :math:`\lambda` and corresponding vectors
:math:`\mathbf{v}` such that
.. math::
:nowrap:
\[ \mathbf{Av}=\lambda\mathbf{v}.\]
For an :math:`N\times N` matrix, there are :math:`N` (not necessarily
distinct) eigenvalues --- roots of the (characteristic) polynomial
.. math::
:nowrap:
\[ \left|\mathbf{A}-\lambda\mathbf{I}\right|=0.\]
The eigenvectors, :math:`\mathbf{v}` , are also sometimes called right
eigenvectors to distinguish them from another set of left eigenvectors
that satisfy
.. math::
:nowrap:
\[ \mathbf{v}_{L}^{H}\mathbf{A}=\lambda\mathbf{v}_{L}^{H}\]
or
.. math::
:nowrap:
\[ \mathbf{A}^{H}\mathbf{v}_{L}=\lambda^{*}\mathbf{v}_{L}.\]
With its default optional arguments, the command :obj:`linalg.eig`
returns :math:`\lambda` and :math:`\mathbf{v}.` However, it can also
return the left eigenvectors :math:`\mathbf{v}_{L}` , and
:obj:`linalg.eigvals` returns just :math:`\lambda` .
In addition, :obj:`linalg.eig` can also solve the more general eigenvalue problem
.. math::
:nowrap:
\begin{eqnarray*} \mathbf{Av} & = & \lambda\mathbf{Bv}\\ \mathbf{A}^{H}\mathbf{v}_{L} & = & \lambda^{*}\mathbf{B}^{H}\mathbf{v}_{L}\end{eqnarray*}
for square matrices :math:`\mathbf{A}` and :math:`\mathbf{B}.` The
standard eigenvalue problem is an example of the general eigenvalue
problem for :math:`\mathbf{B}=\mathbf{I}.` When a generalized
eigenvalue problem can be solved, then it provides a decomposition of
:math:`\mathbf{A}` as
.. math::
:nowrap:
\[ \mathbf{A}=\mathbf{BV}\boldsymbol{\Lambda}\mathbf{V}^{-1}\]
where :math:`\mathbf{V}` is the collection of eigenvectors into
columns and :math:`\boldsymbol{\Lambda}` is a diagonal matrix of
eigenvalues.
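As a quick numerical sketch of the generalized problem (the matrices below are illustrative only):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 2.], [3., 4.]])
>>> B = np.array([[2., 0.], [0., 1.]])
>>> lam, V = linalg.eig(A, B)               # solves A v = lambda B v
>>> np.allclose(A.dot(V), B.dot(V) * lam)   # check each eigenpair
True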
By definition, eigenvectors are only defined up to a constant scale
factor. In SciPy, the scaling factor for the eigenvectors is chosen so
that :math:`\left\Vert \mathbf{v}\right\Vert
^{2}=\sum_{i}\left|v_{i}\right|^{2}=1.`
As an example, consider finding the eigenvalues and eigenvectors of
the matrix
.. math::
:nowrap:
\[ \mathbf{A}=\left[\begin{array}{ccc} 1 & 5 & 2\\ 2 & 4 & 1\\ 3 & 6 & 2\end{array}\right].\]
The characteristic polynomial is
.. math::
:nowrap:
\begin{eqnarray*} \left|\mathbf{A}-\lambda\mathbf{I}\right| & = & \left(1-\lambda\right)\left[\left(4-\lambda\right)\left(2-\lambda\right)-6\right]-\\ & & 5\left[2\left(2-\lambda\right)-3\right]+2\left[12-3\left(4-\lambda\right)\right]\\ & = & -\lambda^{3}+7\lambda^{2}+8\lambda-3.\end{eqnarray*}
The roots of this polynomial are the eigenvalues of :math:`\mathbf{A}` :
.. math::
:nowrap:
\begin{eqnarray*} \lambda_{1} & = & 7.9579\\ \lambda_{2} & = & -1.2577\\ \lambda_{3} & = & 0.2997.\end{eqnarray*}
The eigenvectors corresponding to each eigenvalue can then be found
using the original equation, as the following example shows.
>>> from scipy import linalg
>>> A = mat('[1 5 2; 2 4 1; 3 6 2]')
>>> la,v = linalg.eig(A)
>>> l1,l2,l3 = la
>>> print l1, l2, l3
(7.95791620491+0j) (-1.25766470568+0j) (0.299748500767+0j)
>>> print v[:,0]
[-0.5297175 -0.44941741 -0.71932146]
>>> print v[:,1]
[-0.90730751 0.28662547 0.30763439]
>>> print v[:,2]
[ 0.28380519 -0.39012063 0.87593408]
>>> print sum(abs(v**2),axis=0)
[ 1. 1. 1.]
>>> v1 = mat(v[:,0]).T
>>> print max(ravel(abs(A*v1-l1*v1)))
8.881784197e-16
Singular value decomposition
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Singular Value Decomposition (SVD) can be thought of as an extension of
the eigenvalue problem to matrices that are not square. Let
:math:`\mathbf{A}` be an :math:`M\times N` matrix with :math:`M` and
:math:`N` arbitrary. The matrices :math:`\mathbf{A}^{H}\mathbf{A}` and
:math:`\mathbf{A}\mathbf{A}^{H}` are square hermitian matrices [#]_ of
size :math:`N\times N` and :math:`M\times M` respectively. It is known
that the eigenvalues of square hermitian matrices are real and
non-negative. In addition, there are at most
:math:`\min\left(M,N\right)` identical non-zero eigenvalues of
:math:`\mathbf{A}^{H}\mathbf{A}` and :math:`\mathbf{A}\mathbf{A}^{H}.`
Define these positive eigenvalues as :math:`\sigma_{i}^{2}.` The
square roots of these are called the singular values of :math:`\mathbf{A}.`
The eigenvectors of :math:`\mathbf{A}^{H}\mathbf{A}` are collected by
columns into an :math:`N\times N` unitary [#]_ matrix
:math:`\mathbf{V}` , while the eigenvectors of
:math:`\mathbf{A}\mathbf{A}^{H}` are collected by columns in the
:math:`M\times M` unitary matrix :math:`\mathbf{U}` . The singular values are collected
in an :math:`M\times N` zero matrix
:math:`\mathbf{\boldsymbol{\Sigma}}` with main diagonal entries set to
the singular values. Then
.. math::
:nowrap:
\[ \mathbf{A=U}\boldsymbol{\Sigma}\mathbf{V}^{H}\]
is the singular-value decomposition of :math:`\mathbf{A}.` Every
matrix has a singular value decomposition. Sometimes, the singular
values are called the spectrum of :math:`\mathbf{A}.` The command
:obj:`linalg.svd` will return :math:`\mathbf{U}` ,
:math:`\mathbf{V}^{H}` , and :math:`\sigma_{i}` as an array of the
singular values. To obtain the matrix :math:`\mathbf{\Sigma}` use
:obj:`linalg.diagsvd`. The following example illustrates the use of
:obj:`linalg.svd` .
>>> A = mat('[1 3 2; 1 2 3]')
>>> M,N = A.shape
>>> U,s,Vh = linalg.svd(A)
>>> Sig = mat(linalg.diagsvd(s,M,N))
>>> U, Vh = mat(U), mat(Vh)
>>> print U
[[-0.70710678 -0.70710678]
[-0.70710678 0.70710678]]
>>> print Sig
[[ 5.19615242 0. 0. ]
[ 0. 1. 0. ]]
>>> print Vh
[[ -2.72165527e-01 -6.80413817e-01 -6.80413817e-01]
[ -6.18652536e-16 -7.07106781e-01 7.07106781e-01]
[ -9.62250449e-01 1.92450090e-01 1.92450090e-01]]
>>> print A
[[1 3 2]
[1 2 3]]
>>> print U*Sig*Vh
[[ 1. 3. 2.]
[ 1. 2. 3.]]
.. [#] A hermitian matrix :math:`\mathbf{D}` satisfies :math:`\mathbf{D}^{H}=\mathbf{D}.`
.. [#] A unitary matrix :math:`\mathbf{D}` satisfies :math:`\mathbf{D}^{H}\mathbf{D}=\mathbf{I}=\mathbf{D}\mathbf{D}^{H}` so that :math:`\mathbf{D}^{-1}=\mathbf{D}^{H}.`
LU decomposition
^^^^^^^^^^^^^^^^
The LU decomposition finds a representation for the :math:`M\times N` matrix :math:`\mathbf{A}` as
.. math::
:nowrap:
\[ \mathbf{A}=\mathbf{PLU}\]
where :math:`\mathbf{P}` is an :math:`M\times M` permutation matrix (a
permutation of the rows of the identity matrix), :math:`\mathbf{L}` is
an :math:`M\times K` lower triangular or trapezoidal matrix (
:math:`K=\min\left(M,N\right)` ) with unit-diagonal, and
:math:`\mathbf{U}` is an upper triangular or trapezoidal matrix. The
SciPy command for this decomposition is :obj:`linalg.lu` .
Such a decomposition is often useful for solving many simultaneous
equations where the left-hand-side does not change but the right hand
side does. For example, suppose we are going to solve
.. math::
:nowrap:
\[ \mathbf{A}\mathbf{x}_{i}=\mathbf{b}_{i}\]
for many different :math:`\mathbf{b}_{i}` . The LU decomposition allows this to be written as
.. math::
:nowrap:
\[ \mathbf{PLUx}_{i}=\mathbf{b}_{i}.\]
Because :math:`\mathbf{L}` is lower-triangular, the equation can be
solved for :math:`\mathbf{U}\mathbf{x}_{i}` and finally
:math:`\mathbf{x}_{i}` very rapidly using forward- and
back-substitution. An initial time spent factoring :math:`\mathbf{A}`
allows for very rapid solution of similar systems of equations in the
future. If the intent for performing LU decomposition is for solving
linear systems then the command :obj:`linalg.lu_factor` should be used
followed by repeated applications of the command
:obj:`linalg.lu_solve` to solve the system for each new
right-hand-side.
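A minimal sketch of that pattern, reusing the matrix from the earlier examples:

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 3., 5.], [2., 5., 1.], [2., 3., 8.]])
>>> lu, piv = linalg.lu_factor(A)                  # factor once
>>> x = linalg.lu_solve((lu, piv), [10., 8., 3.])  # reuse per right-hand side
>>> np.allclose(A.dot(x), [10., 8., 3.])
True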
Cholesky decomposition
^^^^^^^^^^^^^^^^^^^^^^
Cholesky decomposition is a special case of LU decomposition
applicable to Hermitian positive definite matrices. When
:math:`\mathbf{A}=\mathbf{A}^{H}` and
:math:`\mathbf{x}^{H}\mathbf{Ax}\geq0` for all :math:`\mathbf{x}` ,
then decompositions of :math:`\mathbf{A}` can be found so that
.. math::
:nowrap:
\begin{eqnarray*} \mathbf{A} & = & \mathbf{U}^{H}\mathbf{U}\\ \mathbf{A} & = & \mathbf{L}\mathbf{L}^{H}\end{eqnarray*}
where :math:`\mathbf{L}` is lower-triangular and :math:`\mathbf{U}` is
upper triangular. Notice that :math:`\mathbf{L}=\mathbf{U}^{H}.` The
command :obj:`linalg.cholesky` computes the Cholesky
factorization. For using Cholesky factorization to solve systems of
equations there are also :obj:`linalg.cho_factor` and
:obj:`linalg.cho_solve` routines that work similarly to their LU
decomposition counterparts.
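For example, a small sketch (the matrix below is chosen to be symmetric positive definite):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[4., 2.], [2., 3.]])
>>> U = linalg.cholesky(A)          # upper-triangular factor by default
>>> np.allclose(U.T.dot(U), A)      # A = U^H U
True
>>> c = linalg.cho_factor(A)
>>> x = linalg.cho_solve(c, [6., 5.])
>>> np.allclose(A.dot(x), [6., 5.])
True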
QR decomposition
^^^^^^^^^^^^^^^^
The QR decomposition works
for any :math:`M\times N` array and finds an :math:`M\times M` unitary
matrix :math:`\mathbf{Q}` and an :math:`M\times N` upper-trapezoidal
matrix :math:`\mathbf{R}` such that
.. math::
:nowrap:
\[ \mathbf{A=QR}.\]
Notice that if the SVD of :math:`\mathbf{A}` is known then the QR decomposition can be found
.. math::
:nowrap:
\[ \mathbf{A}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{H}=\mathbf{QR}\]
implies that :math:`\mathbf{Q}=\mathbf{U}` and
:math:`\mathbf{R}=\boldsymbol{\Sigma}\mathbf{V}^{H}.` Note, however,
that in SciPy independent algorithms are used to find QR and SVD
decompositions. The command for QR decomposition is :obj:`linalg.qr` .
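A short illustration (matrix chosen arbitrarily):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 3., 2.], [1., 2., 3.]])
>>> Q, R = linalg.qr(A)
>>> Q.shape, R.shape
((2, 2), (2, 3))
>>> np.allclose(Q.dot(R), A)
True
>>> np.allclose(Q.T.dot(Q), np.eye(2))   # Q is unitary (orthogonal here)
True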
Schur decomposition
^^^^^^^^^^^^^^^^^^^
For a square :math:`N\times N` matrix, :math:`\mathbf{A}` , the Schur
decomposition finds (not-necessarily unique) matrices
:math:`\mathbf{T}` and :math:`\mathbf{Z}` such that
.. math::
:nowrap:
\[ \mathbf{A}=\mathbf{ZT}\mathbf{Z}^{H}\]
where :math:`\mathbf{Z}` is a unitary matrix and :math:`\mathbf{T}` is
either upper triangular or quasi-upper triangular, depending on whether
a real Schur form or a complex Schur form is requested. For a
real Schur form both :math:`\mathbf{T}` and :math:`\mathbf{Z}` are
real-valued when :math:`\mathbf{A}` is real-valued. The real Schur form
of a real-valued matrix is only quasi-upper triangular because
:math:`2\times2` blocks appear on the main diagonal corresponding to
any complex-valued eigenvalues. The command :obj:`linalg.schur` finds the Schur
decomposition while the command :obj:`linalg.rsf2csf` converts
:math:`\mathbf{T}` and :math:`\mathbf{Z}` from a real Schur form to a
complex Schur form. The Schur form is especially useful in calculating
functions of matrices.
The following example illustrates the Schur decomposition:
>>> from scipy import linalg
>>> A = mat('[1 3 2; 1 4 5; 2 3 6]')
>>> T,Z = linalg.schur(A)
>>> T1,Z1 = linalg.schur(A,'complex')
>>> T2,Z2 = linalg.rsf2csf(T,Z)
>>> print T
[[ 9.90012467 1.78947961 -0.65498528]
[ 0. 0.54993766 -1.57754789]
[ 0. 0.51260928 0.54993766]]
>>> print T2
[[ 9.90012467 +0.00000000e+00j -0.32436598 +1.55463542e+00j
-0.88619748 +5.69027615e-01j]
[ 0.00000000 +0.00000000e+00j 0.54993766 +8.99258408e-01j
1.06493862 +1.37016050e-17j]
[ 0.00000000 +0.00000000e+00j 0.00000000 +0.00000000e+00j
0.54993766 -8.99258408e-01j]]
>>> print abs(T1-T2) # different
[[ 1.24357637e-14 2.09205364e+00 6.56028192e-01]
[ 0.00000000e+00 4.00296604e-16 1.83223097e+00]
[ 0.00000000e+00 0.00000000e+00 4.57756680e-16]]
>>> print abs(Z1-Z2) # different
[[ 0.06833781 1.10591375 0.23662249]
[ 0.11857169 0.5585604 0.29617525]
[ 0.12624999 0.75656818 0.22975038]]
>>> T,Z,T1,Z1,T2,Z2 = map(mat,(T,Z,T1,Z1,T2,Z2))
>>> print abs(A-Z*T*Z.H) # same
[[ 1.11022302e-16 4.44089210e-16 4.44089210e-16]
[ 4.44089210e-16 1.33226763e-15 8.88178420e-16]
[ 8.88178420e-16 4.44089210e-16 2.66453526e-15]]
>>> print abs(A-Z1*T1*Z1.H) # same
[[ 1.00043248e-15 2.22301403e-15 5.55749485e-15]
[ 2.88899660e-15 8.44927041e-15 9.77322008e-15]
[ 3.11291538e-15 1.15463228e-14 1.15464861e-14]]
>>> print abs(A-Z2*T2*Z2.H) # same
[[ 3.34058710e-16 8.88611201e-16 4.18773089e-18]
[ 1.48694940e-16 8.95109973e-16 8.92966151e-16]
[ 1.33228956e-15 1.33582317e-15 3.55373104e-15]]
Matrix Functions
----------------
Consider the function :math:`f\left(x\right)` with Taylor series expansion
.. math::
:nowrap:
\[ f\left(x\right)=\sum_{k=0}^{\infty}\frac{f^{\left(k\right)}\left(0\right)}{k!}x^{k}.\]
A matrix function can be defined using this Taylor series for the
square matrix :math:`\mathbf{A}` as
.. math::
:nowrap:
\[ f\left(\mathbf{A}\right)=\sum_{k=0}^{\infty}\frac{f^{\left(k\right)}\left(0\right)}{k!}\mathbf{A}^{k}.\]
While this serves as a useful representation of a matrix function, it
is rarely the best way to calculate one.
Exponential and logarithm functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The matrix exponential is one of the more common matrix functions. It
can be defined for square matrices as
.. math::
:nowrap:
\[ e^{\mathbf{A}}=\sum_{k=0}^{\infty}\frac{1}{k!}\mathbf{A}^{k}.\]
The command :obj:`linalg.expm3` uses this Taylor series definition to compute the matrix exponential.
Due to poor convergence properties it is not often used.
Another method to compute the matrix exponential is to find an
eigenvalue decomposition of :math:`\mathbf{A}` :
.. math::
:nowrap:
\[ \mathbf{A}=\mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^{-1}\]
and note that
.. math::
:nowrap:
\[ e^{\mathbf{A}}=\mathbf{V}e^{\boldsymbol{\Lambda}}\mathbf{V}^{-1}\]
where the matrix exponential of the diagonal matrix :math:`\boldsymbol{\Lambda}` is just the exponential of its elements. This method is implemented in :obj:`linalg.expm2` .
The preferred method for implementing the matrix exponential is to use
scaling and a Padé approximation for :math:`e^{x}` . This algorithm is
implemented as :obj:`linalg.expm` .
The matrix logarithm is defined as the inverse of the matrix
exponential:
.. math::
:nowrap:
\[ \mathbf{A}\equiv\exp\left(\log\left(\mathbf{A}\right)\right).\]
The matrix logarithm can be obtained with :obj:`linalg.logm` .
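A quick consistency check on an illustrative matrix:

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1.0, 0.3], [0.2, 1.5]])
>>> np.allclose(linalg.logm(linalg.expm(A)), A)   # logm inverts expm here
True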
Trigonometric functions
^^^^^^^^^^^^^^^^^^^^^^^
The trigonometric functions :math:`\sin` , :math:`\cos` , and
:math:`\tan` are implemented for matrices in :func:`linalg.sinm`,
:func:`linalg.cosm`, and :obj:`linalg.tanm` respectively. The matrix
sine and cosine can be defined using Euler's identity as
.. math::
:nowrap:
\begin{eqnarray*} \sin\left(\mathbf{A}\right) & = & \frac{e^{j\mathbf{A}}-e^{-j\mathbf{A}}}{2j}\\ \cos\left(\mathbf{A}\right) & = & \frac{e^{j\mathbf{A}}+e^{-j\mathbf{A}}}{2}.\end{eqnarray*}
The tangent is
.. math::
:nowrap:
\[ \tan\left(x\right)=\frac{\sin\left(x\right)}{\cos\left(x\right)}=\left[\cos\left(x\right)\right]^{-1}\sin\left(x\right)\]
and so the matrix tangent is defined as
.. math::
:nowrap:
\[ \left[\cos\left(\mathbf{A}\right)\right]^{-1}\sin\left(\mathbf{A}\right).\]
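A small numerical check of this identity (illustrative matrix only):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[0.1, 0.3], [0.2, 0.4]])
>>> np.allclose(linalg.tanm(A),
...             linalg.inv(linalg.cosm(A)).dot(linalg.sinm(A)))
True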
Hyperbolic trigonometric functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The hyperbolic trigonometric functions :math:`\sinh` , :math:`\cosh` ,
and :math:`\tanh` can also be defined for matrices using the familiar
definitions:
.. math::
:nowrap:
\begin{eqnarray*} \sinh\left(\mathbf{A}\right) & = & \frac{e^{\mathbf{A}}-e^{-\mathbf{A}}}{2}\\ \cosh\left(\mathbf{A}\right) & = & \frac{e^{\mathbf{A}}+e^{-\mathbf{A}}}{2}\\ \tanh\left(\mathbf{A}\right) & = & \left[\cosh\left(\mathbf{A}\right)\right]^{-1}\sinh\left(\mathbf{A}\right).\end{eqnarray*}
These matrix functions can be found using :obj:`linalg.sinhm`,
:obj:`linalg.coshm` , and :obj:`linalg.tanhm`.
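The analogous identity for the hyperbolic tangent can be checked the same way:

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[0.1, 0.3], [0.2, 0.4]])
>>> np.allclose(linalg.tanhm(A),
...             linalg.inv(linalg.coshm(A)).dot(linalg.sinhm(A)))
True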
Arbitrary function
^^^^^^^^^^^^^^^^^^
Finally, any arbitrary function that takes one complex number and
returns a complex number can be called as a matrix function using the
command :obj:`linalg.funm`. This command takes the matrix and an
arbitrary Python function. It then implements an algorithm from Golub
and Van Loan's book "Matrix Computations" to compute the function applied
to the matrix using a Schur decomposition. Note that *the function
needs to accept complex numbers* as input in order to work with this
algorithm. For example, the following code computes the zeroth-order
Bessel function applied to a matrix.
>>> from scipy import special, random, linalg
>>> A = random.rand(3,3)
>>> B = linalg.funm(A,lambda x: special.jv(0,x))
>>> print A
[[ 0.72578091 0.34105276 0.79570345]
[ 0.65767207 0.73855618 0.541453 ]
[ 0.78397086 0.68043507 0.4837898 ]]
>>> print B
[[ 0.72599893 -0.20545711 -0.22721101]
[-0.27426769 0.77255139 -0.23422637]
[-0.27612103 -0.21754832 0.7556849 ]]
>>> print linalg.eigvals(A)
[ 1.91262611+0.j 0.21846476+0.j -0.18296399+0.j]
>>> print special.jv(0, linalg.eigvals(A))
[ 0.27448286+0.j 0.98810383+0.j 0.99164854+0.j]
>>> print linalg.eigvals(B)
[ 0.27448286+0.j 0.98810383+0.j 0.99164854+0.j]
Note how, by virtue of how matrix analytic functions are defined,
the Bessel function has acted on the matrix eigenvalues.

Optimization (optimize)
=======================
.. sectionauthor:: Travis E. Oliphant
.. currentmodule:: scipy.optimize
There are several classical optimization algorithms provided by SciPy
in the :mod:`scipy.optimize` package. An overview of the module is
available using :func:`help` (or :func:`pydoc.help`):
.. literalinclude:: examples/5-1
The first four algorithms are unconstrained minimization algorithms
(:func:`fmin`: Nelder-Mead simplex, :func:`fmin_bfgs`: BFGS,
:func:`fmin_ncg`: Newton Conjugate Gradient, and :func:`leastsq`:
Levenberg-Marquardt). The last algorithm actually finds the roots of a
general function of possibly many variables. It is included in the
optimization package because at the (non-boundary) extreme points of a
function, the gradient is equal to zero.
Nelder-Mead Simplex algorithm (:func:`fmin`)
--------------------------------------------
The simplex algorithm is probably the simplest way to minimize a
fairly well-behaved function. The simplex algorithm requires only
function evaluations and is a good choice for simple minimization
problems. However, because it does not use any gradient evaluations,
it may take longer to find the minimum. To demonstrate the
minimization function consider the problem of minimizing the
Rosenbrock function of :math:`N` variables:
.. math::
:nowrap:
\[ f\left(\mathbf{x}\right)=\sum_{i=1}^{N-1}100\left(x_{i}-x_{i-1}^{2}\right)^{2}+\left(1-x_{i-1}\right)^{2}.\]
The minimum value of this function is 0 which is achieved when :math:`x_{i}=1.` This minimum can be found using the :obj:`fmin` routine as shown in the example below:
>>> from scipy.optimize import fmin
>>> def rosen(x):
... """The Rosenbrock function"""
... return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin(rosen, x0, xtol=1e-8)
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 339
Function evaluations: 571
>>> print xopt
[ 1. 1. 1. 1. 1.]
Another optimization algorithm that needs only function calls to find
the minimum is Powell's method available as :func:`fmin_powell`.
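A sketch of its use on the same problem (``rosen`` as defined above; ``disp=0`` merely silences the convergence report):

>>> from scipy.optimize import fmin_powell
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_powell(rosen, x0, xtol=1e-8, disp=0)
>>> print xopt
[ 1.  1.  1.  1.  1.]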
Broyden-Fletcher-Goldfarb-Shanno algorithm (:func:`fmin_bfgs`)
--------------------------------------------------------------
In order to converge more quickly to the solution, this routine uses
the gradient of the objective function. If the gradient is not given
by the user, then it is estimated using first-differences. The
Broyden-Fletcher-Goldfarb-Shanno (BFGS) method typically requires
fewer function calls than the simplex algorithm even when the gradient
must be estimated.
To demonstrate this algorithm, the Rosenbrock function is again used.
The gradient of the Rosenbrock function is the vector:
.. math::
:nowrap:
\begin{eqnarray*} \frac{\partial f}{\partial x_{j}} & = & \sum_{i=1}^{N}200\left(x_{i}-x_{i-1}^{2}\right)\left(\delta_{i,j}-2x_{i-1}\delta_{i-1,j}\right)-2\left(1-x_{i-1}\right)\delta_{i-1,j}.\\ & = & 200\left(x_{j}-x_{j-1}^{2}\right)-400x_{j}\left(x_{j+1}-x_{j}^{2}\right)-2\left(1-x_{j}\right).\end{eqnarray*}
This expression is valid for the interior derivatives. Special cases
are
.. math::
:nowrap:
\begin{eqnarray*} \frac{\partial f}{\partial x_{0}} & = & -400x_{0}\left(x_{1}-x_{0}^{2}\right)-2\left(1-x_{0}\right),\\ \frac{\partial f}{\partial x_{N-1}} & = & 200\left(x_{N-1}-x_{N-2}^{2}\right).\end{eqnarray*}
A Python function which computes this gradient is constructed by the
code-segment:
>>> def rosen_der(x):
... xm = x[1:-1]
... xm_m1 = x[:-2]
... xm_p1 = x[2:]
... der = zeros_like(x)
... der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
... der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
... der[-1] = 200*(x[-1]-x[-2]**2)
... return der
The calling signature for the BFGS minimization algorithm is similar
to :obj:`fmin` with the addition of the *fprime* argument. An example
usage of :obj:`fmin_bfgs` is shown in the following example which
minimizes the Rosenbrock function.
>>> from scipy.optimize import fmin_bfgs
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_bfgs(rosen, x0, fprime=rosen_der)
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 53
Function evaluations: 65
Gradient evaluations: 65
>>> print xopt
[ 1. 1. 1. 1. 1.]
Newton-Conjugate-Gradient (:func:`fmin_ncg`)
--------------------------------------------
The method which requires the fewest function calls and is therefore
often the fastest method to minimize functions of many variables is
:obj:`fmin_ncg`. This method is a modified Newton's method and uses a
conjugate gradient algorithm to (approximately) invert the local
Hessian. Newton's method is based on fitting the function locally to
a quadratic form:
.. math::
:nowrap:
\[ f\left(\mathbf{x}\right)\approx f\left(\mathbf{x}_{0}\right)+\nabla f\left(\mathbf{x}_{0}\right)\cdot\left(\mathbf{x}-\mathbf{x}_{0}\right)+\frac{1}{2}\left(\mathbf{x}-\mathbf{x}_{0}\right)^{T}\mathbf{H}\left(\mathbf{x}_{0}\right)\left(\mathbf{x}-\mathbf{x}_{0}\right).\]
where :math:`\mathbf{H}\left(\mathbf{x}_{0}\right)` is a matrix of second-derivatives (the Hessian). If the Hessian is
positive definite then the local minimum of this function can be found
by setting the gradient of the quadratic form to zero, resulting in
.. math::
:nowrap:
\[ \mathbf{x}_{\textrm{opt}}=\mathbf{x}_{0}-\mathbf{H}^{-1}\nabla f.\]
The inverse of the Hessian is evaluated using the conjugate-gradient
method. An example of employing this method to minimize the
Rosenbrock function is given below. To take full advantage of the
Newton-CG method, a function which computes the Hessian must be
provided. The Hessian matrix itself does not need to be constructed,
only a vector which is the product of the Hessian with an arbitrary
vector needs to be available to the minimization routine. As a result,
the user can provide either a function to compute the Hessian matrix,
or a function to compute the product of the Hessian with an arbitrary
vector.
Full Hessian example:
^^^^^^^^^^^^^^^^^^^^^
The Hessian of the Rosenbrock function is
.. math::
:nowrap:
\begin{eqnarray*} H_{ij}=\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}} & = & 200\left(\delta_{i,j}-2x_{i-1}\delta_{i-1,j}\right)-400x_{i}\left(\delta_{i+1,j}-2x_{i}\delta_{i,j}\right)-400\delta_{i,j}\left(x_{i+1}-x_{i}^{2}\right)+2\delta_{i,j},\\ & = & \left(202+1200x_{i}^{2}-400x_{i+1}\right)\delta_{i,j}-400x_{i}\delta_{i+1,j}-400x_{i-1}\delta_{i-1,j},\end{eqnarray*}
if :math:`i,j\in\left[1,N-2\right]` with :math:`i,j\in\left[0,N-1\right]` defining the :math:`N\times N` matrix. Other non-zero entries of the matrix are
.. math::
:nowrap:
\begin{eqnarray*} \frac{\partial^{2}f}{\partial x_{0}^{2}} & = & 1200x_{0}^{2}-400x_{1}+2,\\ \frac{\partial^{2}f}{\partial x_{0}\partial x_{1}}=\frac{\partial^{2}f}{\partial x_{1}\partial x_{0}} & = & -400x_{0},\\ \frac{\partial^{2}f}{\partial x_{N-1}\partial x_{N-2}}=\frac{\partial^{2}f}{\partial x_{N-2}\partial x_{N-1}} & = & -400x_{N-2},\\ \frac{\partial^{2}f}{\partial x_{N-1}^{2}} & = & 200.\end{eqnarray*}
For example, the Hessian when :math:`N=5` is
.. math::
:nowrap:
\[ \mathbf{H}=\left[\begin{array}{ccccc} 1200x_{0}^{2}-400x_{1}+2 & -400x_{0} & 0 & 0 & 0\\ -400x_{0} & 202+1200x_{1}^{2}-400x_{2} & -400x_{1} & 0 & 0\\ 0 & -400x_{1} & 202+1200x_{2}^{2}-400x_{3} & -400x_{2} & 0\\ 0 & & -400x_{2} & 202+1200x_{3}^{2}-400x_{4} & -400x_{3}\\ 0 & 0 & 0 & -400x_{3} & 200\end{array}\right].\]
The code which computes this Hessian along with the code to minimize
the function using :obj:`fmin_ncg` is shown in the following example:
>>> from scipy.optimize import fmin_ncg
>>> def rosen_hess(x):
... x = asarray(x)
... H = diag(-400*x[:-1],1) - diag(400*x[:-1],-1)
... diagonal = zeros_like(x)
... diagonal[0] = 1200*x[0]-400*x[1]+2
... diagonal[-1] = 200
... diagonal[1:-1] = 202 + 1200*x[1:-1]**2 - 400*x[2:]
... H = H + diag(diagonal)
... return H
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_ncg(rosen, x0, rosen_der, fhess=rosen_hess, avextol=1e-8)
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 23
Function evaluations: 26
Gradient evaluations: 23
Hessian evaluations: 23
>>> print xopt
[ 1. 1. 1. 1. 1.]
Hessian product example:
^^^^^^^^^^^^^^^^^^^^^^^^
For larger minimization problems, storing the entire Hessian matrix
can consume considerable time and memory. The Newton-CG algorithm only
needs the product of the Hessian times an arbitrary vector. As a
result, the user can supply code to compute this product rather than
the full Hessian by setting the *fhess_p* keyword to the desired
function. The *fhess_p* function should take the minimization vector as
the first argument and the arbitrary vector as the second
argument. Any extra arguments passed to the function to be minimized
will also be passed to this function. If possible, using Newton-CG
with the Hessian product option is probably the fastest way to
minimize the function.
In this case, the product of the Rosenbrock Hessian with an arbitrary
vector is not difficult to compute. If :math:`\mathbf{p}` is the arbitrary vector, then :math:`\mathbf{H}\left(\mathbf{x}\right)\mathbf{p}` has elements:
.. math::
:nowrap:
\[ \mathbf{H}\left(\mathbf{x}\right)\mathbf{p}=\left[\begin{array}{c} \left(1200x_{0}^{2}-400x_{1}+2\right)p_{0}-400x_{0}p_{1}\\ \vdots\\ -400x_{i-1}p_{i-1}+\left(202+1200x_{i}^{2}-400x_{i+1}\right)p_{i}-400x_{i}p_{i+1}\\ \vdots\\ -400x_{N-2}p_{N-2}+200p_{N-1}\end{array}\right].\]
Code which makes use of the *fhess_p* keyword to minimize the
Rosenbrock function using :obj:`fmin_ncg` follows:
>>> from scipy.optimize import fmin_ncg
>>> def rosen_hess_p(x,p):
... x = asarray(x)
... Hp = zeros_like(x)
... Hp[0] = (1200*x[0]**2 - 400*x[1] + 2)*p[0] - 400*x[0]*p[1]
... Hp[1:-1] = -400*x[:-2]*p[:-2]+(202+1200*x[1:-1]**2-400*x[2:])*p[1:-1] \
... -400*x[1:-1]*p[2:]
... Hp[-1] = -400*x[-2]*p[-2] + 200*p[-1]
... return Hp
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_ncg(rosen, x0, rosen_der, fhess_p=rosen_hess_p, avextol=1e-8)
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 22
Function evaluations: 25
Gradient evaluations: 22
Hessian evaluations: 54
>>> print xopt
[ 1. 1. 1. 1. 1.]
Least-square fitting (:func:`leastsq`)
--------------------------------------
All of the previously-explained minimization procedures can be used to
solve a least-squares problem provided the appropriate objective
function is constructed. For example, suppose it is desired to fit a
set of data :math:`\left\{\mathbf{x}_{i}, \mathbf{y}_{i}\right\}`
to a known model,
:math:`\mathbf{y}=\mathbf{f}\left(\mathbf{x},\mathbf{p}\right)`
where :math:`\mathbf{p}` is a vector of parameters for the model that
need to be found. A common method for determining which parameter
vector gives the best fit to the data is to minimize the sum of squares
of the residuals. The residual is usually defined for each observed
data-point as
.. math::
:nowrap:
\[ e_{i}\left(\mathbf{p},\mathbf{y}_{i},\mathbf{x}_{i}\right)=\left\Vert \mathbf{y}_{i}-\mathbf{f}\left(\mathbf{x}_{i},\mathbf{p}\right)\right\Vert .\]
An objective function to pass to any of the previous minimization
algorithms to obtain a least-squares fit is
.. math::
:nowrap:
\[ J\left(\mathbf{p}\right)=\sum_{i=0}^{N-1}e_{i}^{2}\left(\mathbf{p}\right).\]
The :obj:`leastsq` algorithm performs this squaring and summing of the
residuals automatically. It takes as an input argument the vector
function :math:`\mathbf{e}\left(\mathbf{p}\right)` and returns the
value of :math:`\mathbf{p}` which minimizes
:math:`J\left(\mathbf{p}\right)=\mathbf{e}^{T}\mathbf{e}`
directly. The user is also encouraged to provide the Jacobian matrix
of the function (with derivatives down the columns or across the
rows). If the Jacobian is not provided, it is estimated.
An example should clarify the usage. Suppose it is believed some
measured data follow a sinusoidal pattern
.. math::
:nowrap:
\[ y_{i}=A\sin\left(2\pi kx_{i}+\theta\right)\]
where the parameters :math:`A,` :math:`k` , and :math:`\theta` are unknown. The residual vector is
.. math::
:nowrap:
\[ e_{i}=\left|y_{i}-A\sin\left(2\pi kx_{i}+\theta\right)\right|.\]
By defining a function to compute the residuals and selecting an
appropriate starting position, the least-squares fit routine can be
used to find the best-fit parameters :math:`\hat{A},\,\hat{k},\,\hat{\theta}`.
This is shown in the following example:
.. plot::
>>> from numpy import *
>>> x = arange(0,6e-2,6e-2/30)
>>> A,k,theta = 10, 1.0/3e-2, pi/6
>>> y_true = A*sin(2*pi*k*x+theta)
>>> y_meas = y_true + 2*random.randn(len(x))
>>> def residuals(p, y, x):
... A,k,theta = p
... err = y-A*sin(2*pi*k*x+theta)
... return err
>>> def peval(x, p):
... return p[0]*sin(2*pi*p[1]*x+p[2])
>>> p0 = [8, 1/2.3e-2, pi/3]
>>> print array(p0)
[ 8. 43.4783 1.0472]
>>> from scipy.optimize import leastsq
>>> plsq = leastsq(residuals, p0, args=(y_meas, x))
>>> print plsq[0]
[ 10.9437 33.3605 0.5834]
>>> print array([A, k, theta])
[ 10. 33.3333 0.5236]
>>> import matplotlib.pyplot as plt
>>> plt.plot(x,peval(x,plsq[0]),x,y_meas,'o',x,y_true)
>>> plt.title('Least-squares fit to noisy data')
>>> plt.legend(['Fit', 'Noisy', 'True'])
>>> plt.show()
.. :caption: Least-square fitting to noisy data using
.. :obj:`scipy.optimize.leastsq`
.. _tutorial-sqlsp:
Sequential Least-square fitting with constraints (:func:`fmin_slsqp`)
---------------------------------------------------------------------
This module implements the Sequential Least SQuares Programming optimization algorithm (SLSQP).
.. math::
:nowrap:
\begin{eqnarray*} \min F(x) \\ \text{subject to } & C_j(x) = 0 , &j = 1,...,\text{MEQ}\\
& C_j(x) \geq 0 , &j = \text{MEQ}+1,...,M\\
& XL \leq x \leq XU , &i = 1,...,N. \end{eqnarray*}
The following script shows examples for how constraints can be specified.
::
"""
This script tests fmin_slsqp using Example 14.4 from Numerical Methods for
Engineers by Steven Chapra and Raymond Canale. This example maximizes the
function f(x) = 2*x*y + 2*x - x**2 - 2*y**2, which has a maximum at x=2,y=1.
"""
from scipy.optimize import fmin_slsqp
from numpy import array, asfarray, finfo,ones, sqrt, zeros
def testfunc(d,*args):
"""
Arguments:
d - A list of two elements, where d[0] represents x and
d[1] represents y in the following equation.
sign - A multiplier for f. Since we want to optimize it, and the scipy
optimizers can only minimize functions, we need to multiply it by
-1 to achieve the desired solution
Returns:
2*x*y + 2*x - x**2 - 2*y**2
"""
try:
sign = args[0]
except:
sign = 1.0
x = d[0]
y = d[1]
return sign*(2*x*y + 2*x - x**2 - 2*y**2)
def testfunc_deriv(d,*args):
""" This is the derivative of testfunc, returning a numpy array
representing df/dx and df/dy
"""
try:
sign = args[0]
except:
sign = 1.0
x = d[0]
y = d[1]
dfdx = sign*(-2*x + 2*y + 2)
dfdy = sign*(2*x - 4*y)
return array([ dfdx, dfdy ],float)
from time import time
print '\n\n'
print "Unbounded optimization. Derivatives approximated."
t0 = time()
x = fmin_slsqp(testfunc, [-1.0,1.0], args=(-1.0,), iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results",x
print "\n\n"
print "Unbounded optimization. Derivatives provided."
t0 = time()
x = fmin_slsqp(testfunc, [-1.0,1.0], args=(-1.0,), iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results",x
print "\n\n"
print "Bound optimization. Derivatives approximated."
t0 = time()
x = fmin_slsqp(testfunc, [-1.0,1.0], args=(-1.0,),
eqcons=[lambda x, y: x[0]-x[1] ], iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results",x
print "\n\n"
print "Bound optimization (equality constraints). Derivatives provided."
t0 = time()
x = fmin_slsqp(testfunc, [-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,),
eqcons=[lambda x, y: x[0]-x[1] ], iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results",x
print "\n\n"
print "Bound optimization (equality and inequality constraints)."
print "Derivatives provided."
t0 = time()
x = fmin_slsqp(testfunc,[-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,),
eqcons=[lambda x, y: x[0]-x[1] ],
ieqcons=[lambda x, y: x[0]-.5], iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results",x
print "\n\n"
def test_eqcons(d,*args):
try:
sign = args[0]
except:
sign = 1.0
x = d[0]
y = d[1]
return array([ x**3-y ])
def test_ieqcons(d,*args):
try:
sign = args[0]
except:
sign = 1.0
x = d[0]
y = d[1]
return array([ y-1 ])
print "Bound optimization (equality and inequality constraints)."
print "Derivatives provided via functions."
t0 = time()
x = fmin_slsqp(testfunc, [-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,),
f_eqcons=test_eqcons, f_ieqcons=test_ieqcons,
iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results",x
print "\n\n"
def test_fprime_eqcons(d,*args):
try:
sign = args[0]
except:
sign = 1.0
x = d[0]
y = d[1]
return array([ 3.0*(x**2.0), -1.0 ])
def test_fprime_ieqcons(d,*args):
try:
sign = args[0]
except:
sign = 1.0
x = d[0]
y = d[1]
return array([ 0.0, 1.0 ])
print "Bound optimization (equality and inequality constraints)."
print "Derivatives provided via functions."
print "Constraint jacobians provided via functions"
t0 = time()
x = fmin_slsqp(testfunc,[-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,),
f_eqcons=test_eqcons, f_ieqcons=test_ieqcons,
fprime_eqcons=test_fprime_eqcons,
fprime_ieqcons=test_fprime_ieqcons, iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results",x
print "\n\n"
Scalar function minimizers
--------------------------
Often only the minimum of a scalar function is needed (a scalar
function is one that takes a scalar as input and returns a scalar
output). In these circumstances, other optimization techniques have
been developed that can work faster.
Unconstrained minimization (:func:`brent`)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are actually two methods that can be used to minimize a scalar
function (:obj:`brent` and :func:`golden`), but :obj:`golden` is
included only for academic purposes and should rarely be used. The
brent method uses Brent's algorithm to locate a minimum. Optimally,
a bracket should be given which contains the minimum desired. A
bracket is a triple :math:`\left(a,b,c\right)` such that
:math:`f\left(a\right)>f\left(b\right)<f\left(c\right)` and
:math:`a<b<c` . If this is not given, then alternatively two starting
points can be chosen and a bracket will be found from these points
using a simple marching algorithm. If these two starting points are
not provided, 0 and 1 will be used (this may not be the right choice
for your function and may result in an unexpected minimum being returned).
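A minimal sketch on a simple quadratic (the function and bracket are illustrative only):

>>> from scipy.optimize import brent
>>> f = lambda x: (x - 2.0)**2 + 1.0
>>> xmin = brent(f, brack=(0, 1, 4))   # valid bracket: f(1) < f(0), f(1) < f(4)
>>> print round(xmin, 6)
2.0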
Bounded minimization (:func:`fminbound`)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Thus far all of the minimization routines described have been
unconstrained minimization routines. Very often, however, there are
constraints that can be placed on the solution space before
minimization occurs. The :obj:`fminbound` function is an example of a
constrained minimization procedure that provides a rudimentary
interval constraint for scalar functions. The interval constraint
allows the minimization to occur only between two fixed endpoints.
For example, to find the minimum of :math:`J_{1}\left(x\right)` near :math:`x=5` , :obj:`fminbound` can be called using the interval :math:`\left[4,7\right]` as a constraint. The result is :math:`x_{\textrm{min}}=5.3314` :
>>> from scipy.special import j1
>>> from scipy.optimize import fminbound
>>> xmin = fminbound(j1, 4, 7)
>>> print xmin
5.33144184241
Root finding
------------
Sets of equations
^^^^^^^^^^^^^^^^^
To find the roots of a polynomial, the command :obj:`roots
<scipy.roots>` is useful. To find a root of a set of non-linear
equations, the command :obj:`fsolve` is needed. For example, the
following example finds the roots of the single-variable
transcendental equation
.. math::
:nowrap:
\[ x+2\cos\left(x\right)=0,\]
and the set of non-linear equations
.. math::
:nowrap:
\begin{eqnarray*} x_{0}\cos\left(x_{1}\right) & = & 4,\\ x_{0}x_{1}-x_{1} & = & 5.\end{eqnarray*}
The results are :math:`x=-1.0299` and :math:`x_{0}=6.5041,\, x_{1}=0.9084` .
>>> def func(x):
... return x + 2*cos(x)
>>> def func2(x):
... out = [x[0]*cos(x[1]) - 4]
... out.append(x[1]*x[0] - x[1] - 5)
... return out
>>> from scipy.optimize import fsolve
>>> x0 = fsolve(func, 0.3)
>>> print x0
-1.02986652932
>>> x02 = fsolve(func2, [1, 1])
>>> print x02
[ 6.50409711 0.90841421]
Scalar function root finding
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If one has a single-variable equation, there are four different root
finder algorithms that can be tried. Each of these root finding
algorithms requires the endpoints of an interval where a root is
suspected (because the function changes signs). In general
:obj:`brentq` is the best choice, but the other methods may be useful
in certain circumstances or for academic purposes.
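For example, :obj:`brentq` can solve the transcendental equation from above once an interval with a sign change is supplied:

>>> from numpy import cos
>>> from scipy.optimize import brentq
>>> root = brentq(lambda x: x + 2*cos(x), -2, 2)   # f(-2) < 0 < f(2)
>>> print round(root, 4)
-1.0299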
Fixed-point solving
^^^^^^^^^^^^^^^^^^^
A problem closely related to finding the zeros of a function is the
problem of finding a fixed-point of a function. A fixed point of a
function is the point at which evaluation of the function returns the
point: :math:`g\left(x\right)=x.` Clearly the fixed point of :math:`g`
is the root of :math:`f\left(x\right)=g\left(x\right)-x.`
Equivalently, the root of :math:`f` is the fixed point of
:math:`g\left(x\right)=f\left(x\right)+x.` The routine
:obj:`fixed_point` provides a simple iterative method using Aitken's
sequence acceleration to estimate the fixed point of :math:`g` given a
starting point.
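For example, the classic fixed point of the cosine function (illustration only):

>>> from numpy import cos
>>> from scipy.optimize import fixed_point
>>> xstar = fixed_point(cos, 1.0)   # solves x = cos(x)
>>> print round(xstar, 6)
0.739085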

Signal Processing (signal)
==========================
.. sectionauthor:: Travis E. Oliphant
The signal processing toolbox currently contains some filtering
functions, a limited set of filter design tools, and a few B-spline
interpolation algorithms for one- and two-dimensional data. While the
B-spline algorithms could technically be placed under the
interpolation category, they are included here because they only work
with equally-spaced data and make heavy use of filter-theory and
transfer-function formalism to provide a fast B-spline transform. To
understand this section you will need to understand that a signal in
SciPy is an array of real or complex numbers.
B-splines
---------
A B-spline is an approximation of a continuous function over a finite
domain in terms of B-spline coefficients and knot points. If the knot
points are equally spaced with spacing :math:`\Delta x` , then the B-spline approximation to a 1-dimensional function is the
finite-basis expansion
.. math::
:nowrap:
\[ y\left(x\right)\approx\sum_{j}c_{j}\beta^{o}\left(\frac{x}{\Delta x}-j\right).\]
In two dimensions with knot-spacing :math:`\Delta x` and :math:`\Delta y` , the function representation is
.. math::
:nowrap:
\[ z\left(x,y\right)\approx\sum_{j}\sum_{k}c_{jk}\beta^{o}\left(\frac{x}{\Delta x}-j\right)\beta^{o}\left(\frac{y}{\Delta y}-k\right).\]
In these expressions, :math:`\beta^{o}\left(\cdot\right)` is the space-limited B-spline basis function of order :math:`o` . The requirement of equally-spaced knot points and equally-spaced
data points allows the development of fast (inverse-filtering)
algorithms for determining the coefficients, :math:`c_{j}` , from sample-values, :math:`y_{n}` . Unlike the general spline interpolation algorithms, these algorithms
can quickly find the spline coefficients for large images.
The advantage of representing a set of samples via B-spline basis
functions is that continuous-domain operators (derivatives,
resampling, integrals, etc.) which assume that the data samples are drawn
from an underlying continuous function can be computed with relative
ease from the spline coefficients. For example, the second-derivative
of a spline is
.. math::
:nowrap:
\[ y{}^{\prime\prime}\left(x\right)=\frac{1}{\Delta x^{2}}\sum_{j}c_{j}\beta^{o\prime\prime}\left(\frac{x}{\Delta x}-j\right).\]
Using the property of B-splines that
.. math::
:nowrap:
\[ \frac{d^{2}\beta^{o}\left(w\right)}{dw^{2}}=\beta^{o-2}\left(w+1\right)-2\beta^{o-2}\left(w\right)+\beta^{o-2}\left(w-1\right)\]
it can be seen that
.. math::
:nowrap:
\[ y^{\prime\prime}\left(x\right)=\frac{1}{\Delta x^{2}}\sum_{j}c_{j}\left[\beta^{o-2}\left(\frac{x}{\Delta x}-j+1\right)-2\beta^{o-2}\left(\frac{x}{\Delta x}-j\right)+\beta^{o-2}\left(\frac{x}{\Delta x}-j-1\right)\right].\]
If :math:`o=3` , then at the sample points,
.. math::
:nowrap:
\begin{eqnarray*} \Delta x^{2}\left.y^{\prime\prime}\left(x\right)\right|_{x=n\Delta x} & = & \sum_{j}c_{j}\delta_{n-j+1}-2c_{j}\delta_{n-j}+c_{j}\delta_{n-j-1},\\ & = & c_{n+1}-2c_{n}+c_{n-1}.\end{eqnarray*}
Thus, the second-derivative signal can be easily calculated from the
spline fit. If desired, smoothing splines can be found to make the
second-derivative less sensitive to random errors.
The savvy reader will have already noticed that the data samples are
related to the knot coefficients via a convolution operator, so that
simple convolution with the sampled B-spline function recovers the
original data from the spline coefficients. The output of convolutions
can change depending on how boundaries are handled (this becomes
increasingly important as the number of dimensions in the
dataset increases). The algorithms relating to B-splines in the
signal-processing subpackage assume mirror-symmetric boundary conditions.
Thus, spline coefficients are computed based on that assumption, and
data-samples can be recovered exactly from the spline coefficients by
assuming them to be mirror-symmetric also.
Currently the package provides functions for determining quadratic
and cubic spline coefficients from equally spaced samples in
one- and two-dimensions (:func:`signal.qspline1d`,
:func:`signal.qspline2d`, :func:`signal.cspline1d`,
:func:`signal.cspline2d`). The package also supplies a function (
:obj:`signal.bspline` ) for evaluating the B-spline basis function,
:math:`\beta^{o}\left(x\right)` for arbitrary order and :math:`x.` For
large :math:`o` , the B-spline basis function can be approximated well
by a zero-mean Gaussian function with variance
:math:`\sigma_{o}^{2}=\left(o+1\right)/12` :
.. math::
:nowrap:
\[ \beta^{o}\left(x\right)\approx\frac{1}{\sqrt{2\pi\sigma_{o}^{2}}}\exp\left(-\frac{x^{2}}{2\sigma_{o}^{2}}\right).\]
A function to compute this Gaussian for arbitrary :math:`x` and
:math:`o` is also available ( :obj:`signal.gauss_spline` ). The
following code and figure use spline filtering to compute an
edge-image (the second derivative of a smoothed spline) of Lena's face,
which is an array returned by the command :func:`lena`. The command
:obj:`signal.sepfir2d` was used to apply a separable two-dimensional
FIR filter with mirror-symmetric boundary conditions to the spline
coefficients. This function is ideally suited for reconstructing
samples from spline coefficients and is faster than
:obj:`signal.convolve2d` which convolves arbitrary two-dimensional
filters and allows for choosing mirror-symmetric boundary conditions.
.. plot::
>>> from numpy import *
>>> from scipy import signal, misc
>>> import matplotlib.pyplot as plt
>>> image = misc.lena().astype(float32)
>>> derfilt = array([1.0,-2,1.0],float32)
>>> ck = signal.cspline2d(image,8.0)
>>> deriv = signal.sepfir2d(ck, derfilt, [1]) + \
...         signal.sepfir2d(ck, [1], derfilt)
Alternatively we could have done::
laplacian = array([[0,1,0],[1,-4,1],[0,1,0]],float32)
deriv2 = signal.convolve2d(ck,laplacian,mode='same',boundary='symm')
>>> plt.figure()
>>> plt.imshow(image)
>>> plt.gray()
>>> plt.title('Original image')
>>> plt.show()
>>> plt.figure()
>>> plt.imshow(deriv)
>>> plt.gray()
>>> plt.title('Output of spline edge filter')
>>> plt.show()
.. :caption: Example of using smoothing splines to filter images.
Filtering
---------
Filtering is a generic name for any system that modifies an input
signal in some way. In SciPy a signal can be thought of as a Numpy
array. There are different kinds of filters for different kinds of
operations. There are two broad kinds of filtering operations: linear
and non-linear. Linear filters can always be reduced to multiplication
of the flattened Numpy array by an appropriate matrix resulting in
another flattened Numpy array. Of course, this is not usually the best
way to compute the filter as the matrices and vectors involved may be
huge. For example, filtering a :math:`512 \times 512` image with this
method would require multiplication of a :math:`512^2 \times 512^2`
matrix with a :math:`512^2` vector. Just trying to store the
:math:`512^2 \times 512^2` matrix using a standard Numpy array would
require :math:`68,719,476,736` elements. At 4 bytes per element this
would require :math:`256\textrm{GB}` of memory. In most applications
most of the elements of this matrix are zero and a different method
for computing the output of the filter is employed.
Convolution/Correlation
^^^^^^^^^^^^^^^^^^^^^^^
Many linear filters also have the property of shift-invariance. This
means that the filtering operation is the same at different locations
in the signal and it implies that the filtering matrix can be
constructed from knowledge of one row (or column) of the matrix alone.
In this case, the matrix multiplication can be accomplished using
Fourier transforms.
Let :math:`x\left[n\right]` define a one-dimensional signal indexed by the integer :math:`n.` Full convolution of two one-dimensional signals can be expressed as
.. math::
:nowrap:
\[ y\left[n\right]=\sum_{k=-\infty}^{\infty}x\left[k\right]h\left[n-k\right].\]
This equation can only be implemented directly if we limit the
sequences to finite-support sequences that can be stored in a
computer, choose :math:`n=0` to be the starting point of both
sequences, let :math:`K+1` be the length of :math:`x` (so that
:math:`x\left[n\right]=0` for all :math:`n>K` ), and let :math:`M+1` be
the length of :math:`h` (so that :math:`h\left[n\right]=0` for all :math:`n>M` );
then the discrete convolution expression is
.. math::
:nowrap:
\[ y\left[n\right]=\sum_{k=\max\left(n-M,0\right)}^{\min\left(n,K\right)}x\left[k\right]h\left[n-k\right].\]
For convenience assume :math:`K\geq M.` Then, more explicitly, the output of this operation is
.. math::
:nowrap:
\begin{eqnarray*} y\left[0\right] & = & x\left[0\right]h\left[0\right]\\ y\left[1\right] & = & x\left[0\right]h\left[1\right]+x\left[1\right]h\left[0\right]\\ y\left[2\right] & = & x\left[0\right]h\left[2\right]+x\left[1\right]h\left[1\right]+x\left[2\right]h\left[0\right]\\ \vdots & \vdots & \vdots\\ y\left[M\right] & = & x\left[0\right]h\left[M\right]+x\left[1\right]h\left[M-1\right]+\cdots+x\left[M\right]h\left[0\right]\\ y\left[M+1\right] & = & x\left[1\right]h\left[M\right]+x\left[2\right]h\left[M-1\right]+\cdots+x\left[M+1\right]h\left[0\right]\\ \vdots & \vdots & \vdots\\ y\left[K\right] & = & x\left[K-M\right]h\left[M\right]+\cdots+x\left[K\right]h\left[0\right]\\ y\left[K+1\right] & = & x\left[K+1-M\right]h\left[M\right]+\cdots+x\left[K\right]h\left[1\right]\\ \vdots & \vdots & \vdots\\ y\left[K+M-1\right] & = & x\left[K-1\right]h\left[M\right]+x\left[K\right]h\left[M-1\right]\\ y\left[K+M\right] & = & x\left[K\right]h\left[M\right].\end{eqnarray*}
Thus, the full discrete convolution of two finite sequences of lengths :math:`K+1` and :math:`M+1` respectively results in a finite sequence of length :math:`K+M+1=\left(K+1\right)+\left(M+1\right)-1.`
One-dimensional convolution is implemented in SciPy with the function
``signal.convolve`` . This function takes as inputs the signals
:math:`x,` :math:`h` , and an optional flag and returns the signal
:math:`y.` The optional flag allows for specification of which part of
the output signal to return. The default value of 'full' returns the
entire signal. If the flag has a value of 'same' then only the middle
:math:`K+1` values are returned, starting at :math:`y\left[\left\lfloor
\frac{M}{2}\right\rfloor \right]` , so that the output has the same
length as the largest input. If the flag has a value of 'valid' then
only the middle :math:`K-M+1=\left(K+1\right)-\left(M+1\right)+1`
output values are returned: those that depend on all of the
values of the smallest input, from :math:`h\left[0\right]` to
:math:`h\left[M\right].` In other words, only the values
:math:`y\left[M\right]` to :math:`y\left[K\right]` inclusive are
returned.
This same function ``signal.convolve`` can actually take :math:`N`
-dimensional arrays as inputs and will return the :math:`N`
-dimensional convolution of the two arrays. The same input flags are
available for that case as well.
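A small sketch showing the output modes on short sequences (values chosen only for illustration):

>>> from scipy import signal
>>> x = [1, 2, 3]
>>> h = [0, 1, 0.5]
>>> signal.convolve(x, h)                # 'full': length 3 + 3 - 1 = 5
array([ 0. ,  1. ,  2.5,  4. ,  1.5])
>>> signal.convolve(x, h, mode='same')   # middle values, same length as x
array([ 1. ,  2.5,  4. ])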
Correlation is very similar to convolution except that the minus sign
becomes a plus sign. Thus
.. math::
:nowrap:
\[ w\left[n\right]=\sum_{k=-\infty}^{\infty}y\left[k\right]x\left[n+k\right]\]
is the (cross) correlation of the signals :math:`y` and :math:`x.` For finite-length signals with :math:`y\left[n\right]=0` outside of the range :math:`\left[0,K\right]` and :math:`x\left[n\right]=0` outside of the range :math:`\left[0,M\right],` the summation can simplify to
.. math::
:nowrap:
\[ w\left[n\right]=\sum_{k=\max\left(0,-n\right)}^{\min\left(K,M-n\right)}y\left[k\right]x\left[n+k\right].\]
Assuming again that :math:`K\geq M` this is
.. math::
:nowrap:
\begin{eqnarray*} w\left[-K\right] & = & y\left[K\right]x\left[0\right]\\ w\left[-K+1\right] & = & y\left[K-1\right]x\left[0\right]+y\left[K\right]x\left[1\right]\\ \vdots & \vdots & \vdots\\ w\left[M-K\right] & = & y\left[K-M\right]x\left[0\right]+y\left[K-M+1\right]x\left[1\right]+\cdots+y\left[K\right]x\left[M\right]\\ w\left[M-K+1\right] & = & y\left[K-M-1\right]x\left[0\right]+\cdots+y\left[K-1\right]x\left[M\right]\\ \vdots & \vdots & \vdots\\ w\left[-1\right] & = & y\left[1\right]x\left[0\right]+y\left[2\right]x\left[1\right]+\cdots+y\left[M+1\right]x\left[M\right]\\ w\left[0\right] & = & y\left[0\right]x\left[0\right]+y\left[1\right]x\left[1\right]+\cdots+y\left[M\right]x\left[M\right]\\ w\left[1\right] & = & y\left[0\right]x\left[1\right]+y\left[1\right]x\left[2\right]+\cdots+y\left[M-1\right]x\left[M\right]\\ w\left[2\right] & = & y\left[0\right]x\left[2\right]+y\left[1\right]x\left[3\right]+\cdots+y\left[M-2\right]x\left[M\right]\\ \vdots & \vdots & \vdots\\ w\left[M-1\right] & = & y\left[0\right]x\left[M-1\right]+y\left[1\right]x\left[M\right]\\ w\left[M\right] & = & y\left[0\right]x\left[M\right].\end{eqnarray*}
The SciPy function ``signal.correlate`` implements this
operation. Equivalent flags are available for this operation to return
the full :math:`K+M+1` length sequence ('full') or a sequence with the
same size as the largest sequence starting at
:math:`w\left[-K+\left\lfloor \frac{M-1}{2}\right\rfloor \right]`
('same') or a sequence where the values depend on all the values of
the smallest sequence ('valid'). This final option returns the
:math:`K-M+1` values :math:`w\left[M-K\right]` to
:math:`w\left[0\right]` inclusive.
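For example, a minimal sketch (note that full correlation with a sequence equals full convolution with that sequence reversed):
>>> from scipy import signal
>>> y = [1, 2, 3]
>>> x = [0, 1, 0.5]
>>> signal.correlate(y, x)           # full cross-correlation
array([ 0.5,  2. ,  3.5,  3. ,  0. ])
>>> signal.convolve(y, x[::-1])      # same values via convolution
array([ 0.5,  2. ,  3.5,  3. ,  0. ])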
The function :obj:`signal.correlate` can also take arbitrary :math:`N`
-dimensional arrays as input and return the :math:`N` -dimensional
correlation of the two arrays on output.
When :math:`N=2,` :obj:`signal.correlate` and/or
:obj:`signal.convolve` can be used to construct arbitrary image
filters to perform actions such as blurring, enhancing, and
edge-detection for an image.
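As an illustrative sketch, a simple blur can be produced by convolving an image with a small averaging kernel (the array below is just a stand-in for real image data):
>>> import numpy as np
>>> from scipy import signal
>>> image = np.arange(25, dtype=float).reshape(5, 5)   # stand-in "image"
>>> kernel = np.ones((3, 3)) / 9.0                     # 3x3 averaging (blur) kernel
>>> blurred = signal.convolve(image, kernel, mode='same')
>>> blurred.shape                                      # same size as the input
(5, 5)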
Convolution is mainly used for filtering when one of the signals is
much smaller than the other ( :math:`K\gg M` ), otherwise linear
filtering is more easily accomplished in the frequency domain (see
Fourier Transforms).
Difference-equation filtering
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A general class of linear one-dimensional filters (one that includes
convolution filters) consists of the filters described by the difference equation
.. math::
:nowrap:
\[ \sum_{k=0}^{N}a_{k}y\left[n-k\right]=\sum_{k=0}^{M}b_{k}x\left[n-k\right]\]
where :math:`x\left[n\right]` is the input sequence and
:math:`y\left[n\right]` is the output sequence. If we assume initial
rest so that :math:`y\left[n\right]=0` for :math:`n<0` , then this
kind of filter can be implemented using convolution. However, the
convolution filter sequence :math:`h\left[n\right]` could be infinite
if :math:`a_{k}\neq0` for :math:`k\geq1.` In addition, this general
class of linear filter allows initial conditions to be placed on
:math:`y\left[n\right]` for :math:`n<0` resulting in a filter that
cannot be expressed using convolution.
The difference equation filter can be thought of as finding :math:`y\left[n\right]` recursively in terms of its previous values
.. math::
:nowrap:
\[ a_{0}y\left[n\right]=-a_{1}y\left[n-1\right]-\cdots-a_{N}y\left[n-N\right]+\cdots+b_{0}x\left[n\right]+\cdots+b_{M}x\left[n-M\right].\]
Often :math:`a_{0}=1` is chosen for normalization. The implementation
in SciPy of this general difference equation filter is a little more
complicated than the previous equation would imply. It is
implemented so that only one signal needs to be delayed. The actual
implementation equations are (assuming :math:`a_{0}=1` ).
.. math::
:nowrap:
\begin{eqnarray*} y\left[n\right] & = & b_{0}x\left[n\right]+z_{0}\left[n-1\right]\\ z_{0}\left[n\right] & = & b_{1}x\left[n\right]+z_{1}\left[n-1\right]-a_{1}y\left[n\right]\\ z_{1}\left[n\right] & = & b_{2}x\left[n\right]+z_{2}\left[n-1\right]-a_{2}y\left[n\right]\\ \vdots & \vdots & \vdots\\ z_{K-2}\left[n\right] & = & b_{K-1}x\left[n\right]+z_{K-1}\left[n-1\right]-a_{K-1}y\left[n\right]\\ z_{K-1}\left[n\right] & = & b_{K}x\left[n\right]-a_{K}y\left[n\right],\end{eqnarray*}
where :math:`K=\max\left(N,M\right).` Note that :math:`b_{K}=0` if
:math:`K>M` and :math:`a_{K}=0` if :math:`K>N.` In this way, the
output at time :math:`n` depends only on the input at time :math:`n`
and the value of :math:`z_{0}` at the previous time. This can always
be calculated as long as the :math:`K` values
:math:`z_{0}\left[n-1\right]\ldots z_{K-1}\left[n-1\right]` are
computed and stored at each time step.
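These update equations can be written out directly in plain Python. The following is only an illustrative sketch with a hypothetical helper name (``signal.lfilter``, described next, is the actual implementation):
>>> def direct_filter(b, a, x):                  # assumes a[0] == 1
...     K = max(len(a), len(b)) - 1
...     b = list(b) + [0.0] * (K + 1 - len(b))   # pad so b[K] and a[K] exist
...     a = list(a) + [0.0] * (K + 1 - len(a))
...     z = [0.0] * K                            # the delayed intermediate values
...     y = []
...     for xn in x:
...         yn = b[0] * xn + (z[0] if K > 0 else 0.0)
...         for m in range(K - 1):
...             z[m] = b[m + 1] * xn + z[m + 1] - a[m + 1] * yn
...         if K > 0:
...             z[K - 1] = b[K] * xn - a[K] * yn
...         y.append(yn)
...     return y
>>> direct_filter([1.0, 1.0], [1.0, -0.5], [1.0, 0.0, 0.0, 0.0])
[1.0, 1.5, 0.75, 0.375]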
The difference-equation filter is called using the command
:obj:`signal.lfilter` in SciPy. This command takes as inputs the
vector :math:`b,` the vector :math:`a,` a signal :math:`x` and
returns the vector :math:`y` (the same length as :math:`x` ) computed
using the equation given above. If :math:`x` is :math:`N`
-dimensional, then the filter is computed along the axis provided. If
desired, initial conditions giving the values of
:math:`z_{0}\left[-1\right]` to :math:`z_{K-1}\left[-1\right]` can be
provided; otherwise it is assumed that they are all zero. If initial
conditions are provided, then the final conditions on the intermediate
variables are also returned. These could be used, for example, to
restart the calculation in the same state.
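For example, a brief sketch with arbitrarily chosen coefficients (the result matches the hand computation above):
>>> from scipy import signal
>>> b = [1.0, 1.0]                               # the b (numerator) coefficients
>>> a = [1.0, -0.5]                              # the a coefficients, a[0] = 1
>>> signal.lfilter(b, a, [1.0, 0.0, 0.0, 0.0])   # the filter's impulse response
array([ 1.   ,  1.5  ,  0.75 ,  0.375])
>>> y, zf = signal.lfilter(b, a, [1.0, 0.0], zi=[0.0])   # zi given, zf returned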
Sometimes it is more convenient to express the initial conditions in
terms of the signals :math:`x\left[n\right]` and
:math:`y\left[n\right].` In other words, perhaps you have the values
of :math:`x\left[-M\right]` to :math:`x\left[-1\right]` and the values
of :math:`y\left[-N\right]` to :math:`y\left[-1\right]` and would like
to determine what values of :math:`z_{m}\left[-1\right]` should be
delivered as initial conditions to the difference-equation filter. It
is not difficult to show that for :math:`0\leq m<K,`
.. math::
:nowrap:
\[ z_{m}\left[n\right]=\sum_{p=0}^{K-m-1}\left(b_{m+p+1}x\left[n-p\right]-a_{m+p+1}y\left[n-p\right]\right).\]
Using this formula we can find the initial condition vector :math:`z_{0}\left[-1\right]` to :math:`z_{K-1}\left[-1\right]` given initial conditions on :math:`y` (and :math:`x` ). The command :obj:`signal.lfiltic` performs this function.
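For example, continuing the filter sketch from above (the past values here are hypothetical):
>>> from scipy import signal
>>> b, a = [1.0, 1.0], [1.0, -0.5]
>>> signal.lfiltic(b, a, y=[1.0], x=[0.5])   # z_0[-1] = b_1*x[-1] - a_1*y[-1]
array([ 1.])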
Other filters
^^^^^^^^^^^^^
The signal processing package provides many more filters as well.
Median Filter
"""""""""""""
A median filter is commonly applied when noise is markedly
non-Gaussian or when it is desired to preserve edges. The median filter
works by sorting all of the array pixel values in a rectangular region
surrounding the point of interest. The sample median of this list of
neighborhood pixel values is used as the value for the output array.
The sample median is the middle array value in a sorted list of
neighborhood values. If there are an even number of elements in the
neighborhood, then the average of the middle two values is used as the
median. A general purpose median filter that works on N-dimensional
arrays is :obj:`signal.medfilt` . A specialized version that works
only for two-dimensional arrays is available as
:obj:`signal.medfilt2d` .
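For example, a minimal sketch in which the spike stands in for impulsive noise:
>>> import numpy as np
>>> from scipy import signal
>>> x = np.array([1., 1., 1., 9., 1., 1., 1.])   # a single impulsive outlier
>>> signal.medfilt(x, kernel_size=3)             # the outlier is removed
array([ 1.,  1.,  1.,  1.,  1.,  1.,  1.])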
Order Filter
""""""""""""
A median filter is a specific example of a more general class of
filters called order filters. To compute the output at a particular
pixel, all order filters use the array values in a region surrounding
that pixel. These array values are sorted and then one of them is
selected as the output value. For the median filter, the sample median
of the list of array values is used as the output. A general order
filter allows the user to select which of the sorted values will be
used as the output. So, for example, one could choose to pick the
maximum in the list or the minimum. The order filter takes an
additional argument besides the input array and the region mask that
specifies which of the elements in the sorted list of neighbor array
values should be used as the output. The command to perform an order
filter is :obj:`signal.order_filter` .
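For example, a short sketch that picks the maximum (rank 2 of the 3 sorted neighbors), reusing the spike data from above:
>>> import numpy as np
>>> from scipy import signal
>>> x = np.array([1., 1., 1., 9., 1., 1., 1.])
>>> signal.order_filter(x, np.ones(3), 2)   # rank 2 of 3 = the window maximum
array([ 1.,  1.,  9.,  9.,  9.,  1.,  1.])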
Wiener filter
"""""""""""""
The Wiener filter is a simple deblurring filter for denoising images.
This is not the Wiener filter commonly described in image
reconstruction problems; instead, it is a simple, local-mean filter.
Let :math:`x` be the input signal; then the output is
.. math::
:nowrap:
\[ y=\left\{ \begin{array}{cc} \frac{\sigma^{2}}{\sigma_{x}^{2}}m_{x}+\left(1-\frac{\sigma^{2}}{\sigma_{x}^{2}}\right)x & \sigma_{x}^{2}\geq\sigma^{2},\\ m_{x} & \sigma_{x}^{2}<\sigma^{2}.\end{array}\right.\]
where :math:`m_{x}` is the local estimate of the mean and
:math:`\sigma_{x}^{2}` is the local estimate of the variance. The
window for these estimates is an optional input parameter (default is
:math:`3\times3` ). The parameter :math:`\sigma^{2}` is a threshold
noise parameter. If :math:`\sigma` is not given then it is estimated
as the average of the local variances.
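For example, an illustrative sketch on synthetic noisy data:
>>> import numpy as np
>>> from scipy import signal
>>> np.random.seed(1234)                            # reproducible noise
>>> noisy = np.ones((5, 5)) + 0.1 * np.random.randn(5, 5)
>>> smoothed = signal.wiener(noisy, mysize=(3, 3))  # 3x3 estimation window
>>> smoothed.shape
(5, 5)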
Hilbert filter
""""""""""""""
The Hilbert transform constructs the complex-valued analytic signal
from a real signal. For example, if :math:`x=\cos\omega n` then
:math:`y=\textrm{hilbert}\left(x\right)` would return (except near the
edges) :math:`y=\exp\left(j\omega n\right).` In the frequency domain,
the Hilbert transform performs
.. math::
:nowrap:
\[ Y=X\cdot H\]
where :math:`H` is :math:`2` for positive frequencies, :math:`0` for negative frequencies, and :math:`1` for zero frequencies.
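For example, a brief sketch (an integer number of cycles over the window is used so that the FFT-based transform is essentially exact):
>>> import numpy as np
>>> from scipy import signal
>>> n = np.arange(64)
>>> x = np.cos(np.pi * n / 4)     # 8 full cycles over the window
>>> y = signal.hilbert(x)         # analytic signal, approximately exp(j*pi*n/4)
>>> np.allclose(np.abs(y), 1.0)   # the analytic signal has unit magnitude
True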
.. XXX: TODO
..
.. Detrend
.. """""""
..
.. Filter design
.. -------------
..
..
.. Finite-impulse response design
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Inifinite-impulse response design
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Analog filter frequency response
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Digital filter frequency response
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Linear Time-Invariant Systems
.. -----------------------------
..
..
.. LTI Object
.. ^^^^^^^^^^
..
..
.. Continuous-Time Simulation
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Step response
.. ^^^^^^^^^^^^^
..
..
.. Impulse response
.. ^^^^^^^^^^^^^^^^
..
..
.. Input/Output
.. ============
..
..
.. Binary
.. ------
..
..
.. Arbitrary binary input and output (fopen)
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Read and write Matlab .mat files
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Saving workspace
.. ^^^^^^^^^^^^^^^^
..
..
.. Text-file
.. ---------
..
..
.. Read text-files (read_array)
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Write a text-file (write_array)
.. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..
..
.. Fourier Transforms
.. ==================
..
..
.. One-dimensional
.. ---------------
..
..
.. Two-dimensional
.. ---------------
..
..
.. N-dimensional
.. -------------
..
..
.. Shifting
.. --------
..
..
.. Sample frequencies
.. ------------------
..
..
.. Hilbert transform
.. -----------------
..
..
.. Tilbert transform
.. -----------------

View file

@ -0,0 +1,23 @@
Special functions (:mod:`scipy.special`)
========================================
.. sectionauthor:: Travis E. Oliphant
.. currentmodule:: scipy.special
The main feature of the :mod:`scipy.special` package is the definition of
numerous special functions of mathematical physics. Available
functions include airy, elliptic, bessel, gamma, beta, hypergeometric,
parabolic cylinder, mathieu, spheroidal wave, struve, and
kelvin. There are also some low-level stats functions that are not
intended for general use as an easier interface to these functions is
provided by the ``stats`` module. Most of these functions can take
array arguments and return array results following the same
broadcasting rules as other math functions in Numerical Python. Many
of these functions also accept complex numbers as input. For a
complete list of the available functions with a one-line description
type ``help(special)``. Each function also has its own
documentation accessible using help. If you don't see a function you
need, consider writing it and contributing it to the library. You can
write the function in either C, Fortran, or Python. Look in the source
code of the library for examples of each of these kinds of functions.
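For example, a quick sketch of typical usage (array arguments broadcast like other NumPy math functions):
>>> import numpy as np
>>> from scipy import special
>>> special.gamma(5)                       # gamma(n) = (n-1)! for integer n
24.0
>>> special.jv(0, np.array([0.0, 1.0]))    # Bessel function J_0 at two points
array([ 1.        ,  0.76519769])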

View file

@ -0,0 +1,579 @@
Statistics
==========
.. sectionauthor:: Travis E. Oliphant
Introduction
------------
SciPy has a tremendous number of basic statistics routines, with more
easily added by the end user (if you create one, please contribute it).
All of the statistics functions are located in the sub-package
:mod:`scipy.stats` and a fairly complete listing of these functions
can be obtained using ``info(stats)``.
Random Variables
^^^^^^^^^^^^^^^^
There are two general distribution classes that have been implemented
for encapsulating
:ref:`continuous random variables <continuous-random-variables>`
and
:ref:`discrete random variables <discrete-random-variables>`
. Over 80 continuous random variables and 10 discrete random
variables have been implemented using these classes. The list of the
random variables available is in the docstring for the stats
sub-package.
Note: The following is work in progress
Distributions
-------------
First some imports
>>> import numpy as np
>>> from scipy import stats
>>> import warnings
>>> warnings.simplefilter('ignore', DeprecationWarning)
We can obtain the list of available distributions through introspection:
>>> dist_continu = [d for d in dir(stats) if
... isinstance(getattr(stats,d), stats.rv_continuous)]
>>> dist_discrete = [d for d in dir(stats) if
... isinstance(getattr(stats,d), stats.rv_discrete)]
>>> print 'number of continuous distributions:', len(dist_continu)
number of continuous distributions: 84
>>> print 'number of discrete distributions: ', len(dist_discrete)
number of discrete distributions: 12
Distributions can be used in one of two ways, either by passing all distribution
parameters to each method call or by freezing the parameters for the instance
of the distribution. As an example, we can get the median of the distribution by using
the percent point function, ppf, which is the inverse of the cdf:
>>> print stats.nct.ppf(0.5, 10, 2.5)
2.56880722561
>>> my_nct = stats.nct(10, 2.5)
>>> print my_nct.ppf(0.5)
2.56880722561
``help(stats.nct)`` prints the complete docstring of the distribution. Instead
we can print just some basic information::
>>> print stats.nct.extradoc #contains the distribution specific docs
Non-central Student T distribution
df**(df/2) * gamma(df+1)
nct.pdf(x,df,nc) = --------------------------------------------------
2**df*exp(nc**2/2)*(df+x**2)**(df/2) * gamma(df/2)
for df > 0, nc > 0.
>>> print 'number of arguments: %d, shape parameters: %s'% (stats.nct.numargs,
... stats.nct.shapes)
number of arguments: 2, shape parameters: df,nc
>>> print 'bounds of distribution lower: %s, upper: %s' % (stats.nct.a,
... stats.nct.b)
bounds of distribution lower: -1.#INF, upper: 1.#INF
We can list all methods and properties of the distribution with
``dir(stats.nct)``. Some of the methods are private, although they are
not named as such (there is no leading underscore); for example,
``veccdf``, ``xa``, and ``xb`` are for internal calculation. The main
methods we can see when we list the methods of the frozen distribution:
>>> print dir(my_nct) #reformatted
['__class__', '__delattr__', '__dict__', '__doc__', '__getattribute__',
'__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__',
'__repr__', '__setattr__', '__str__', '__weakref__', 'args', 'cdf', 'dist',
'entropy', 'isf', 'kwds', 'moment', 'pdf', 'pmf', 'ppf', 'rvs', 'sf', 'stats']
The main public methods are:
* rvs: Random Variates
* pdf: Probability Density Function
* cdf: Cumulative Distribution Function
* sf: Survival Function (1-CDF)
* ppf: Percent Point Function (Inverse of CDF)
* isf: Inverse Survival Function (Inverse of SF)
* stats: Return mean, variance, (Fisher's) skew, or (Fisher's) kurtosis
* moment: non-central moments of the distribution
The main additional methods of the non-frozen distribution are related to the estimation
of distribution parameters:
* fit: maximum likelihood estimation of distribution parameters, including location
and scale
* fit_loc_scale: estimation of location and scale when shape parameters are given
* nnlf: negative log likelihood function
* expect: Calculate the expectation of a function against the pdf or pmf
All continuous distributions take `loc` and `scale` as keyword
parameters to adjust the location and scale of the distribution,
e.g. for the standard normal distribution location is the mean and
scale is the standard deviation. The standardized distribution for a
random variable `x` is obtained through ``(x - loc) / scale``.
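For example, a quick sketch using the normal distribution:
>>> import numpy as np
>>> from scipy import stats
>>> np.allclose(stats.norm.cdf(1.0, loc=2.0, scale=0.5),
...             stats.norm.cdf((1.0 - 2.0) / 0.5))   # standardized argument
True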
Discrete distributions have most of the same basic methods; however,
pdf is replaced by the probability mass function `pmf`, no estimation
methods, such as fit, are available, and scale is not a valid
keyword parameter. The location parameter, keyword `loc`, can be used
to shift the distribution.
The basic methods, pdf, cdf, sf, ppf, and isf are vectorized with
``np.vectorize``, and the usual numpy broadcasting is applied. For
example, we can calculate the critical values for the upper tail of
the t distribution for different probabilities and degrees of freedom.
>>> stats.t.isf([0.1, 0.05, 0.01], [[10], [11]])
array([[ 1.37218364, 1.81246112, 2.76376946],
[ 1.36343032, 1.79588482, 2.71807918]])
Here, the first row contains the critical values for 10 degrees of freedom and the second row
is for 11 d.o.f., i.e., this is the same as
>>> stats.t.isf([0.1, 0.05, 0.01], 10)
array([ 1.37218364, 1.81246112, 2.76376946])
>>> stats.t.isf([0.1, 0.05, 0.01], 11)
array([ 1.36343032, 1.79588482, 2.71807918])
If both probabilities and degrees of freedom have the same array shape, then
element-wise matching is used. As an example, we can obtain the 10% tail for 10 d.o.f., the 5% tail
for 11 d.o.f. and the 1% tail for 12 d.o.f. by
>>> stats.t.isf([0.1, 0.05, 0.01], [10, 11, 12])
array([ 1.37218364, 1.79588482, 2.68099799])
Performance and Remaining Issues
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The performance of the individual methods, in terms of speed, varies
widely by distribution and method. The results of a method are
obtained in one of two ways, either by explicit calculation or by a
generic algorithm that is independent of the specific distribution.
Explicit calculation requires that the method be directly specified
for the given distribution, either through analytic formulas or
through special functions in scipy.special or numpy.random for
`rvs`. These are usually relatively fast calculations. The generic
methods are used if the distribution does not specify any explicit
calculation. To define a distribution, only one of pdf or cdf is
necessary; all other methods can be derived using numerical integration
and root finding. These indirect methods can be very slow. As an
example, ``rgh = stats.gausshyper.rvs(0.5, 2, 2, 2, size=100)`` creates
random variables in a very indirect way and takes about 19 seconds
for 100 random variables on my computer, while one million random
variables from the standard normal or from the t distribution take
just above one second.
The distributions in scipy.stats have recently been corrected and improved
and gained a considerable test suite; however, a few issues remain:
* skew and kurtosis, 3rd and 4th moments and entropy are not thoroughly
tested and some coarse testing indicates that there are still some
incorrect results left.
* the distributions have been tested over some range of parameters,
however in some corner ranges, a few incorrect results may remain.
* the maximum likelihood estimation in `fit` does not work with
default starting parameters for all distributions and the user
needs to supply good starting parameters. Also, for some
distribution using a maximum likelihood estimator might
inherently not be the best choice.
The next example shows how to build our own discrete distribution,
and more examples for the usage of the distributions are shown below
together with the statistical tests.
Example: discrete distribution rv_discrete
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the following we use stats.rv_discrete to generate a discrete distribution
that has the probabilities of the truncated normal for the intervals
centered around the integers.
>>> npoints = 20 # number of integer support points of the distribution minus 1
>>> npointsh = npoints / 2
>>> npointsf = float(npoints)
>>> nbound = 4 # bounds for the truncated normal
>>> normbound = (1+1/npointsf) * nbound # actual bounds of truncated normal
>>> grid = np.arange(-npointsh, npointsh+2, 1) # integer grid
>>> gridlimitsnorm = (grid-0.5) / npointsh * nbound # bin limits for the truncnorm
>>> gridlimits = grid - 0.5
>>> grid = grid[:-1]
>>> probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound))
>>> gridint = grid
>>> normdiscrete = stats.rv_discrete(values = (gridint,
... np.round(probs, decimals=7)), name='normdiscrete')
From the docstring of rv_discrete:
"You can construct an aribtrary discrete rv where P{X=xk} = pk by
passing to the rv_discrete initialization method (through the values=
keyword) a tuple of sequences (xk, pk) which describes only those
values of X (xk) that occur with nonzero probability (pk)."
There are some requirements for this distribution to work. The
keyword `name` is required. The support points of the distribution
xk have to be integers. Also, I needed to limit the number of
decimals. If the last two requirements are not satisfied an
exception may be raised or the resulting numbers may be incorrect.
After defining the distribution, we obtain access to all methods of
discrete distributions.
>>> print 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f'% \
... normdiscrete.stats(moments = 'mvsk')
mean = -0.0000, variance = 6.3302, skew = 0.0000, kurtosis = -0.0076
>>> nd_std = np.sqrt(normdiscrete.stats(moments = 'v'))
**Generate a random sample and compare observed frequencies with probabilities**
>>> n_sample = 500
>>> np.random.seed(87655678) # fix the seed for replicability
>>> rvs = normdiscrete.rvs(size=n_sample)
>>> rvsnd = rvs
>>> f, l = np.histogram(rvs, bins=gridlimits)
>>> sfreq = np.vstack([gridint, f, probs*n_sample]).T
>>> print sfreq
[[ -1.00000000e+01 0.00000000e+00 2.95019349e-02]
[ -9.00000000e+00 0.00000000e+00 1.32294142e-01]
[ -8.00000000e+00 0.00000000e+00 5.06497902e-01]
[ -7.00000000e+00 2.00000000e+00 1.65568919e+00]
[ -6.00000000e+00 1.00000000e+00 4.62125309e+00]
[ -5.00000000e+00 9.00000000e+00 1.10137298e+01]
[ -4.00000000e+00 2.60000000e+01 2.24137683e+01]
[ -3.00000000e+00 3.70000000e+01 3.89503370e+01]
[ -2.00000000e+00 5.10000000e+01 5.78004747e+01]
[ -1.00000000e+00 7.10000000e+01 7.32455414e+01]
[ 0.00000000e+00 7.40000000e+01 7.92618251e+01]
[ 1.00000000e+00 8.90000000e+01 7.32455414e+01]
[ 2.00000000e+00 5.50000000e+01 5.78004747e+01]
[ 3.00000000e+00 5.00000000e+01 3.89503370e+01]
[ 4.00000000e+00 1.70000000e+01 2.24137683e+01]
[ 5.00000000e+00 1.10000000e+01 1.10137298e+01]
[ 6.00000000e+00 4.00000000e+00 4.62125309e+00]
[ 7.00000000e+00 3.00000000e+00 1.65568919e+00]
[ 8.00000000e+00 0.00000000e+00 5.06497902e-01]
[ 9.00000000e+00 0.00000000e+00 1.32294142e-01]
[ 1.00000000e+01 0.00000000e+00 2.95019349e-02]]
.. plot:: examples/normdiscr_plot1.py
:align: center
:include-source: 0
.. plot:: examples/normdiscr_plot2.py
:align: center
:include-source: 0
Next, we can test whether our sample was generated by our normdiscrete
distribution. This also verifies whether the random numbers were generated
correctly.
The chisquare test requires that there are a minimum number of observations
in each bin. We combine the tail bins into larger bins so that they contain
enough observations.
>>> f2 = np.hstack([f[:5].sum(), f[5:-5], f[-5:].sum()])
>>> p2 = np.hstack([probs[:5].sum(), probs[5:-5], probs[-5:].sum()])
>>> ch2, pval = stats.chisquare(f2, p2*n_sample)
>>> print 'chisquare for normdiscrete: chi2 = %6.3f pvalue = %6.4f' % (ch2, pval)
chisquare for normdiscrete: chi2 = 12.466 pvalue = 0.4090
The pvalue in this case is high, so we can be quite confident that
our random sample was actually generated by the distribution.
Analysing One Sample
--------------------
First, we create some random variables. We set a seed so that in each run
we get identical results to look at. As an example we take a sample from
the Student t distribution:
>>> np.random.seed(282629734)
>>> x = stats.t.rvs(10, size=1000)
Here, we set the required shape parameter of the t distribution, which
in statistics corresponds to the degrees of freedom, to 10. Using size=1000 means
that our sample consists of 1000 independently drawn (pseudo) random numbers.
Since we did not specify the keyword arguments `loc` and `scale`, those are
set to their default values zero and one.
Descriptive Statistics
^^^^^^^^^^^^^^^^^^^^^^
`x` is a numpy array, and we have direct access to all array methods, e.g.
>>> print x.max(), x.min() # equivalent to np.max(x), np.min(x)
5.26327732981 -3.78975572422
>>> print x.mean(), x.var() # equivalent to np.mean(x), np.var(x)
0.0140610663985 1.28899386208
How do some sample properties compare to their theoretical counterparts?
>>> m, v, s, k = stats.t.stats(10, moments='mvsk')
>>> n, (smin, smax), sm, sv, ss, sk = stats.describe(x)
>>> print 'distribution:',
distribution:
>>> sstr = 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f'
>>> print sstr %(m, v, s ,k)
mean = 0.0000, variance = 1.2500, skew = 0.0000, kurtosis = 1.0000
>>> print 'sample: ',
sample:
>>> print sstr %(sm, sv, ss, sk)
mean = 0.0141, variance = 1.2903, skew = 0.2165, kurtosis = 1.0556
Note: stats.describe uses the unbiased estimator for the variance, while
np.var is the biased estimator.
For our sample, the sample statistics differ by a small amount from
their theoretical counterparts.
T-test and KS-test
^^^^^^^^^^^^^^^^^^
We can use the t-test to test whether the mean of our sample differs
in a statistically significant way from the theoretical expectation.
>>> print 't-statistic = %6.3f pvalue = %6.4f' % stats.ttest_1samp(x, m)
t-statistic = 0.391 pvalue = 0.6955
The pvalue is 0.7; this means that, with an alpha error of, for
example, 10%, we cannot reject the hypothesis that the sample mean
is equal to zero, the expectation of the standard t-distribution.
As an exercise, we can calculate our ttest also directly without
using the provided function, which should give us the same answer,
and so it does:
>>> tt = (sm-m)/np.sqrt(sv/float(n)) # t-statistic for mean
>>> pval = stats.t.sf(np.abs(tt), n-1)*2 # two-sided pvalue = Prob(abs(t)>tt)
>>> print 't-statistic = %6.3f pvalue = %6.4f' % (tt, pval)
t-statistic = 0.391 pvalue = 0.6955
The Kolmogorov-Smirnov test can be used to test the hypothesis that
the sample comes from the standard t-distribution
>>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x, 't', (10,))
KS-statistic D = 0.016 pvalue = 0.9606
Again the p-value is high enough that we cannot reject the
hypothesis that the random sample really is distributed according to the
t-distribution. In real applications, we don't know what the
underlying distribution is. If we perform the Kolmogorov-Smirnov
test of our sample against the standard normal distribution, then we
also cannot reject the hypothesis that our sample was generated by the
normal distribution given that in this example the p-value is almost 40%.
>>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x,'norm')
KS-statistic D = 0.028 pvalue = 0.3949
However, the standard normal distribution has a variance of 1, while our
sample has a variance of 1.29. If we standardize our sample and test it
against the normal distribution, then the p-value is again large enough
that we cannot reject the hypothesis that the sample came from the
normal distribution.
>>> d, pval = stats.kstest((x-x.mean())/x.std(), 'norm')
>>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % (d, pval)
KS-statistic D = 0.032 pvalue = 0.2402
Note: The Kolmogorov-Smirnov test assumes that we test against a
distribution with given parameters. Since in the last case we
estimated mean and variance, this assumption is violated, and the
distribution of the test statistic, on which the p-value is based, is
not correct.
Tails of the distribution
^^^^^^^^^^^^^^^^^^^^^^^^^
Finally, we can check the upper tail of the distribution. We can use
the percent point function ppf, which is the inverse of the cdf
function, to obtain the critical values, or, more directly, we can use
the inverse of the survival function
>>> crit01, crit05, crit10 = stats.t.ppf([1-0.01, 1-0.05, 1-0.10], 10)
>>> print 'critical values from ppf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% (crit01, crit05, crit10)
critical values from ppf at 1%, 5% and 10% 2.7638 1.8125 1.3722
>>> print 'critical values from isf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% tuple(stats.t.isf([0.01,0.05,0.10],10))
critical values from isf at 1%, 5% and 10% 2.7638 1.8125 1.3722
>>> freq01 = np.sum(x>crit01) / float(n) * 100
>>> freq05 = np.sum(x>crit05) / float(n) * 100
>>> freq10 = np.sum(x>crit10) / float(n) * 100
>>> print 'sample %%-frequency at 1%%, 5%% and 10%% tail %8.4f %8.4f %8.4f'% (freq01, freq05, freq10)
sample %-frequency at 1%, 5% and 10% tail 1.4000 5.8000 10.5000
In all three cases, our sample has more weight in the top tail than the
underlying distribution.
We can briefly check a larger sample to see if we get a closer match. In this
case the empirical frequency is quite close to the theoretical probability,
but if we repeat this several times the fluctuations are still pretty large.
>>> freq05l = np.sum(stats.t.rvs(10, size=10000) > crit05) / 10000.0 * 100
>>> print 'larger sample %%-frequency at 5%% tail %8.4f'% freq05l
larger sample %-frequency at 5% tail 4.8000
We can also compare it with the tail of the normal distribution, which
has less weight in the tails:
>>> print 'tail prob. of normal at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% \
... tuple(stats.norm.sf([crit01, crit05, crit10])*100)
tail prob. of normal at 1%, 5% and 10% 0.2857 3.4957 8.5003
The chisquare test can be used to test whether, for a finite number of bins,
the observed frequencies differ significantly from the probabilities of the
hypothesized distribution.
>>> quantiles = [0.0, 0.01, 0.05, 0.1, 1-0.10, 1-0.05, 1-0.01, 1.0]
>>> crit = stats.t.ppf(quantiles, 10)
>>> print crit
[ -Inf -2.76376946 -1.81246112 -1.37218364 1.37218364 1.81246112
2.76376946 Inf]
>>> n_sample = x.size
>>> freqcount = np.histogram(x, bins=crit)[0]
>>> tprob = np.diff(quantiles)
>>> nprob = np.diff(stats.norm.cdf(crit))
>>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample)
>>> nch, npval = stats.chisquare(freqcount, nprob*n_sample)
>>> print 'chisquare for t: chi2 = %6.3f pvalue = %6.4f' % (tch, tpval)
chisquare for t: chi2 = 2.300 pvalue = 0.8901
>>> print 'chisquare for normal: chi2 = %6.3f pvalue = %6.4f' % (nch, npval)
chisquare for normal: chi2 = 64.605 pvalue = 0.0000
We see that the standard normal distribution is clearly rejected while the
standard t-distribution cannot be rejected. Since the variance of our sample
differs from both standard distribution, we can again redo the test taking
the estimate for scale and location into account.
The fit method of the distributions can be used to estimate the parameters
of the distribution, and the test is repeated using probabilities of the
estimated distribution.
>>> tdof, tloc, tscale = stats.t.fit(x)
>>> nloc, nscale = stats.norm.fit(x)
>>> tprob = np.diff(stats.t.cdf(crit, tdof, loc=tloc, scale=tscale))
>>> nprob = np.diff(stats.norm.cdf(crit, loc=nloc, scale=nscale))
>>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample)
>>> nch, npval = stats.chisquare(freqcount, nprob*n_sample)
>>> print 'chisquare for t: chi2 = %6.3f pvalue = %6.4f' % (tch, tpval)
chisquare for t: chi2 = 1.577 pvalue = 0.9542
>>> print 'chisquare for normal: chi2 = %6.3f pvalue = %6.4f' % (nch, npval)
chisquare for normal: chi2 = 11.084 pvalue = 0.0858
Taking account of the estimated parameters, we can still reject the
hypothesis that our sample came from a normal distribution (at the 5% level),
but again, with a p-value of 0.95, we cannot reject the t distribution.
Special tests for normal distributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Since the normal distribution is the most common distribution in statistics,
there are several additional functions available to test whether a sample
could have been drawn from a normal distribution.
First we can test if skew and kurtosis of our sample differ significantly from
those of a normal distribution:
>>> print 'normal skewtest teststat = %6.3f pvalue = %6.4f' % stats.skewtest(x)
normal skewtest teststat = 2.785 pvalue = 0.0054
>>> print 'normal kurtosistest teststat = %6.3f pvalue = %6.4f' % stats.kurtosistest(x)
normal kurtosistest teststat = 4.757 pvalue = 0.0000
These two tests are combined in the normality test
>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(x)
normaltest teststat = 30.379 pvalue = 0.0000
In all three tests the p-values are very low and we can reject the hypothesis
that our sample has the skew and kurtosis of the normal distribution.
Since skew and kurtosis of our sample are based on central moments, we get
exactly the same results if we test the standardized sample:
>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % \
... stats.normaltest((x-x.mean())/x.std())
normaltest teststat = 30.379 pvalue = 0.0000
Because normality is rejected so strongly, we can check whether the
normaltest gives reasonable results for other cases:
>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(stats.t.rvs(10, size=100))
normaltest teststat = 4.698 pvalue = 0.0955
>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(stats.norm.rvs(size=1000))
normaltest teststat = 0.613 pvalue = 0.7361
When testing for normality of a small sample of t-distributed observations
and a large sample of normally distributed observations, in neither case
can we reject the null hypothesis that the sample comes from a normal
distribution. In the first case this is because the test is not powerful
enough to distinguish a t-distributed and a normally distributed random
variable in a small sample.
Comparing two samples
---------------------
In the following, we are given two samples, which can come either from the
same or from different distributions, and we want to test whether these
samples have the same statistical properties.
Comparing means
^^^^^^^^^^^^^^^
Test with samples with identical means:
>>> rvs1 = stats.norm.rvs(loc=5, scale=10, size=500)
>>> rvs2 = stats.norm.rvs(loc=5, scale=10, size=500)
>>> stats.ttest_ind(rvs1, rvs2)
(-0.54890361750888583, 0.5831943748663857)
Test with samples with different means:
>>> rvs3 = stats.norm.rvs(loc=8, scale=10, size=500)
>>> stats.ttest_ind(rvs1, rvs3)
(-4.5334142901750321, 6.507128186505895e-006)
Kolmogorov-Smirnov test for two samples ks_2samp
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For the example where both samples are drawn from the same distribution,
we cannot reject the null hypothesis since the pvalue is high
>>> stats.ks_2samp(rvs1, rvs2)
(0.025999999999999995, 0.99541195173064878)
In the second example, with different location, i.e. means, we can
reject the null hypothesis since the pvalue is below 1%
>>> stats.ks_2samp(rvs1, rvs3)
(0.11399999999999999, 0.0027132103661283141)

File diff suppressed because it is too large

View file

@ -0,0 +1,690 @@
.. _discrete-random-variables:
==================================
Discrete Statistical Distributions
==================================
Discrete random variables take on only a countable number of values.
The commonly used distributions are included in SciPy and described in
this document. Each discrete distribution can take one extra integer
parameter: :math:`L.` The relationship between the general distribution
:math:`p` and the standard distribution :math:`p_{0}` is
.. math::
:nowrap:
\[ p\left(x\right)=p_{0}\left(x-L\right)\]
which allows for shifting of the input. When a distribution generator
is initialized, the discrete distribution can either specify the
beginning and ending (integer) values :math:`a` and :math:`b` which must be such that
.. math::
:nowrap:
\[ p_{0}\left(x\right)=0\quad x<a\textrm{ or }x>b\]
in which case, it is assumed that the pdf function is specified on the
integers :math:`a+mk\leq b` where :math:`k` is a non-negative integer ( :math:`0,1,2,\ldots` ) and :math:`m` is a positive integer multiplier. Alternatively, the two lists :math:`x_{k}` and :math:`p\left(x_{k}\right)` can be provided directly in which case a dictionary is set up
internally to evaluate probabilities and generate random variates.
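As a brief sketch of the second approach (the values here are arbitrary):
>>> from scipy import stats
>>> xk = [0, 1, 2]                 # support points
>>> pk = [0.25, 0.5, 0.25]         # their probabilities (must sum to 1)
>>> rv = stats.rv_discrete(name='sketch', values=(xk, pk))
>>> float(rv.pmf(1)), float(rv.cdf(1))
(0.5, 0.75)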
Probability Mass Function (PMF)
-------------------------------
The probability mass function of a random variable X is defined as the
probability that the random variable takes on a particular value.
.. math::
:nowrap:
\[ p\left(x_{k}\right)=P\left[X=x_{k}\right]\]
This is also sometimes called the probability density function,
although technically
.. math::
:nowrap:
\[ f\left(x\right)=\sum_{k}p\left(x_{k}\right)\delta\left(x-x_{k}\right)\]
is the probability density function for a discrete distribution [#]_ .
.. [#]
Note that we will be using :math:`p` to represent the probability
mass function and a parameter (a probability). The usage should be
obvious from context.
Cumulative Distribution Function (CDF)
--------------------------------------
The cumulative distribution function is
.. math::
:nowrap:
\[ F\left(x\right)=P\left[X\leq x\right]=\sum_{x_{k}\leq x}p\left(x_{k}\right)\]
and is also useful to compute. Note that
.. math::
:nowrap:
\[ F\left(x_{k}\right)-F\left(x_{k-1}\right)=p\left(x_{k}\right)\]
Survival Function
-----------------
The survival function is just
.. math::
:nowrap:
\[ S\left(x\right)=1-F\left(x\right)=P\left[X>k\right]\]
the probability that the random variable is strictly larger than :math:`k` .
Percent Point Function (Inverse CDF)
------------------------------------
The percent point function is the inverse of the cumulative
distribution function and is
.. math::
:nowrap:
\[ G\left(q\right)=F^{-1}\left(q\right)\]
For discrete distributions, this must be modified for cases where
there is no :math:`x_{k}` such that :math:`F\left(x_{k}\right)=q.` In these cases we choose :math:`G\left(q\right)` to be the smallest value :math:`x_{k}=G\left(q\right)` for which :math:`F\left(x_{k}\right)\geq q` . If :math:`q=0` then we define :math:`G\left(0\right)=a-1` . This definition allows random variates to be defined in the same way
as with continuous rv's using the inverse cdf on a uniform
distribution to generate random variates.
Inverse survival function
-------------------------
The inverse survival function is the inverse of the survival function
.. math::
:nowrap:
\[ Z\left(\alpha\right)=S^{-1}\left(\alpha\right)=G\left(1-\alpha\right)\]
and is thus the smallest non-negative integer :math:`k` for which :math:`F\left(k\right)\geq1-\alpha` or the smallest non-negative integer :math:`k` for which :math:`S\left(k\right)\leq\alpha.`
Hazard functions
----------------
If desired, the hazard function and the cumulative hazard function
could be defined as
.. math::
:nowrap:
\[ h\left(x_{k}\right)=\frac{p\left(x_{k}\right)}{1-F\left(x_{k}\right)}\]
and
.. math::
:nowrap:
\[ H\left(x\right)=\sum_{x_{k}\leq x}h\left(x_{k}\right)=\sum_{x_{k}\leq x}\frac{F\left(x_{k}\right)-F\left(x_{k-1}\right)}{1-F\left(x_{k}\right)}.\]
Moments
-------
Non-central moments are defined using the PDF
.. math::
:nowrap:
\[ \mu_{m}^{\prime}=E\left[X^{m}\right]=\sum_{k}x_{k}^{m}p\left(x_{k}\right).\]
Central moments are computed similarly, with :math:`\mu=\mu_{1}^{\prime}` :
.. math::
:nowrap:
\begin{eqnarray*} \mu_{m}=E\left[\left(X-\mu\right)^{m}\right] & = & \sum_{k}\left(x_{k}-\mu\right)^{m}p\left(x_{k}\right)\\ & = & \sum_{k=0}^{m}\left(-1\right)^{m-k}\left(\begin{array}{c} m\\ k\end{array}\right)\mu^{m-k}\mu_{k}^{\prime}\end{eqnarray*}
The mean is the first moment
.. math::
:nowrap:
\[ \mu=\mu_{1}^{\prime}=E\left[X\right]=\sum_{k}x_{k}p\left(x_{k}\right)\]
the variance is the second central moment
.. math::
:nowrap:
\[ \mu_{2}=E\left[\left(X-\mu\right)^{2}\right]=\sum_{x_{k}}x_{k}^{2}p\left(x_{k}\right)-\mu^{2}.\]
Skewness is defined as
.. math::
:nowrap:
\[ \gamma_{1}=\frac{\mu_{3}}{\mu_{2}^{3/2}}\]
while (Fisher) kurtosis is
.. math::
:nowrap:
\[ \gamma_{2}=\frac{\mu_{4}}{\mu_{2}^{2}}-3,\]
so that a normal distribution has a kurtosis of zero.
Moment generating function
--------------------------
The moment generating function is defined as
.. math::
:nowrap:
\[ M_{X}\left(t\right)=E\left[e^{Xt}\right]=\sum_{x_{k}}e^{x_{k}t}p\left(x_{k}\right)\]
Moments are found as the derivatives of the moment generating function
evaluated at :math:`0.`
Fitting data
------------
To fit data to a distribution, maximizing the likelihood function is
common. Alternatively, some distributions have well-known minimum
variance unbiased estimators. These will be chosen by default, but the
likelihood function will always be available for minimizing.
If :math:`f_{i}\left(k;\boldsymbol{\theta}\right)` is the PDF of a random variable, where :math:`\boldsymbol{\theta}` is a vector of parameters ( *e.g.* :math:`L` and :math:`S` ), then for a collection of :math:`N` independent samples from this distribution, the joint distribution of the
random vector :math:`\mathbf{k}` is
.. math::
:nowrap:
\[ f\left(\mathbf{k};\boldsymbol{\theta}\right)=\prod_{i=1}^{N}f_{i}\left(k_{i};\boldsymbol{\theta}\right).\]
The maximum likelihood estimates of the parameters :math:`\boldsymbol{\theta}` are the parameters which maximize this function with :math:`\mathbf{k}` fixed and given by the data:
.. math::
:nowrap:
\begin{eqnarray*} \hat{\boldsymbol{\theta}} & = & \arg\max_{\boldsymbol{\theta}}f\left(\mathbf{k};\boldsymbol{\theta}\right)\\ & = & \arg\min_{\boldsymbol{\theta}}l_{\mathbf{k}}\left(\boldsymbol{\theta}\right).\end{eqnarray*}
where
.. math::
:nowrap:
\begin{eqnarray*} l_{\mathbf{k}}\left(\boldsymbol{\theta}\right) & = & -\sum_{i=1}^{N}\log f\left(k_{i};\boldsymbol{\theta}\right)\\ & = & -N\overline{\log f\left(k_{i};\boldsymbol{\theta}\right)}\end{eqnarray*}
Standard notation for mean
--------------------------
We will use
.. math::
:nowrap:
\[ \overline{y\left(\mathbf{x}\right)}=\frac{1}{N}\sum_{i=1}^{N}y\left(x_{i}\right)\]
where :math:`N` should be clear from context.
Combinations
------------
Note that
.. math::
:nowrap:
\[ k!=k\cdot\left(k-1\right)\cdot\left(k-2\right)\cdot\cdots\cdot1=\Gamma\left(k+1\right)\]
and has special cases of
.. math::
:nowrap:
\begin{eqnarray*} 0! & \equiv & 1\\ k! & \equiv & 0\quad k<0\end{eqnarray*}
and
.. math::
:nowrap:
\[ \left(\begin{array}{c} n\\ k\end{array}\right)=\frac{n!}{\left(n-k\right)!k!}.\]
If :math:`n<0` or :math:`k<0` or :math:`k>n` we define :math:`\left(\begin{array}{c} n\\ k\end{array}\right)=0`
Bernoulli
=========
A Bernoulli random variable of parameter :math:`p` takes one of only two values :math:`X=0` or :math:`X=1` . The probability of success ( :math:`X=1` ) is :math:`p` , and the probability of failure ( :math:`X=0` ) is :math:`1-p.` It can be thought of as a binomial random variable with :math:`n=1` . The PMF is :math:`p\left(k\right)=0` for :math:`k\neq0,1` and
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;p\right) & = & \begin{cases} 1-p & k=0\\ p & k=1\end{cases}\\ F\left(x;p\right) & = & \begin{cases} 0 & x<0\\ 1-p & 0\le x<1\\ 1 & 1\leq x\end{cases}\\ G\left(q;p\right) & = & \begin{cases} 0 & 0\leq q<1-p\\ 1 & 1-p\leq q\leq1\end{cases}\\ \mu & = & p\\ \mu_{2} & = & p\left(1-p\right)\\ \gamma_{3} & = & \frac{1-2p}{\sqrt{p\left(1-p\right)}}\\ \gamma_{4} & = & \frac{1-6p\left(1-p\right)}{p\left(1-p\right)}\end{eqnarray*}
.. math::
:nowrap:
\[ M\left(t\right)=1-p\left(1-e^{t}\right)\]
.. math::
:nowrap:
\[ \mu_{m}^{\prime}=p\]
.. math::
:nowrap:
\[ h\left[X\right]=p\log p+\left(1-p\right)\log\left(1-p\right)\]
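These formulas can be checked numerically against the corresponding SciPy distribution; a quick sketch:
>>> from scipy import stats
>>> p = 0.25
>>> mean, var = stats.bernoulli.stats(p, moments='mv')
>>> float(mean), float(var)        # mu = p and mu_2 = p*(1 - p)
(0.25, 0.1875)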
Binomial
========
A binomial random variable with parameters :math:`\left(n,p\right)` can be described as the sum of :math:`n` independent Bernoulli random variables of parameter :math:`p;`
.. math::
:nowrap:
\[ Y=\sum_{i=1}^{n}X_{i}.\]
Therefore, this random variable counts the number of successes in :math:`n` independent trials of a random experiment where the probability of
success is :math:`p.`
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;n,p\right) & = & \left(\begin{array}{c} n\\ k\end{array}\right)p^{k}\left(1-p\right)^{n-k}\,\, k\in\left\{ 0,1,\ldots n\right\} ,\\ F\left(x;n,p\right) & = & \sum_{k\leq x}\left(\begin{array}{c} n\\ k\end{array}\right)p^{k}\left(1-p\right)^{n-k}=I_{1-p}\left(n-\left\lfloor x\right\rfloor ,\left\lfloor x\right\rfloor +1\right)\quad x\geq0\end{eqnarray*}
where the incomplete beta integral is
.. math::
:nowrap:
\[ I_{x}\left(a,b\right)=\frac{\Gamma\left(a+b\right)}{\Gamma\left(a\right)\Gamma\left(b\right)}\int_{0}^{x}t^{a-1}\left(1-t\right)^{b-1}dt.\]
Now
.. math::
:nowrap:
\begin{eqnarray*} \mu & = & np\\ \mu_{2} & = & np\left(1-p\right)\\ \gamma_{1} & = & \frac{1-2p}{\sqrt{np\left(1-p\right)}}\\ \gamma_{2} & = & \frac{1-6p\left(1-p\right)}{np\left(1-p\right)}.\end{eqnarray*}
.. math::
:nowrap:
\[ M\left(t\right)=\left[1-p\left(1-e^{t}\right)\right]^{n}\]
Boltzmann (truncated Planck)
============================
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;N,\lambda\right) & = & \frac{1-e^{-\lambda}}{1-e^{-\lambda N}}\exp\left(-\lambda k\right)\quad k\in\left\{ 0,1,\ldots,N-1\right\} \\ F\left(x;N,\lambda\right) & = & \left\{ \begin{array}{cc} 0 & x<0\\ \frac{1-\exp\left[-\lambda\left(\left\lfloor x\right\rfloor +1\right)\right]}{1-\exp\left(-\lambda N\right)} & 0\leq x\leq N-1\\ 1 & x\geq N-1\end{array}\right.\\ G\left(q,\lambda\right) & = & \left\lceil -\frac{1}{\lambda}\log\left[1-q\left(1-e^{-\lambda N}\right)\right]-1\right\rceil \end{eqnarray*}
Define :math:`z=e^{-\lambda}`
.. math::
:nowrap:
\begin{eqnarray*} \mu & = & \frac{z}{1-z}-\frac{Nz^{N}}{1-z^{N}}\\ \mu_{2} & = & \frac{z}{\left(1-z\right)^{2}}-\frac{N^{2}z^{N}}{\left(1-z^{N}\right)^{2}}\\ \gamma_{1} & = & \frac{z\left(1+z\right)\left(\frac{1-z^{N}}{1-z}\right)^{3}-N^{3}z^{N}\left(1+z^{N}\right)}{\left[z\left(\frac{1-z^{N}}{1-z}\right)^{2}-N^{2}z^{N}\right]^{3/2}}\\ \gamma_{2} & = & \frac{z\left(1+4z+z^{2}\right)\left(\frac{1-z^{N}}{1-z}\right)^{4}-N^{4}z^{N}\left(1+4z^{N}+z^{2N}\right)}{\left[z\left(\frac{1-z^{N}}{1-z}\right)^{2}-N^{2}z^{N}\right]^{2}}\end{eqnarray*}
.. math::
:nowrap:
\[ M\left(t\right)=\frac{1-e^{N\left(t-\lambda\right)}}{1-e^{t-\lambda}}\frac{1-e^{-\lambda}}{1-e^{-\lambda N}}\]
Planck (discrete exponential)
=============================
Named Planck because of its relationship to the black-body problem he
solved.
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;\lambda\right) & = & \left(1-e^{-\lambda}\right)e^{-\lambda k}\quad k\lambda\geq0\\ F\left(x;\lambda\right) & = & 1-e^{-\lambda\left(\left\lfloor x\right\rfloor +1\right)}\quad x\lambda\geq0\\ G\left(q;\lambda\right) & = & \left\lceil -\frac{1}{\lambda}\log\left[1-q\right]-1\right\rceil .\end{eqnarray*}
.. math::
:nowrap:
\begin{eqnarray*} \mu & = & \frac{1}{e^{\lambda}-1}\\ \mu_{2} & = & \frac{e^{-\lambda}}{\left(1-e^{-\lambda}\right)^{2}}\\ \gamma_{1} & = & 2\cosh\left(\frac{\lambda}{2}\right)\\ \gamma_{2} & = & 4+2\cosh\left(\lambda\right)\end{eqnarray*}
.. math::
:nowrap:
\[ M\left(t\right)=\frac{1-e^{-\lambda}}{1-e^{t-\lambda}}\]
.. math::
:nowrap:
\[ h\left[X\right]=\frac{\lambda e^{-\lambda}}{1-e^{-\lambda}}-\log\left(1-e^{-\lambda}\right)\]
Poisson
=======
The Poisson random variable counts the number of successes in :math:`n` independent Bernoulli trials in the limit as :math:`n\rightarrow\infty` and :math:`p\rightarrow0` where the probability of success in each trial is :math:`p` and :math:`np=\lambda\geq0` is a constant. It can be used to approximate the Binomial random
variable or in its own right to count the number of events that occur
in the interval :math:`\left[0,t\right]` for a process satisfying certain "sparsity" constraints. The functions are
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;\lambda\right) & = & e^{-\lambda}\frac{\lambda^{k}}{k!}\quad k\geq0,\\ F\left(x;\lambda\right) & = & \sum_{n=0}^{\left\lfloor x\right\rfloor }e^{-\lambda}\frac{\lambda^{n}}{n!}=\frac{1}{\Gamma\left(\left\lfloor x\right\rfloor +1\right)}\int_{\lambda}^{\infty}t^{\left\lfloor x\right\rfloor }e^{-t}dt,\\ \mu & = & \lambda\\ \mu_{2} & = & \lambda\\ \gamma_{1} & = & \frac{1}{\sqrt{\lambda}}\\ \gamma_{2} & = & \frac{1}{\lambda}.\end{eqnarray*}
.. math::
:nowrap:
\[ M\left(t\right)=\exp\left[\lambda\left(e^{t}-1\right)\right].\]
Geometric
=========
The geometric random variable with parameter :math:`p\in\left(0,1\right)` can be defined as the number of trials required to obtain a success
where the probability of success on each trial is :math:`p` . Thus,
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;p\right) & = & \left(1-p\right)^{k-1}p\quad k\geq1\\ F\left(x;p\right) & = & 1-\left(1-p\right)^{\left\lfloor x\right\rfloor }\quad x\geq1\\ G\left(q;p\right) & = & \left\lceil \frac{\log\left(1-q\right)}{\log\left(1-p\right)}\right\rceil \\ \mu & = & \frac{1}{p}\\ \mu_{2} & = & \frac{1-p}{p^{2}}\\ \gamma_{1} & = & \frac{2-p}{\sqrt{1-p}}\\ \gamma_{2} & = & \frac{p^{2}-6p+6}{1-p}.\end{eqnarray*}
.. math::
:nowrap:
\begin{eqnarray*} M\left(t\right) & = & \frac{p}{e^{-t}-\left(1-p\right)}\end{eqnarray*}
Negative Binomial
=================
The negative binomial random variable with parameters :math:`n` and :math:`p\in\left(0,1\right)` can be defined as the number of *extra* independent trials (beyond :math:`n` ) required to accumulate a total of :math:`n` successes where the probability of a success on each trial is :math:`p.` Equivalently, this random variable is the number of failures
encountered while accumulating :math:`n` successes during independent trials of an experiment that succeeds
with probability :math:`p.` Thus,
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;n,p\right) & = & \left(\begin{array}{c} k+n-1\\ n-1\end{array}\right)p^{n}\left(1-p\right)^{k}\quad k\geq0\\ F\left(x;n,p\right) & = & \sum_{i=0}^{\left\lfloor x\right\rfloor }\left(\begin{array}{c} i+n-1\\ i\end{array}\right)p^{n}\left(1-p\right)^{i}\quad x\geq0\\ & = & I_{p}\left(n,\left\lfloor x\right\rfloor +1\right)\quad x\geq0\\ \mu & = & n\frac{1-p}{p}\\ \mu_{2} & = & n\frac{1-p}{p^{2}}\\ \gamma_{1} & = & \frac{2-p}{\sqrt{n\left(1-p\right)}}\\ \gamma_{2} & = & \frac{p^{2}+6\left(1-p\right)}{n\left(1-p\right)}.\end{eqnarray*}
Recall that :math:`I_{p}\left(a,b\right)` is the incomplete beta integral.
Hypergeometric
==============
The hypergeometric random variable with parameters :math:`\left(M,n,N\right)` counts the number of "good" objects in a sample of size :math:`N` chosen without replacement from a population of :math:`M` objects where :math:`n` is the number of "good" objects in the total population.
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;N,n,M\right) & = & \frac{\left(\begin{array}{c} n\\ k\end{array}\right)\left(\begin{array}{c} M-n\\ N-k\end{array}\right)}{\left(\begin{array}{c} M\\ N\end{array}\right)}\quad N-\left(M-n\right)\leq k\leq\min\left(n,N\right)\\ F\left(x;N,n,M\right) & = & \sum_{k=0}^{\left\lfloor x\right\rfloor }\frac{\left(\begin{array}{c} m\\ k\end{array}\right)\left(\begin{array}{c} N-m\\ n-k\end{array}\right)}{\left(\begin{array}{c} N\\ n\end{array}\right)},\\ \mu & = & \frac{nN}{M}\\ \mu_{2} & = & \frac{nN\left(M-n\right)\left(M-N\right)}{M^{2}\left(M-1\right)}\\ \gamma_{1} & = & \frac{\left(M-2n\right)\left(M-2N\right)}{M-2}\sqrt{\frac{M-1}{nN\left(M-m\right)\left(M-n\right)}}\\ \gamma_{2} & = & \frac{g\left(N,n,M\right)}{nN\left(M-n\right)\left(M-3\right)\left(M-2\right)\left(N-M\right)}\end{eqnarray*}
where (defining :math:`m=M-n` )
.. math::
:nowrap:
\begin{eqnarray*} g\left(N,n,M\right) & = & m^{3}-m^{5}+3m^{2}n-6m^{3}n+m^{4}n+3mn^{2}\\ & & -12m^{2}n^{2}+8m^{3}n^{2}+n^{3}-6mn^{3}+8m^{2}n^{3}\\ & & +mn^{4}-n^{5}-6m^{3}N+6m^{4}N+18m^{2}nN\\ & & -6m^{3}nN+18mn^{2}N-24m^{2}n^{2}N-6n^{3}N\\ & & -6mn^{3}N+6n^{4}N+6m^{2}N^{2}-6m^{3}N^{2}-24mnN^{2}\\ & & +12m^{2}nN^{2}+6n^{2}N^{2}+12mn^{2}N^{2}-6n^{3}N^{2}.\end{eqnarray*}
Zipf (Zeta)
===========
A random variable has the zeta distribution (also called the Zipf
distribution) with parameter :math:`\alpha>1` if its probability mass function is given by
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;\alpha\right) & = & \frac{1}{\zeta\left(\alpha\right)k^{\alpha}}\quad k\geq1\end{eqnarray*}
where
.. math::
:nowrap:
\[ \zeta\left(\alpha\right)=\sum_{n=1}^{\infty}\frac{1}{n^{\alpha}}\]
is the Riemann zeta function. Other functions of this distribution are
.. math::
:nowrap:
\begin{eqnarray*} F\left(x;\alpha\right) & = & \frac{1}{\zeta\left(\alpha\right)}\sum_{k=1}^{\left\lfloor x\right\rfloor }\frac{1}{k^{\alpha}}\\ \mu & = & \frac{\zeta_{1}}{\zeta_{0}}\quad\alpha>2\\ \mu_{2} & = & \frac{\zeta_{2}\zeta_{0}-\zeta_{1}^{2}}{\zeta_{0}^{2}}\quad\alpha>3\\ \gamma_{1} & = & \frac{\zeta_{3}\zeta_{0}^{2}-3\zeta_{0}\zeta_{1}\zeta_{2}+2\zeta_{1}^{3}}{\left[\zeta_{2}\zeta_{0}-\zeta_{1}^{2}\right]^{3/2}}\quad\alpha>4\\ \gamma_{2} & = & \frac{\zeta_{4}\zeta_{0}^{3}-4\zeta_{3}\zeta_{1}\zeta_{0}^{2}+12\zeta_{2}\zeta_{1}^{2}\zeta_{0}-6\zeta_{1}^{4}-3\zeta_{2}^{2}\zeta_{0}^{2}}{\left(\zeta_{2}\zeta_{0}-\zeta_{1}^{2}\right)^{2}}.\end{eqnarray*}
.. math::
:nowrap:
\begin{eqnarray*} M\left(t\right) & = & \frac{\textrm{Li}_{\alpha}\left(e^{t}\right)}{\zeta\left(\alpha\right)}\end{eqnarray*}
where :math:`\zeta_{i}=\zeta\left(\alpha-i\right)` and :math:`\textrm{Li}_{n}\left(z\right)` is the :math:`n^{\textrm{th}}` polylogarithm function of :math:`z` defined as
.. math::
:nowrap:
\[ \textrm{Li}_{n}\left(z\right)\equiv\sum_{k=1}^{\infty}\frac{z^{k}}{k^{n}}\]
.. math::
:nowrap:
\[ \mu_{n}^{\prime}=\left.M^{\left(n\right)}\left(t\right)\right|_{t=0}=\left.\frac{\textrm{Li}_{\alpha-n}\left(e^{t}\right)}{\zeta\left(a\right)}\right|_{t=0}=\frac{\zeta\left(\alpha-n\right)}{\zeta\left(\alpha\right)}\]
Logarithmic (Log-Series, Series)
================================
The logarithmic distribution with parameter :math:`p` has a probability mass function with terms proportional to the Taylor
series expansion of :math:`\log\left(1-p\right)`
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;p\right) & = & -\frac{p^{k}}{k\log\left(1-p\right)}\quad k\geq1\\ F\left(x;p\right) & = & -\frac{1}{\log\left(1-p\right)}\sum_{k=1}^{\left\lfloor x\right\rfloor }\frac{p^{k}}{k}=1+\frac{p^{1+\left\lfloor x\right\rfloor }\Phi\left(p,1,1+\left\lfloor x\right\rfloor \right)}{\log\left(1-p\right)}\end{eqnarray*}
where
.. math::
:nowrap:
\[ \Phi\left(z,s,a\right)=\sum_{k=0}^{\infty}\frac{z^{k}}{\left(a+k\right)^{s}}\]
is the Lerch Transcendent. Also define :math:`r=\log\left(1-p\right)`
.. math::
:nowrap:
\begin{eqnarray*} \mu & = & -\frac{p}{\left(1-p\right)r}\\ \mu_{2} & = & -\frac{p\left[p+r\right]}{\left(1-p\right)^{2}r^{2}}\\ \gamma_{1} & = & -\frac{2p^{2}+3pr+\left(1+p\right)r^{2}}{r\left(p+r\right)\sqrt{-p\left(p+r\right)}}r\\ \gamma_{2} & = & -\frac{6p^{3}+12p^{2}r+p\left(4p+7\right)r^{2}+\left(p^{2}+4p+1\right)r^{3}}{p\left(p+r\right)^{2}}.\end{eqnarray*}
.. math::
:nowrap:
\begin{eqnarray*} M\left(t\right) & = & -\frac{1}{\log\left(1-p\right)}\sum_{k=1}^{\infty}\frac{e^{tk}p^{k}}{k}\\ & = & \frac{\log\left(1-pe^{t}\right)}{\log\left(1-p\right)}\end{eqnarray*}
Thus,
.. math::
:nowrap:
\[ \mu_{n}^{\prime}=\left.M^{\left(n\right)}\left(t\right)\right|_{t=0}=\left.\frac{\textrm{Li}_{1-n}\left(pe^{t}\right)}{\log\left(1-p\right)}\right|_{t=0}=-\frac{\textrm{Li}_{1-n}\left(p\right)}{\log\left(1-p\right)}.\]
Discrete Uniform (randint)
==========================
The discrete uniform distribution with parameters :math:`\left(a,b\right)` constructs a random variable that has an equal probability of being
any one of the integers in the half-open range :math:`[a,b).` If :math:`a` is not given it is assumed to be zero and the only parameter is :math:`b.` Therefore,
.. math::
:nowrap:
\begin{eqnarray*} p\left(k;a,b\right) & = & \frac{1}{b-a}\quad a\leq k<b\\ F\left(x;a,b\right) & = & \frac{\left\lfloor x\right\rfloor -a}{b-a}\quad a\leq x\leq b\\ G\left(q;a,b\right) & = & \left\lceil q\left(b-a\right)+a\right\rceil \\ \mu & = & \frac{b+a-1}{2}\\ \mu_{2} & = & \frac{\left(b-a-1\right)\left(b-a+1\right)}{12}\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & -\frac{6}{5}\frac{\left(b-a\right)^{2}+1}{\left(b-a-1\right)\left(b-a+1\right)}.\end{eqnarray*}
.. math::
:nowrap:
\begin{eqnarray*} M\left(t\right) & = & \frac{1}{b-a}\sum_{k=a}^{b-1}e^{tk}\\ & = & \frac{e^{bt}-e^{at}}{\left(b-a\right)\left(e^{t}-1\right)}\end{eqnarray*}
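In SciPy this distribution is ``stats.randint``; a quick sketch of the correspondence:
>>> from scipy import stats
>>> a, b = 1, 7                             # support is the half-open range [a, b)
>>> float(stats.randint.pmf(3, a, b))       # 1 / (b - a) = 1/6
0.16666666666666666
>>> float(stats.randint.stats(a, b, moments='m'))   # (b + a - 1) / 2
3.5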
Discrete Laplacian
==================
Defined over all integers for :math:`a>0`
.. math::
:nowrap:
\begin{eqnarray*} p\left(k\right) & = & \tanh\left(\frac{a}{2}\right)e^{-a\left|k\right|},\\ F\left(x\right) & = & \left\{ \begin{array}{cc} \frac{e^{a\left(\left\lfloor x\right\rfloor +1\right)}}{e^{a}+1} & \left\lfloor x\right\rfloor <0,\\ 1-\frac{e^{-a\left\lfloor x\right\rfloor }}{e^{a}+1} & \left\lfloor x\right\rfloor \geq0.\end{array}\right.\\ G\left(q\right) & = & \left\{ \begin{array}{cc} \left\lceil \frac{1}{a}\log\left[q\left(e^{a}+1\right)\right]-1\right\rceil & q<\frac{1}{1+e^{-a}},\\ \left\lceil -\frac{1}{a}\log\left[\left(1-q\right)\left(1+e^{a}\right)\right]\right\rceil & q\geq\frac{1}{1+e^{-a}}.\end{array}\right.\end{eqnarray*}
.. math::
:nowrap:
\begin{eqnarray*} M\left(t\right) & = & \tanh\left(\frac{a}{2}\right)\sum_{k=-\infty}^{\infty}e^{tk}e^{-a\left|k\right|}\\ & = & C\left(1+\sum_{k=1}^{\infty}e^{-\left(t+a\right)k}+\sum_{1}^{\infty}e^{\left(t-a\right)k}\right)\\ & = & \tanh\left(\frac{a}{2}\right)\left(1+\frac{e^{-\left(t+a\right)}}{1-e^{-\left(t+a\right)}}+\frac{e^{t-a}}{1-e^{t-a}}\right)\\ & = & \frac{\tanh\left(\frac{a}{2}\right)\sinh a}{\cosh a-\cosh t}.\end{eqnarray*}
Thus,
.. math::
:nowrap:
\[ \mu_{n}^{\prime}=M^{\left(n\right)}\left(0\right)=\left[1+\left(-1\right)^{n}\right]\textrm{Li}_{-n}\left(e^{-a}\right)\]
where :math:`\textrm{Li}_{-n}\left(z\right)` is the polylogarithm function of order :math:`-n` evaluated at :math:`z.`
.. math::
:nowrap:
\[ h\left[X\right]=-\log\left(\tanh\left(\frac{a}{2}\right)\right)+\frac{a}{\sinh a}\]
Discrete Gaussian*
==================
Defined for all :math:`\mu` and :math:`\lambda>0` and :math:`k`
.. math::
:nowrap:
\[ p\left(k;\mu,\lambda\right)=\frac{1}{Z\left(\lambda\right)}\exp\left[-\lambda\left(k-\mu\right)^{2}\right]\]
where
.. math::
:nowrap:
\[ Z\left(\lambda\right)=\sum_{k=-\infty}^{\infty}\exp\left[-\lambda k^{2}\right]\]
.. math::
:nowrap:
\begin{eqnarray*} \mu & = & \mu\\ \mu_{2} & = & -\frac{\partial}{\partial\lambda}\log Z\left(\lambda\right)\\ & = & G\left(\lambda\right)e^{-\lambda}\end{eqnarray*}
where :math:`G\left(0\right)\rightarrow\infty` and :math:`G\left(\infty\right)\rightarrow2` with a minimum less than 2 near :math:`\lambda=1`
.. math::
:nowrap:
\[ G\left(\lambda\right)=\frac{1}{Z\left(\lambda\right)}\sum_{k=-\infty}^{\infty}k^{2}\exp\left[-\lambda\left(k+1\right)\left(k-1\right)\right]\]

File diff suppressed because it is too large

View file

@ -0,0 +1,19 @@
======================================
C/C++ integration (:mod:`scipy.weave`)
======================================
.. warning::
This documentation is work-in-progress and unorganized.
.. automodule:: scipy.weave
:members:
.. autosummary::
:toctree: generated/
inline
blitz
ext_tools
accelerate

Some files were not shown because too many files have changed in this diff