Commit of all v20 ufpy code:

- Code brought over from the following Raytheon repos and directories:
  - awips2 repo:
    - awips2/pythonPackages
    - awips2/edexOsgi/com.raytheon.uf.common.alertviz/pythonPackages
    - awips2/edexOsgi/com.raytheon.uf.common.mpe/pythonPackages
    - awips2/edexOsgi/com.raytheon.uf.common.dataplugin.text/pythonPackages
    - awips2/edexOsgi/com.raytheon.uf.common.dataplugin.grid/pythonPackages
    - awips2/edexOsgi/com.raytheon.uf.common.activetable/pythonPackages
    - awips2/edexOsgi/com.raytheon.uf.common.management/pythonPackages
    - awips2/edexOsgi/com.raytheon.uf.common.dataplugin.gfe/pythonPackages
    - awips2/edexOsgi/com.raytheon.uf.common.dataplugin.radar/pythonPackages
    - awips2/edexOsgi/com.raytheon.uf.common.site/pythonPackages
  - awips2-core repo:
    - awips2-core/common/com.raytheon.uf.common.auth/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.message/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.localization/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.datastorage/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.pointdata/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.pypies/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.dataaccess/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.dataplugin.level/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.serialization/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.time/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.dataplugin/pythonPackages
    - awips2-core/common/com.raytheon.uf.common.dataquery/pythonPackages
  - awips2-rpm repo: had to untar and unzip the thrift tarball, then go into lib/py, run `python setup.py build`, and copy in from the build/ subdirectory:
    - foss/thrift-0.14.1/packaged/thrift-0.14.1/lib/py/build/lib.macosx-10.9-x86_64-cpython-38/thrift
Shay Carter 2023-09-12 13:38:19 -06:00
parent cae26d16c3
commit 5f070d0655
202 changed files with 4059 additions and 19433 deletions

LICENSE

@@ -1,30 +0,0 @@
Copyright (c) 2017, Unidata Python AWIPS Developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of the MetPy Developers nor the names of any
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

README (new file)

@@ -0,0 +1,6 @@
The python packages dir exists solely for pure python packages.
NO JAVA/JEP CODE ALLOWED.
Each dir under this directory should be able to be copied to an installed Python's
site-packages dir and function correctly, i.e. it can be imported and used as
needed.

@@ -1,94 +0,0 @@
AWIPS Python Data Access Framework
==================================
|License| |PyPI| |Conda| |CondaDownloads| |circleci| |Travis| |LatestDocs|
|Codacy| |Scrutinizer| |CodeClimate| |PRWelcome|
.. |License| image:: https://img.shields.io/pypi/l/python-awips.svg
    :target: https://pypi.python.org/pypi/python-awips/
    :alt: License

.. |PyPI| image:: https://img.shields.io/pypi/v/python-awips.svg
    :target: https://pypi.python.org/pypi/python-awips/
    :alt: PyPI Package

.. |PyPIDownloads| image:: https://img.shields.io/pypi/dm/python-awips.svg
    :target: https://pypi.python.org/pypi/python-awips/
    :alt: PyPI Downloads

.. |LatestDocs| image:: https://readthedocs.org/projects/pip/badge/?version=latest
    :target: http://python-awips.readthedocs.org/en/latest/
    :alt: Latest Doc Build Status

.. |Travis| image:: https://travis-ci.org/Unidata/python-awips.svg?branch=main
    :target: https://travis-ci.org/Unidata/python-awips
    :alt: Travis Build Status

.. |Codacy| image:: https://api.codacy.com/project/badge/Grade/e281f05c69164779814cad93eb3585cc
    :target: https://www.codacy.com/app/mjames/python-awips
    :alt: Codacy issues

.. |CodeClimate| image:: https://codeclimate.com/github/Unidata/python-awips/badges/gpa.svg
    :target: https://codeclimate.com/github/Unidata/python-awips
    :alt: Code Climate

.. |Scrutinizer| image:: https://scrutinizer-ci.com/g/Unidata/python-awips/badges/quality-score.png?b=main
    :target: https://scrutinizer-ci.com/g/Unidata/python-awips/?branch=main
    :alt: Scrutinizer Code Quality

.. |Conda| image:: https://anaconda.org/conda-forge/python-awips/badges/version.svg
    :target: https://anaconda.org/conda-forge/python-awips
    :alt: Conda Package

.. |PRWelcome| image:: https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=round-square
    :target: https://egghead.io/series/how-to-contribute-to-an-open-source-project-on-github
    :alt: PRs Welcome

.. |circleci| image:: https://img.shields.io/circleci/project/github/conda-forge/python-awips-feedstock/master.svg?label=noarch
    :target: https://circleci.com/gh/conda-forge/python-awips-feedstock
    :alt: circleci

.. |CondaDownloads| image:: https://img.shields.io/conda/dn/conda-forge/python-awips.svg
    :target: https://anaconda.org/conda-forge/python-awips
    :alt: Conda Downloads
About
-----
The python-awips package provides a data access framework for requesting grid and geometry datasets from an AWIPS `EDEX <http://unidata.github.io/awips2/#edex>`_ server. AWIPS and python-awips packages are released and maintained by UCAR's `Unidata Program Center <http://www.unidata.ucar.edu/software/awips2/>`_ in Boulder, Colorado.
Install
-------
- pip install python-awips
or
- conda install -c conda-forge python-awips
Conda Environment
-----------------
- git clone https://github.com/Unidata/python-awips.git
- cd python-awips
- conda env create -f environment.yml
- conda activate python3-awips
- jupyter notebook examples
Documentation
-------------
* http://unidata.github.io/python-awips/
* http://nbviewer.jupyter.org/github/Unidata/python-awips/tree/main/examples/notebooks
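Example
-------
A minimal request sketch (the hostname below is the public Unidata demo EDEX
server; the dataset, parameter, and level names are illustrative)::

    from awips.dataaccess import DataAccessLayer

    # Point the Data Access Framework at an EDEX server
    DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")

    # Request 2-meter temperature from the RAP13 grid
    request = DataAccessLayer.newDataRequest("grid")
    request.setLocationNames("RAP13")
    request.setParameters("T")
    request.setLevels("2.0FHAG")

    times = DataAccessLayer.getAvailableTimes(request)
    grids = DataAccessLayer.getGridData(request, times[-1:])
    print(grids[0].getParameter(), grids[0].getRawData().shape)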
License
-------
Unidata AWIPS source code and binaries (RPMs) are considered to be in the public domain, meaning there are no restrictions on any download, modification, or distribution in any form (original or modified). The Python AWIPS package contains no proprietary content and is therefore not subject to export controls as stated in the Master Rights licensing file and source code headers.

VERSIONS (new file)

@@ -0,0 +1,6 @@
This file contains a listing of the versions of every applicable package.
This file SHOULD be updated whenever a newer version of a particular package is checked
in.
dynamicserialize = RAYTHEON-OWNED (AWIPS II)
ufpy = RAYTHEON-OWNED (AWIPS II)

@@ -1,141 +0,0 @@
#
# Common methods for the a2gtrad and a2advrad scripts.
#
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 08/13/2014 3393 nabowle Initial creation to contain common
# code for a2*radStub scripts.
# 03/15/2015 mjames@ucar Edited/added to awips package as RadarCommon
#
#
def get_datetime_str(record):
    """
    Get the datetime string for a record.

    Args:
        record: the record to get data for.

    Returns:
        datetime string.
    """
    return str(record.getDataTime())[0:19].replace(" ", "_") + ".0"


def get_data_type(azdat):
    """
    Get the radar file type (radial or raster).

    Args:
        azdat: Boolean.

    Returns:
        Radial or raster.
    """
    if azdat:
        return "radial"
    return "raster"


def get_hdf5_data(idra):
    rdat = []
    azdat = []
    depVals = []
    threshVals = []
    if idra:
        for item in idra:
            if item.getName() == "Data":
                rdat = item
            elif item.getName() == "Angles":
                azdat = item
                # dattyp = "radial"
            elif item.getName() == "DependentValues":
                depVals = item.getShortData()
            elif item.getName() == "Thresholds":
                threshVals = item.getShortData()
    return rdat, azdat, depVals, threshVals


def get_header(record, headerFormat, xLen, yLen, azdat, description):
    # Encode dimensions, time, mapping, description, tilt, and VCP
    mytime = get_datetime_str(record)
    dattyp = get_data_type(azdat)
    if headerFormat:
        msg = str(xLen) + " " + str(yLen) + " " + mytime + " " + \
            dattyp + " " + str(record.getLatitude()) + " " + \
            str(record.getLongitude()) + " " + \
            str(record.getElevation()) + " " + \
            str(record.getElevationNumber()) + " " + \
            description + " " + str(record.getTrueElevationAngle()) + " " + \
            str(record.getVolumeCoveragePattern()) + "\n"
    else:
        msg = str(xLen) + " " + str(yLen) + " " + mytime + " " + \
            dattyp + " " + description + " " + \
            str(record.getTrueElevationAngle()) + " " + \
            str(record.getVolumeCoveragePattern()) + "\n"
    return msg


def encode_thresh_vals(threshVals):
    spec = [".", "TH", "ND", "RF", "BI", "GC", "IC", "GR", "WS", "DS",
            "RA", "HR", "BD", "HA", "UK"]
    nnn = len(threshVals)
    j = 0
    msg = ""
    while j < nnn:
        lo = threshVals[j] % 256
        hi = threshVals[j] // 256  # integer division: threshVals are 16-bit shorts
        msg += " "
        j += 1
        if hi < 0:
            if lo > 14:
                msg += "."
            else:
                msg += spec[lo]
            continue
        if hi % 16 >= 8:
            msg += ">"
        elif hi % 8 >= 4:
            msg += "<"
        if hi % 4 >= 2:
            msg += "+"
        elif hi % 2 >= 1:
            msg += "-"
        if hi >= 64:
            msg += "%.2f" % (lo*0.01)
        elif hi % 64 >= 32:
            msg += "%.2f" % (lo*0.05)
        elif hi % 32 >= 16:
            msg += "%.1f" % (lo*0.1)
        else:
            msg += str(lo)
    msg += "\n"
    return msg


def encode_dep_vals(depVals):
    # Stringify each dependent value
    return [str(v) for v in depVals]


def encode_radial(azVals):
    # Copy the azimuth values into a plain list
    return list(azVals)
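A small standalone demonstration of the threshold encoding above (the input shorts are illustrative; each 16-bit value splits into a hi flag byte and a lo data byte, as implemented in encode_thresh_vals):

# 500 = 1*256 + 244 -> hi=1 sets the "-" flag and lo prints as 244
# 16456 = 64*256 + 72 -> hi >= 64 scales lo by 0.01, printing 0.72
print(encode_thresh_vals([500, 16456]))   # " -244 0.72"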

@@ -1,231 +0,0 @@
#
# Classes for retrieving soundings based on gridded data from the Data Access
# Framework
#
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 06/24/15 #4480 dgilling Initial Creation.
#
from awips.dataaccess import DataAccessLayer
from dynamicserialize.dstypes.com.raytheon.uf.common.dataplugin.level import Level
from shapely.geometry import Point
def getSounding(modelName, weatherElements, levels, samplePoint, timeRange=None):
    """
    Performs a series of Data Access Framework requests to retrieve a
    sounding object based on the specified request parameters.

    Args:
        modelName: the grid model datasetid to use as the basis of the sounding.
        weatherElements: a list of parameters to return in the sounding.
        levels: a list of levels to sample the given weather elements at.
        samplePoint: a lat/lon pair to perform the sampling of data at.
        timeRange: (optional) a list of times, or a TimeRange to specify
            which forecast hours to use. If not specified, will default to
            all forecast hours.

    Returns:
        A _SoundingCube instance, which acts as a 3-tiered dictionary, keyed
        by DataTime, then by level and finally by weather element. If no
        data is available for the given request parameters, None is returned.
    """
    (locationNames, parameters, levels, envelope, timeRange) = \
        __sanitizeInputs(modelName, weatherElements, levels, samplePoint, timeRange)
    requestArgs = {'datatype': 'grid', 'locationNames': locationNames,
                   'parameters': parameters, 'levels': levels, 'envelope': envelope}
    req = DataAccessLayer.newDataRequest(**requestArgs)
    response = DataAccessLayer.getGeometryData(req, timeRange)
    soundingObject = _SoundingCube(response)
    return soundingObject


def changeEDEXHost(host):
    """
    Changes the EDEX host the Data Access Framework is communicating with.

    Args:
        host: the EDEX host to connect to
    """
    if host:
        DataAccessLayer.changeEDEXHost(str(host))


def __sanitizeInputs(modelName, weatherElements, levels, samplePoint, timeRange):
    locationNames = [str(modelName)]
    parameters = __buildStringList(weatherElements)
    levels = __buildStringList(levels)
    envelope = Point(samplePoint)
    return locationNames, parameters, levels, envelope, timeRange


def __buildStringList(param):
    if __notStringIter(param):
        return [str(item) for item in param]
    else:
        return [str(param)]


def __notStringIter(iterable):
    if not isinstance(iterable, str):
        try:
            iter(iterable)
            return True
        except TypeError:
            return False


class _SoundingCube(object):
    """
    The top-level sounding object returned when calling ModelSounding.getSounding.

    This object acts as a 3-tiered dict which is keyed by time then level
    then parameter name. Calling times() will return all valid keys into this
    object.
    """

    def __init__(self, geometryDataObjects):
        self._dataDict = {}
        self._sortedTimes = []
        if geometryDataObjects:
            for geometryData in geometryDataObjects:
                dataTime = geometryData.getDataTime()
                level = geometryData.getLevel()
                for parameter in geometryData.getParameters():
                    self.__addItem(parameter, dataTime, level,
                                   geometryData.getNumber(parameter))

    def __addItem(self, parameter, dataTime, level, value):
        timeLayer = self._dataDict.get(dataTime, _SoundingTimeLayer(dataTime))
        self._dataDict[dataTime] = timeLayer
        timeLayer._addItem(parameter, level, value)
        if dataTime not in self._sortedTimes:
            self._sortedTimes.append(dataTime)
            self._sortedTimes.sort()

    def __getitem__(self, key):
        return self._dataDict[key]

    def __len__(self):
        return len(self._dataDict)

    def times(self):
        """
        Returns the valid times for this sounding.

        Returns:
            A list containing the valid DataTimes for this sounding in order.
        """
        return self._sortedTimes


class _SoundingTimeLayer(object):
    """
    The second-level sounding object returned when calling ModelSounding.getSounding.

    This object acts as a 2-tiered dict which is keyed by level then parameter
    name. Calling levels() will return all valid keys into this
    object. Calling time() will return the DataTime for this particular layer.
    """

    def __init__(self, dataTime):
        self._dataTime = dataTime
        self._dataDict = {}

    def _addItem(self, parameter, level, value):
        asString = str(level)
        levelLayer = self._dataDict.get(
            asString, _SoundingTimeAndLevelLayer(self._dataTime, asString))
        levelLayer._addItem(parameter, value)
        self._dataDict[asString] = levelLayer

    def __getitem__(self, key):
        asString = str(key)
        if asString in self._dataDict:
            return self._dataDict[asString]
        else:
            raise KeyError("Level " + str(key) + " is not a valid level for this sounding.")

    def __len__(self):
        return len(self._dataDict)

    def time(self):
        """
        Returns the DataTime for this sounding cube layer.

        Returns:
            The DataTime for this sounding layer.
        """
        return self._dataTime

    def levels(self):
        """
        Returns the valid levels for this sounding.

        Returns:
            A list containing the valid levels for this sounding in order of
            closest to surface to highest from surface.
        """
        sortedLevels = [Level(level) for level in list(self._dataDict.keys())]
        sortedLevels.sort()
        return [str(level) for level in sortedLevels]


class _SoundingTimeAndLevelLayer(object):
    """
    The bottom-level sounding object returned when calling ModelSounding.getSounding.

    This object acts as a dict which is keyed by parameter name. Calling
    parameters() will return all valid keys into this object. Calling time()
    will return the DataTime for this particular layer. Calling level() will
    return the level for this layer.
    """

    def __init__(self, time, level):
        self._time = time
        self._level = level
        self._parameters = {}

    def _addItem(self, parameter, value):
        self._parameters[parameter] = value

    def __getitem__(self, key):
        return self._parameters[key]

    def __len__(self):
        return len(self._parameters)

    def level(self):
        """
        Returns the level for this sounding cube layer.

        Returns:
            The level for this sounding layer.
        """
        return self._level

    def parameters(self):
        """
        Returns the valid parameters for this sounding.

        Returns:
            A list containing the valid parameter names.
        """
        return list(self._parameters.keys())

    def time(self):
        """
        Returns the DataTime for this sounding cube layer.

        Returns:
            The DataTime for this sounding layer.
        """
        return self._time
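A minimal usage sketch for the module above (assuming it is importable as awips.dataaccess.ModelSounding; the model, parameter, level, and sample point values are illustrative):

from awips.dataaccess.ModelSounding import changeEDEXHost, getSounding

changeEDEXHost("edex-cloud.unidata.ucar.edu")
# The sample point is handed straight to shapely's Point, i.e. (x, y) = (lon, lat)
cube = getSounding("GFS", ["T"], ["700MB", "500MB"], (-104.67, 39.87))
if cube is not None:
    for t in cube.times():
        for lvl in cube[t].levels():
            print(t, lvl, cube[t][lvl]["T"])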

@@ -1,128 +0,0 @@
import os
import numpy
from datetime import datetime
from awips import ThriftClient
from dynamicserialize.dstypes.gov.noaa.nws.ncep.common.dataplugin.gempak.request import GetGridDataRequest
class GridDataRetriever:

    def __init__(self, server, pluginName, modelId, cycle, forecast,
                 level1, level2, vcoord, param, nnx, nny):
        self.pluginName = pluginName
        self.modelId = modelId
        self.cycle = cycle
        self.forecast = forecast
        self.level1 = level1
        self.level2 = level2
        self.vcoord = vcoord
        self.param = param
        self.nx = nnx
        self.ny = nny
        self.host = os.getenv("DEFAULT_HOST", server)
        self.port = os.getenv("DEFAULT_PORT", "9581")
        self.client = ThriftClient.ThriftClient(self.host, self.port)

    def getData(self):
        """ Sends the ThriftClient request and returns the grid data. """
        req = GetGridDataRequest()
        req.setPluginName(self.pluginName)
        req.setModelId(self.modelId)
        dt = datetime.strptime(self.cycle, '%y%m%d/%H%M')
        ct = datetime.strftime(dt, '%Y-%m-%d %H:%M:%S')
        req.setReftime(ct)
        req.setFcstsec(self.forecast)
        if self.level1 == '-1':
            f1 = -999999.0
        else:
            f1 = float(self.level1)
        if self.level2 == '-1':
            f2 = -999999.0
        else:
            f2 = float(self.level2)
        vcoord = self.vcoord
        if vcoord == 'SGMA':
            if f1 >= 0.0:
                f1 = f1 / 10000
            if f2 >= 0.0:
                f2 = f2 / 10000
        elif vcoord == 'DPTH':
            if f1 >= 0.0:
                f1 = f1 / 100.0
            if f2 >= 0.0:
                f2 = f2 / 100.0
        elif vcoord == 'POTV':
            if f1 >= 0.0:
                f1 = f1 / 1000.0
            if f2 >= 0.0:
                f2 = f2 / 1000.0
        req.setLevel1(str(f1))
        req.setLevel2(str(f2))
        req.setVcoord(vcoord)
        req.setParm(self.param)
        resp = self.client.sendRequest(req)

        # Get the dimensions of the grid
        kx = int(self.nx)
        ky = int(self.ny)
        kxky = kx * ky

        # Put the data into a NumPy array
        grid = numpy.asarray(resp.getFloatData())
        # All grids need to be flipped from a GEMPAK point of view
        # Reshape the array into 2D
        grid = numpy.reshape(grid, (ky, kx))
        # Flip the array in the up-down direction
        grid = numpy.flipud(grid)
        # Reshape the array back into 1D
        grid = numpy.reshape(grid, kxky)
        return [replacemissing(x) for x in grid]


def getgriddata(server, table, model, cycle, forecast, level1,
                level2, vcoord, param, nnx, nny):
    gir = GridDataRetriever(server, table, model, cycle, forecast,
                            level1, level2, vcoord, param, nnx, nny)
    return gir.getData()


def getheader(server, table, model, cycle, forecast, level1,
              level2, vcoord, param, nnx, nny):
    return [0, 0]


def replacemissing(x):
    if x == -999999.0:
        return -9999.0
    return x


# This is the standard boilerplate that runs this script as a main
if __name__ == '__main__':
    # Run Test
    srv = 'edex-cloud.unidata.ucar.edu'
    tbl = 'grid'
    mdl = 'GFS20'
    cyc = '131227/0000'
    fcs = '43200'
    lv1 = '500'
    lv2 = '-1'
    vcd = 'PRES'
    prm = 'HGHT'
    nx = '720'
    ny = '361'
    print(getheader(srv, tbl, mdl, cyc, fcs, lv1, lv2, vcd, prm, nx, ny))
    print(getgriddata(srv, tbl, mdl, cyc, fcs, lv1, lv2, vcd, prm, nx, ny))
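The vertical-coordinate conversions above follow GEMPAK's scaled-integer level convention (SGMA x10000, DPTH x100, POTV x1000). A hedged sketch of the round trip; the helper names are illustrative, not part of the original module:

# GEMPAK stores some vertical-coordinate levels as scaled integers
GEMPAK_LEVEL_SCALE = {'SGMA': 10000.0, 'DPTH': 100.0, 'POTV': 1000.0}

def to_edex_level(vcoord, gempak_level):
    # Scale a GEMPAK integer level down to the float level EDEX expects
    return gempak_level / GEMPAK_LEVEL_SCALE.get(vcoord, 1.0)

def to_gempak_level(vcoord, edex_level):
    # Scale an EDEX float level back up to the GEMPAK integer level
    return int(edex_level * GEMPAK_LEVEL_SCALE.get(vcoord, 1.0))

assert to_edex_level('DPTH', 500) == 5.0   # 500 -> 5 m depth
assert to_gempak_level('DPTH', 5.0) == 500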

@@ -1,145 +0,0 @@
import os
import sys
from datetime import datetime
from operator import itemgetter
from awips import ThriftClient
from dynamicserialize.dstypes.gov.noaa.nws.ncep.common.dataplugin.gempak.request import GetGridInfoRequest
class GridInfoRetriever:

    def __init__(self, server, pluginName, modelId, cycle=None, forecast=None):
        self.pluginName = pluginName
        self.modelId = modelId
        self.cycle = cycle
        self.forecast = forecast
        self.host = os.getenv("DEFAULT_HOST", server)
        self.port = os.getenv("DEFAULT_PORT", "9581")
        self.client = ThriftClient.ThriftClient(self.host, self.port)

    def getInfo(self):
        """ Sends the ThriftClient request and returns the grid info records. """
        req = GetGridInfoRequest()
        req.setPluginName(self.pluginName)
        req.setModelId(self.modelId)
        req.setReftime(self.cycle)
        if len(self.cycle) > 2:
            dt = datetime.strptime(self.cycle, '%y%m%d/%H%M')
            ct = datetime.strftime(dt, '%Y-%m-%d %H:%M:%S')
            req.setReftime(ct)
        req.setFcstsec(self.forecast)
        resp = self.client.sendRequest(req)

        # Take care of bytestring encodings in python3
        for i, rec in enumerate(resp):
            resp[i] = {
                key.decode() if isinstance(key, bytes) else key:
                val.decode() if isinstance(val, bytes) else val
                for key, val in rec.items()
            }

        sortresp = sorted(sorted(resp, key=itemgetter("reftime"), reverse=True),
                          key=itemgetter("fcstsec"))

        grids = []
        count = 0
        for record in sortresp:
            s = '{:<12}'.format(record['param'])
            if sys.byteorder == 'little':
                parm1 = (ord(s[3]) << 24) + (ord(s[2]) << 16) + (ord(s[1]) << 8) + ord(s[0])
                parm2 = (ord(s[7]) << 24) + (ord(s[6]) << 16) + (ord(s[5]) << 8) + ord(s[4])
                parm3 = (ord(s[11]) << 24) + (ord(s[10]) << 16) + (ord(s[9]) << 8) + ord(s[8])
            else:
                parm1 = (ord(s[0]) << 24) + (ord(s[1]) << 16) + (ord(s[2]) << 8) + ord(s[3])
                parm2 = (ord(s[4]) << 24) + (ord(s[5]) << 16) + (ord(s[6]) << 8) + ord(s[7])
                parm3 = (ord(s[8]) << 24) + (ord(s[9]) << 16) + (ord(s[10]) << 8) + ord(s[11])

            dt = datetime.strptime(record['reftime'], '%Y-%m-%d %H:%M:%S.%f')
            dattim = dt.month * 100000000 + dt.day * 1000000 + \
                (dt.year % 100) * 10000 + dt.hour * 100 + dt.minute
            # Integer division keeps the packed forecast time an int (Python 3)
            fcsth = (int(record['fcstsec']) // 60) // 60
            fcstm = (int(record['fcstsec']) // 60) % 60
            fcst = 100000 + fcsth * 100 + fcstm

            lv1 = float(record['level1'])
            if lv1 == -999999.0:
                lv1 = -1.0
            lv2 = float(record['level2'])
            if lv2 == -999999.0:
                lv2 = -1.0

            vcd = record['vcoord']
            if vcd == 'NONE':
                ivcd = 0
            elif vcd == 'PRES':
                ivcd = 1
            elif vcd == 'THTA':
                ivcd = 2
            elif vcd == 'HGHT':
                ivcd = 3
            elif vcd == 'SGMA':
                ivcd = 4
                if lv1 >= 0.0:
                    lv1 = lv1 * 10000.0
                if lv2 >= 0.0:
                    lv2 = lv2 * 10000.0
            elif vcd == 'DPTH':
                ivcd = 5
                if lv1 >= 0.0:
                    lv1 = lv1 * 100.0
                if lv2 >= 0.0:
                    lv2 = lv2 * 100.0
            elif vcd == 'HYBL':
                ivcd = 6
            else:
                v = '{:<4}'.format(vcd)
                if sys.byteorder == 'little':
                    ivcd = (ord(v[3]) << 24) + (ord(v[2]) << 16) + (ord(v[1]) << 8) + ord(v[0])
                else:
                    ivcd = (ord(v[0]) << 24) + (ord(v[1]) << 16) + (ord(v[2]) << 8) + ord(v[3])
                if vcd == 'POTV':
                    if lv1 >= 0.0:
                        lv1 = lv1 * 1000.0
                    if lv2 >= 0.0:
                        lv2 = lv2 * 1000.0

            grids.append(9999)
            grids.append(dattim)
            grids.append(fcst)
            grids.append(0)
            grids.append(0)
            grids.append(int(lv1))
            grids.append(int(lv2))
            grids.append(ivcd)
            grids.append(parm1)
            grids.append(parm2)
            grids.append(parm3)

            count += 1
            if count > 29998:
                break

        return grids


def getinfo(server, table, model, cycle, forecast):
    gir = GridInfoRetriever(server, table, model, cycle, forecast)
    return gir.getInfo()


def getrow(server, table, model, cycle=None, forecast=None):
    return [9999, 1]


# This is the standard boilerplate that runs this script as a main
if __name__ == '__main__':
    # Run Test (cycle/forecast values are illustrative)
    srv = 'edex-cloud.unidata.ucar.edu'
    tbl = 'grid'
    mdl = 'NAM40'
    cyc = '131227/0000'
    fcs = '43200'
    print(getrow(srv, tbl, mdl))
    print(getinfo(srv, tbl, mdl, cyc, fcs))
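The parm1/parm2/parm3 arithmetic above packs a 12-character GEMPAK parameter name into three native-endian 32-bit ints. A hedged round-trip sketch of the same packing using struct (pack_parm/unpack_parm are illustrative helpers, not part of the original module):

import struct

def pack_parm(name):
    # Pad to 12 chars, then reinterpret as three native-endian ints,
    # matching the byte-order-dependent ord() arithmetic in getInfo()
    s = '{:<12}'.format(name).encode('ascii')
    return struct.unpack('@3i', s)

def unpack_parm(parm1, parm2, parm3):
    # Reverse the packing to recover the parameter name
    return struct.pack('@3i', parm1, parm2, parm3).decode('ascii').rstrip()

assert unpack_parm(*pack_parm('HGHT')) == 'HGHT'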

@@ -1,301 +0,0 @@
import os
import math
from awips import ThriftClient
from dynamicserialize.dstypes.gov.noaa.nws.ncep.common.dataplugin.gempak.request import GetGridNavRequest
from ctypes import Union, c_char, c_int, c_float
EARTH_RADIUS = 6371200.0
DEG_TO_RAD = math.pi / 180.0
RAD_TO_DEG = 180.0 / math.pi
TWOPI = math.pi * 2.0
HALFPI = math.pi / 2.0
PI4TH = math.pi / 4.0
PI3RD = math.pi / 3.0
def createPolar(nsflag, clon, lat1, lon1, dx, dy, unit, nx, ny):
    clonr = clon * DEG_TO_RAD
    latr = lat1 * DEG_TO_RAD
    lonr = lon1 * DEG_TO_RAD
    if nsflag == 'N':
        x1 = EARTH_RADIUS * math.tan(PI4TH - latr/2.0) * math.sin(lonr-clonr)
        y1 = -1 * EARTH_RADIUS * math.tan(PI4TH - latr/2.0) * math.cos(lonr-clonr)
    else:
        x1 = EARTH_RADIUS * math.tan(PI4TH + latr/2.0) * math.sin(lonr-clonr)
        y1 = EARTH_RADIUS * math.tan(PI4TH + latr/2.0) * math.cos(lonr-clonr)
    if unit == 'm':
        tdx = dx / (1 + math.sin(PI3RD))
        tdy = dy / (1 + math.sin(PI3RD))
    else:
        tdx = (dx*1000.0) / (1 + math.sin(PI3RD))
        tdy = (dy*1000.0) / (1 + math.sin(PI3RD))
    x2 = x1 + tdx * (nx-1)
    y2 = y1 + tdy * (ny-1)
    xll = min(x1, x2)
    yll = min(y1, y2)
    xur = max(x1, x2)
    yur = max(y1, y2)
    if nsflag == 'N':
        latll = (HALFPI - 2*math.atan2(math.hypot(xll, yll), EARTH_RADIUS)) * RAD_TO_DEG
        rtemp = clonr + math.atan2(xll, -yll)
    else:
        latll = -1 * (HALFPI - 2*math.atan2(math.hypot(xll, yll), EARTH_RADIUS)) * RAD_TO_DEG
        rtemp = clonr + math.atan2(xll, yll)
    if rtemp > math.pi:
        lonll = (rtemp-TWOPI) * RAD_TO_DEG
    elif rtemp < -math.pi:
        lonll = (rtemp+TWOPI) * RAD_TO_DEG
    else:
        lonll = rtemp * RAD_TO_DEG
    if nsflag == 'N':
        latur = (HALFPI - 2*math.atan2(math.hypot(xur, yur), EARTH_RADIUS)) * RAD_TO_DEG
        rtemp = clonr + math.atan2(xur, -yur)
    else:
        latur = -1 * (HALFPI - 2*math.atan2(math.hypot(xur, yur), EARTH_RADIUS)) * RAD_TO_DEG
        rtemp = clonr + math.atan2(xur, yur)
    if rtemp > math.pi:
        lonur = (rtemp-TWOPI) * RAD_TO_DEG
    elif rtemp < -math.pi:
        lonur = (rtemp+TWOPI) * RAD_TO_DEG
    else:
        lonur = rtemp * RAD_TO_DEG
    return [latll, lonll, latur, lonur]


def createConic(nsflag, clon, lat1, lon1, dx, dy, unit, nx, ny, ang1, ang3):
    clonr = clon * DEG_TO_RAD
    latr = lat1 * DEG_TO_RAD
    lonr = lon1 * DEG_TO_RAD
    angle1 = HALFPI - (math.fabs(ang1) * DEG_TO_RAD)
    angle2 = HALFPI - (math.fabs(ang3) * DEG_TO_RAD)
    if ang1 == ang3:
        cc = math.cos(angle1)
    else:
        cc = (math.log(math.sin(angle2)) - math.log(math.sin(angle1))) \
            / (math.log(math.tan(angle2/2.0)) - math.log(math.tan(angle1/2.0)))
    er = EARTH_RADIUS / cc
    if nsflag == 'N':
        x1 = er * math.pow(math.tan((HALFPI-latr)/2.0), cc) * math.sin(cc*(lonr-clonr))
        y1 = -1.0 * er * math.pow(math.tan((HALFPI-latr)/2.0), cc) * math.cos(cc*(lonr-clonr))
    else:
        x1 = er * math.pow(math.tan((HALFPI+latr)/2.0), cc) * math.sin(cc*(lonr-clonr))
        y1 = er * math.pow(math.tan((HALFPI+latr)/2.0), cc) * math.cos(cc*(lonr-clonr))
    alpha = math.pow(math.tan(angle1/2.0), cc) / math.sin(angle1)
    if unit == 'm':
        x2 = x1 + (nx-1) * alpha * dx
        y2 = y1 + (ny-1) * alpha * dy
    else:
        x2 = x1 + (nx-1) * alpha * (dx*1000.0)
        y2 = y1 + (ny-1) * alpha * (dy*1000.0)
    xll = min(x1, x2)
    yll = min(y1, y2)
    xur = max(x1, x2)
    yur = max(y1, y2)
    if nsflag == 'N':
        latll = (HALFPI - 2.0 * math.atan(math.pow(math.hypot(xll, yll)/er, (1/cc)))) * RAD_TO_DEG
        rtemp = math.atan2(xll, -yll) * (1/cc) + clonr
    else:
        latll = (-1.0 * (HALFPI - 2.0 * math.atan(math.pow(math.hypot(xll, yll)/er, (1/cc))))) * RAD_TO_DEG
        rtemp = math.atan2(xll, yll) * (1/cc) + clonr
    if rtemp > math.pi:
        lonll = (rtemp-TWOPI) * RAD_TO_DEG
    elif rtemp < -math.pi:
        lonll = (rtemp+TWOPI) * RAD_TO_DEG
    else:
        lonll = rtemp * RAD_TO_DEG
    if nsflag == 'N':
        latur = (HALFPI - 2.0 * math.atan(math.pow(math.hypot(xur, yur)/er, (1/cc)))) * RAD_TO_DEG
        rtemp = math.atan2(xur, -yur) * (1/cc) + clonr
    else:
        latur = (-1.0 * (HALFPI - 2.0 * math.atan(math.pow(math.hypot(xur, yur)/er, (1/cc))))) * RAD_TO_DEG
        rtemp = math.atan2(xur, yur) * (1/cc) + clonr
    if rtemp > math.pi:
        lonur = (rtemp-TWOPI) * RAD_TO_DEG
    elif rtemp < -math.pi:
        lonur = (rtemp+TWOPI) * RAD_TO_DEG
    else:
        lonur = rtemp * RAD_TO_DEG
    return [latll, lonll, latur, lonur]


class StringConverter(Union):
    _fields_ = [("char", c_char*4), ("int", c_int), ("float", c_float)]


class GridNavRetriever:

    def __init__(self, server, pluginName, modelId, arrayLen):
        self.pluginName = pluginName
        self.modelId = modelId
        self.arrayLen = arrayLen
        self.host = os.getenv("DEFAULT_HOST", server)
        self.port = os.getenv("DEFAULT_PORT", "9581")
        self.client = ThriftClient.ThriftClient(self.host, self.port)

    def getNavBlk(self):
        """ Sends the ThriftClient request and returns the navigation block. """
        req = GetGridNavRequest()
        req.setPluginName(self.pluginName)
        req.setModelId(self.modelId)
        resp = self.client.sendRequest(req)

        # Take care of bytestring encodings in python3
        for i, rec in enumerate(resp):
            resp[i] = {
                key.decode() if isinstance(key, bytes) else key:
                val.decode() if isinstance(val, bytes) else val
                for key, val in rec.items()
            }

        nav = []
        for record in resp:
            unit = record['spacingunit']
            sk = record['spatialkey']
            skarr = sk.split('/')
            nx = float(skarr[1])
            ny = float(skarr[2])
            dx = float(skarr[3])
            dy = float(skarr[4])

            sc = StringConverter()
            if record['projtype'] == 'LatLon':
                sc.char = b'CED '  # c_char arrays take bytes in Python 3
                gemproj = 2.0
                ang1 = 0.0
                ang2 = 0.0
                ang3 = 0.0
                lllat = float(record['lowerleftlat'])
                lllon = float(record['lowerleftlon'])
                urlat = lllat + (dy * (ny-1))
                urlon = lllon + (dx * (nx-1))
                if lllon > 180:
                    lllon -= 360.0
                if urlon > 180:
                    urlon -= 360.0
            if record['projtype'] == 'Polar Stereographic':
                sc.char = b'STR '
                gemproj = 2.0
                if float(record['standard_parallel_1']) < 0.0:
                    ang1 = -90.0
                    nsflag = 'S'
                else:
                    ang1 = 90.0
                    nsflag = 'N'
                ang2 = float(record['central_meridian'])
                ang3 = 0.0
                lat1 = float(record['lowerleftlat'])
                lon1 = float(record['lowerleftlon'])
                coords = createPolar(nsflag, ang2, lat1, lon1, dx, dy, unit, nx, ny)
                lllat = coords[0]
                lllon = coords[1]
                urlat = coords[2]
                urlon = coords[3]
            if record['projtype'] == 'Lambert Conformal':
                sc.char = b'LCC '
                gemproj = 2.0
                ang1 = float(skarr[7])
                ang2 = float(record['central_meridian'])
                ang3 = float(skarr[8])
                if ang1 < 0.0:
                    nsflag = 'S'
                else:
                    nsflag = 'N'
                lat1 = float(record['lowerleftlat'])
                lon1 = float(record['lowerleftlon'])
                coords = createConic(nsflag, ang2, lat1, lon1, dx, dy, unit, nx, ny, ang1, ang3)
                lllat = coords[0]
                lllon = coords[1]
                urlat = coords[2]
                urlon = coords[3]

            # Fill up the output array of floats
            nav.append(gemproj)
            nav.append(sc.float)
            nav.append(1.0)
            nav.append(1.0)
            nav.append(nx)
            nav.append(ny)
            nav.append(lllat)
            nav.append(lllon)
            nav.append(urlat)
            nav.append(urlon)
            nav.append(ang1)
            nav.append(ang2)
            nav.append(ang3)

        for i in range(13, int(self.arrayLen)):
            nav.append(0.0)
        return nav

    def getAnlBlk(self):
        anl = []
        # Type
        anl.append(2.0)
        # Delta
        anl.append(1.0)
        # Extend area
        anl.append(0.0)
        anl.append(0.0)
        anl.append(0.0)
        anl.append(0.0)
        # Grid area
        anl.append(-90.0)
        anl.append(-180.0)
        anl.append(90.0)
        anl.append(180.0)
        # Data area
        anl.append(-90.0)
        anl.append(-180.0)
        anl.append(90.0)
        anl.append(180.0)
        for i in range(18, int(self.arrayLen)):
            anl.append(0.0)
        return anl


def getnavb(server, table, model, arrlen):
    gnr = GridNavRetriever(server, table, model, arrlen)
    return gnr.getNavBlk()


def getanlb(server, table, model, arrlen):
    gnr = GridNavRetriever(server, table, model, arrlen)
    return gnr.getAnlBlk()


# This is the standard boilerplate that runs this script as a main
if __name__ == '__main__':
    # Run Test
    srv = 'edex-cloud.unidata.ucar.edu'
    tbl = 'grid_info'
    mdl = 'NAM40'
    navlen = '256'
    print(getnavb(srv, tbl, mdl, navlen))
    anllen = '128'
    print(getanlb(srv, tbl, mdl, anllen))
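A quick standalone sanity check of the projection helper above (the grid numbers are illustrative, roughly a CONUS-scale northern polar-stereographic grid):

# Lower-left and upper-right corners for a 93x65 grid with 25 km spacing,
# centered on 105W, first point at 40N/105W
print(createPolar('N', -105.0, 40.0, -105.0, 25.0, 25.0, 'km', 93, 65))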

@@ -1,144 +0,0 @@
import os
from datetime import datetime
from awips import ThriftClient
from dynamicserialize.dstypes.com.raytheon.uf.common.time import DataTime
from dynamicserialize.dstypes.com.raytheon.uf.common.time import TimeRange
from dynamicserialize.dstypes.gov.noaa.nws.ncep.common.dataplugin.gempak.request import StationDataRequest
class StationDataRetriever:
    """ Retrieves all data for a requested station and time """

    def __init__(self, server, pluginName, stationId, refTime, parmList, partNumber):
        self.pluginName = pluginName
        self.stationId = stationId
        self.refTime = refTime
        self.parmList = parmList
        self.partNumber = partNumber
        self.host = os.getenv("DEFAULT_HOST", server)
        self.port = os.getenv("DEFAULT_PORT", "9581")
        self.client = ThriftClient.ThriftClient(self.host, self.port)

    def getStationData(self):
        """ Sends the ThriftClient request and returns the station data. """
        dtime = datetime.strptime(self.refTime, "%y%m%d/%H%M")
        trange = TimeRange()
        trange.setStart(dtime)
        trange.setEnd(dtime)
        dataTime = DataTime(refTime=dtime, validPeriod=trange)
        req = StationDataRequest()
        req.setPluginName(self.pluginName)
        req.setStationId(self.stationId)
        req.setRefTime(dataTime)
        req.setParmList(self.parmList)
        req.setPartNumber(self.partNumber)
        resp = self.client.sendRequest(req)

        # Take care of bytestring encodings in python3
        for i, rec in enumerate(resp):
            resp[i] = {
                key.decode() if isinstance(key, bytes) else key:
                val.decode() if isinstance(val, bytes) else val
                for key, val in rec.items()
            }

        return resp


def getstationdata(server, table, stationId, refTime, parmList, partNumber):
    sr = StationDataRetriever(server, table, stationId, refTime, parmList, partNumber)
    lcldict = sr.getStationData()
    rdata = []
    for substr in parmList.split(','):
        if substr in lcldict:
            rdata.append(lcldict[substr])
        else:
            rdata.append(-9999.00)
    return rdata


def getleveldata(server, table, stationId, refTime, parmList, partNumber):
    sr = StationDataRetriever(server, table, stationId, refTime, parmList, partNumber)
    lcldict = sr.getStationData()
    numset = [1]
    for substr in parmList.split(','):
        if substr in lcldict:
            pnum = len(lcldict[substr]) - 1
            while pnum >= 0:
                if lcldict[substr][pnum] != -9999.00:
                    break
                pnum = pnum - 1
            numset.append(pnum)
    rdata = []
    for jj in range(max(numset)):
        for substr in parmList.split(','):
            if substr in lcldict:
                if lcldict[substr][jj] == -9999998.0:
                    rdata.append(-9999.0)
                else:
                    rdata.append(lcldict[substr][jj])
            else:
                rdata.append(-9999.0)
    return rdata


def getstationtext(server, table, stationId, refTime, parmList, partNumber):
    sr = StationDataRetriever(server, table, stationId, refTime, parmList, partNumber)
    lcldict = sr.getStationData()
    if parmList in lcldict:
        return lcldict[parmList]
    return ' '


def getheader(server, table, stationId, refTime, parmList, partNumber):
    return [0]


# This is the standard boilerplate that runs this script as a main
if __name__ == '__main__':
    # Run Test
    srv = 'edex-cloud.unidata.ucar.edu'
    key = '-'

    print('OBS - METAR')
    tbl = 'obs'
    stn = 'KLGA'
    time = '130823/1700'
    parm = 'seaLevelPress,temperature,dewpoint,windSpeed,windDir'
    part = '0'
    print(getheader(srv, tbl, stn, time, parm, part))
    print(getstationdata(srv, tbl, stn, time, parm, part))
    parm = 'rawMETAR'
    print(getstationtext(srv, tbl, stn, time, parm, part))

    print('SFCOBS - SYNOP')
    tbl = 'sfcobs'
    stn = '72403'
    time = '130823/1800'
    parm = 'seaLevelPress,temperature,dewpoint,windSpeed,windDir'
    part = '0'
    print(getheader(srv, tbl, stn, time, parm, part))
    print(getstationdata(srv, tbl, stn, time, parm, part))
    parm = 'rawReport'
    print(getstationtext(srv, tbl, stn, time, parm, part))

    print('UAIR')
    tbl = 'bufrua'
    stn = '72469'
    time = '130823/1200'
    parm = 'prMan,htMan,tpMan,tdMan,wdMan,wsMan'
    part = '2020'
    print(getleveldata(srv, tbl, stn, time, parm, part))
    parm = 'prSigT,tpSigT,tdSigT'
    part = '2022'
    print(getleveldata(srv, tbl, stn, time, parm, part))
    parm = 'htSigW,wsSigW,wdSigW'
    part = '2021'
    print(getleveldata(srv, tbl, stn, time, parm, part))

@@ -1,93 +0,0 @@
import os
import sys
from awips import ThriftClient
from dynamicserialize.dstypes.gov.noaa.nws.ncep.common.dataplugin.gempak.request import GetStationsRequest
class StationRetriever:
    """ Retrieves all requested stations """

    def __init__(self, server, pluginName):
        self.pluginName = pluginName
        self.outdir = os.getcwd()
        self.host = os.getenv("DEFAULT_HOST", server)
        self.port = os.getenv("DEFAULT_PORT", "9581")
        self.client = ThriftClient.ThriftClient(self.host, self.port)

    def getStations(self):
        """ Sends the ThriftClient request and returns the station list. """
        req = GetStationsRequest()
        req.setPluginName(self.pluginName)
        resp = self.client.sendRequest(req)

        # Take care of bytestring encodings in python3
        for i, rec in enumerate(resp):
            resp[i] = {
                key.decode() if isinstance(key, bytes) else key:
                val.decode() if isinstance(val, bytes) else val
                for key, val in rec.items()
            }

        stns = []
        for item in resp:
            stationstr = '{:<8}'.format(item.getStationId())
            if sys.byteorder == 'little':
                stnid = (ord(stationstr[3]) << 24) + (ord(stationstr[2]) << 16) + \
                    (ord(stationstr[1]) << 8) + ord(stationstr[0])
                stnid2 = (ord(stationstr[7]) << 24) + (ord(stationstr[6]) << 16) + \
                    (ord(stationstr[5]) << 8) + ord(stationstr[4])
            else:
                stnid = (ord(stationstr[0]) << 24) + (ord(stationstr[1]) << 16) + \
                    (ord(stationstr[2]) << 8) + ord(stationstr[3])
                stnid2 = (ord(stationstr[4]) << 24) + (ord(stationstr[5]) << 16) + \
                    (ord(stationstr[6]) << 8) + ord(stationstr[7])

            if item.getState() is None:
                stationstr = '    '
            else:
                stationstr = '{:<4}'.format(item.getState())
            if sys.byteorder == 'little':
                state = (ord(stationstr[3]) << 24) + (ord(stationstr[2]) << 16) \
                    + (ord(stationstr[1]) << 8) + ord(stationstr[0])
            else:
                state = (ord(stationstr[0]) << 24) + (ord(stationstr[1]) << 16) \
                    + (ord(stationstr[2]) << 8) + ord(stationstr[3])

            stationstr = '{:<4}'.format(item.getCountry())
            if sys.byteorder == 'little':
                cntry = (ord(stationstr[3]) << 24) + (ord(stationstr[2]) << 16) \
                    + (ord(stationstr[1]) << 8) + ord(stationstr[0])
            else:
                cntry = (ord(stationstr[0]) << 24) + (ord(stationstr[1]) << 16) \
                    + (ord(stationstr[2]) << 8) + ord(stationstr[3])

            stns.append(9999)
            stns.append(stnid)
            stns.append(item.getWmoIndex())
            stns.append(int(item.getLatitude()*100))
            stns.append(int(item.getLongitude()*100))
            stns.append(int(item.getElevation()))
            stns.append(state)
            stns.append(cntry)
            stns.append(stnid2)
            stns.append(0)
        return stns


def getstations(server, table, key=None, dummy=None, dummy2=None):
    sr = StationRetriever(server, table)
    return sr.getStations()


# This is the standard boilerplate that runs this script as a main
if __name__ == '__main__':
    # Run Test
    srv = 'edex-cloud.unidata.ucar.edu'
    key = '-'
    print('OBS - METAR')
    tbl = 'obs'
    print(getstations(srv, tbl, key))
    print('SFCOBS - SYNOP')
    tbl = 'sfcobs'
    print(getstations(srv, tbl, key))

@@ -1,76 +0,0 @@
import os
from datetime import datetime
from awips import ThriftClient
from dynamicserialize.dstypes.java.util import GregorianCalendar
from dynamicserialize.dstypes.gov.noaa.nws.ncep.common.dataplugin.gempak.request import GetTimesRequest
class TimeRetriever:
    """ Retrieves all requested times """

    def __init__(self, server, pluginName, timeField):
        self.pluginName = pluginName
        self.timeField = timeField
        self.outdir = os.getcwd()
        self.host = os.getenv("DEFAULT_HOST", server)
        self.port = os.getenv("DEFAULT_PORT", "9581")
        self.client = ThriftClient.ThriftClient(self.host, self.port)

    def getTimes(self):
        """ Sends the ThriftClient request and returns the packed time list. """
        req = GetTimesRequest()
        req.setPluginName(self.pluginName)
        req.setTimeField(self.timeField)
        resp = self.client.sendRequest(req)

        # Take care of bytestring encodings in python3
        for i, rec in enumerate(resp):
            resp[i] = {
                key.decode() if isinstance(key, bytes) else key:
                val.decode() if isinstance(val, bytes) else val
                for key, val in rec.items()
            }

        timelist = []
        for item in resp.getTimes():
            if isinstance(item, GregorianCalendar):
                tstamp = item.getTimeInMillis()
            else:
                tstamp = item.getTime()
            time = datetime.utcfromtimestamp(tstamp/1000)
            timelist.append(time)
        timelist.sort(reverse=True)

        times = []
        for time in timelist:
            times.append(9999)
            times.append((time.year % 100) * 10000 + (time.month * 100) + time.day)
            times.append((time.hour * 100) + time.minute)
        # GEMPAK can only handle up to 200 times, which is 600 elements
        # in this array -- [9999, DATE, TIME] -- repeated
        return times[0:600]


def gettimes(server, table, key, dummy=None, dummy2=None):
    tr = TimeRetriever(server, table, key)
    return tr.getTimes()


# This is the standard boilerplate that runs this script as a main
if __name__ == '__main__':
    srv = 'edex-cloud.unidata.ucar.edu'
    print('OBS - METAR')
    tbl = 'obs'
    key = 'refHour'
    print(gettimes(srv, tbl, key))
    print('SFCOBS - SYNOP')
    tbl = 'sfcobs'
    key = 'refHour'
    print(gettimes(srv, tbl, key))
    print('BUFRUA')
    tbl = 'bufrua'
    key = 'dataTime.refTime'
    print(gettimes(srv, tbl, key))
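The packed [9999, DATE, TIME] triplets above use GEMPAK's YYMMDD/HHMM integer convention. A small hedged sketch decoding one triplet back to a datetime (unpack_gempak_time and the century assumption are illustrative, not part of the original module):

from datetime import datetime

def unpack_gempak_time(date_int, time_int, century=2000):
    # Decode GEMPAK DATE (YYMMDD) and TIME (HHMM) ints to a datetime
    yy, mmdd = divmod(date_int, 10000)
    mm, dd = divmod(mmdd, 100)
    hh, mi = divmod(time_int, 100)
    return datetime(century + yy, mm, dd, hh, mi)

assert unpack_gempak_time(130823, 1200) == datetime(2013, 8, 23, 12, 0)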

@@ -1,99 +0,0 @@
#!/usr/bin/env python
# Parse html tables from a given URL and output CSV.
# Note: To install a missing python module foo do "easy_install foo"
# (or the new way is "pip install foo" but you might have to do
# "easy_install pip" first)
from bs4 import BeautifulSoup  # BeautifulSoup4; the old BS3 import is Python 2 only
import scrape
import urllib.request, urllib.error, urllib.parse
import html.entities
import re
import sys
import unicodedata
# from http://stackoverflow.com/questions/1197981/convert-html-entities
def asciify2(s):
    matches = re.findall(r"&#\d+;", s)
    if len(matches) > 0:
        hits = set(matches)
        for hit in hits:
            name = hit[2:-1]
            try:
                entnum = int(name)
                s = s.replace(hit, chr(entnum))
            except ValueError:
                pass
    matches = re.findall(r"&\w+;", s)
    hits = set(matches)
    amp = "&amp;"
    if amp in hits:
        hits.remove(amp)
    for hit in hits:
        name = hit[1:-1]
        if name in html.entities.name2codepoint:
            s = s.replace(hit, "")
    s = s.replace(amp, "&")
    return s


def opensoup(url):
    request = urllib.request.Request(url)
    request.add_header("User-Agent", "Mozilla/5.0")
    # To mimic a real browser's user-agent string more exactly, if necessary:
    # Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.14)
    # Gecko/20080418 Ubuntu/7.10 (gutsy) Firefox/2.0.0.14
    pagefile = urllib.request.urlopen(request)
    soup = BeautifulSoup(pagefile)
    pagefile.close()
    return soup


def asciify(s):
    return unicodedata.normalize('NFKD', s).encode('ascii', 'ignore')


# remove extra whitespace, including stripping leading and trailing whitespace.
def condense(s):
    s = re.sub(r"\s+", " ", s, flags=re.DOTALL)  # pass DOTALL as a flag, not a count
    return s.strip()


def stripurl(s):
    s = re.sub(r"\<span\s+style\s*\=\s*\"display\:none[^\"]*\"[^\>]*\>[^\<]*\<\/span\>", "", s)
    s = re.sub(r"\&\#160\;", " ", s)
    return condense(re.sub(r"\<[^\>]*\>", " ", s))


# this gets rid of tags and condenses whitespace
def striptags(s):
    s = re.sub(r"\<span\s+style\s*\=\s*\"display\:none[^\"]*\"[^\>]*\>[^\<]*\<\/span\>", "", s)
    s = re.sub(r"\&\#160\;", " ", s)
    return condense(s)


def getUrlArgs(parseUrl):
    return re.search(r'grib2_table4-2-(\d+)-(\d+).shtml', parseUrl).groups()


if len(sys.argv) == 1:
    print("Usage: ", sys.argv[0], " url [n]")
    print(" (where n indicates which html table to parse)")
    exit(1)

url = sys.argv[1]
soup = opensoup(url)
tables = soup.findAll("table")
for table in tables:
    for r in table.findAll('tr'):
        rl = []
        for c in r.findAll(re.compile('td|th')):
            # renderContents() returns bytes in bs4
            rl.append(striptags(c.renderContents().decode('utf-8')))
        if len(rl) > 1 and "href" in rl[1]:
            print('! ' + stripurl(rl[1]))
            scrapeUrl = 'http://www.nco.ncep.noaa.gov/pmb/docs/grib2/grib2_table4-2-' + \
                getUrlArgs(rl[1])[0] + "-" + getUrlArgs(rl[1])[1] + '.shtml'
            scrape.run(scrapeUrl)

@@ -1,106 +0,0 @@
#!/usr/bin/env python
# Parse html tables from a given URL and output CSV.
# Note: To install a missing python module foo do "easy_install foo"
# (or the new way is "pip install foo" but you might have to do
# "easy_install pip" first)
from bs4 import BeautifulSoup  # BeautifulSoup4; the old BS3 import is Python 2 only
import urllib.request, urllib.error, urllib.parse
import html.entities
import re
import sys
import unicodedata
# from http://stackoverflow.com/questions/1197981/convert-html-entities
def asciify2(s):
    matches = re.findall(r"&#\d+;", s)
    if len(matches) > 0:
        hits = set(matches)
        for hit in hits:
            name = hit[2:-1]
            try:
                entnum = int(name)
                s = s.replace(hit, chr(entnum))
            except ValueError:
                pass
    matches = re.findall(r"&\w+;", s)
    hits = set(matches)
    amp = "&amp;"
    if amp in hits:
        hits.remove(amp)
    for hit in hits:
        name = hit[1:-1]
        if name in html.entities.name2codepoint:
            s = s.replace(hit, "")
    s = s.replace(amp, "&")
    return s


def opensoup(url):
    request = urllib.request.Request(url)
    request.add_header("User-Agent", "Mozilla/5.0")
    # To mimic a real browser's user-agent string more exactly, if necessary:
    # Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.14)
    # Gecko/20080418 Ubuntu/7.10 (gutsy) Firefox/2.0.0.14
    pagefile = urllib.request.urlopen(request)
    soup = BeautifulSoup(pagefile)
    pagefile.close()
    return soup


def asciify(s):
    return unicodedata.normalize('NFKD', s).encode('ascii', 'ignore')


# remove extra whitespace, including stripping leading and trailing whitespace.
def condense(s):
    s = re.sub(r"\s+", " ", s, flags=re.DOTALL)  # pass DOTALL as a flag, not a count
    return s.strip()


# this gets rid of tags and condenses whitespace
def striptags(s):
    s = re.sub(r"\<span\s+style\s*\=\s*\"display\:none[^\"]*\"[^\>]*\>[^\<]*\<\/span\>", "", s)
    s = re.sub(r"\&\#160\;", " ", s)
    return condense(re.sub(r"\<[^\>]*\>", " ", s))


if len(sys.argv) == 1:  # called with no arguments
    print("Usage: ", sys.argv[0], " url [n]")
    print(" (where n indicates which html table to parse)")
    exit(1)


def getUrlArgs(parseUrl):
    return re.search(r'grib2_table4-2-(\d+)-(\d+).shtml', parseUrl).groups()


def run(url):
    soup = opensoup(url)
    tables = soup.findAll("table")
    for table in tables:
        ct = 0
        for r in table.findAll('tr'):
            rl = []
            for c in r.findAll(re.compile('td|th')):
                # renderContents() returns bytes in bs4
                rl.append(striptags(c.renderContents().decode('utf-8')))
            if ct > 0:
                rl[0] = getUrlArgs(url)[0].zfill(3) + " " + \
                    getUrlArgs(url)[1].zfill(3) + " " + rl[0].zfill(3) + " 000"
                if len(rl) > 1:
                    if "Reserved" in rl[1]:
                        rl[0] = '!' + rl[0]
                    if "See Table" in rl[2] or "Code table" in rl[2]:
                        rl[2] = "cat"
                    rl[1] = rl[1][:32].ljust(32)
                    rl[2] = rl[2].ljust(20)
                    rl[3] = rl[3].ljust(12) + " 0 -9999.00"
            if ct:
                print(" ".join(rl))
            ct += 1


if __name__ == '__main__':
    run(sys.argv[1])

File diff suppressed because it is too large

@@ -1,59 +0,0 @@
#
# Test DAF support for profiler data
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 01/19/16 4795 mapeters Initial Creation.
# 04/11/16 5548 tgurney Cleanup
# 04/18/16 5548 tgurney More cleanup
#
#
from __future__ import print_function
from awips.dataaccess import DataAccessLayer as DAL
from awips.test.dafTests import baseDafTestCase
class ProfilerTestCase(baseDafTestCase.DafTestCase):
    """Test DAF support for profiler data"""

    datatype = "profiler"

    def testGetAvailableParameters(self):
        req = DAL.newDataRequest(self.datatype)
        self.runParametersTest(req)

    def testGetAvailableLocations(self):
        req = DAL.newDataRequest(self.datatype)
        self.runLocationsTest(req)

    def testGetAvailableTimes(self):
        req = DAL.newDataRequest(self.datatype)
        self.runTimesTest(req)

    def testGetGeometryData(self):
        req = DAL.newDataRequest(self.datatype)
        req.setParameters("temperature", "pressure", "uComponent", "vComponent")
        print("Testing getGeometryData()")
        geomData = DAL.getGeometryData(req)
        self.assertIsNotNone(geomData)
        print("Number of geometry records: " + str(len(geomData)))
        print("Sample geometry data:")
        for record in geomData[:self.sampleDataLimit]:
            print("level:", record.getLevel(), end="")
            # One dimensional parameters are reported on the 0.0UNKNOWN level.
            # 2D parameters are reported on MB levels from pressure.
            if record.getLevel() == "0.0UNKNOWN":
                print(" temperature=" + record.getString("temperature") + record.getUnit("temperature"), end="")
                print(" pressure=" + record.getString("pressure") + record.getUnit("pressure"), end="")
            else:
                print(" uComponent=" + record.getString("uComponent") + record.getUnit("uComponent"), end="")
                print(" vComponent=" + record.getString("vComponent") + record.getUnit("vComponent"), end="")
            print(" geometry:", record.getGeometry())
        print("getGeometryData() complete\n\n")

@@ -1,218 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
NBCONVERT = ipython nbconvert
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help
help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  applehelp  to make an Apple Help Book"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
	@echo "  coverage   to run coverage check of the documentation (if enabled)"

.PHONY: clean
clean:
	rm -rf $(BUILDDIR)/* source/examples/generated/*

.PHONY: html
html:
	make clean
	$(SPHINXBUILD) -vb html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

.PHONY: dirhtml
dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

.PHONY: singlehtml
singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

.PHONY: pickle
pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

.PHONY: json
json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

.PHONY: htmlhelp
htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

.PHONY: qthelp
qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/python-awips.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/python-awips.qhc"

.PHONY: applehelp
applehelp:
	$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
	@echo
	@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
	@echo "N.B. You won't be able to view it unless you put it in" \
	      "~/Library/Documentation/Help or install it in your application" \
	      "bundle."

.PHONY: devhelp
devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/python-awips"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/python-awips"
	@echo "# devhelp"

.PHONY: epub
epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

.PHONY: latex
latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

.PHONY: latexpdf
latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

.PHONY: latexpdfja
latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

.PHONY: text
text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

.PHONY: man
man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

.PHONY: texinfo
texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

.PHONY: info
info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

.PHONY: gettext
gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

.PHONY: changes
changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

.PHONY: linkcheck
linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

.PHONY: doctest
doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

.PHONY: coverage
coverage:
	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
	@echo "Testing of coverage in the sources finished, look at the " \
	      "results in $(BUILDDIR)/coverage/python.txt."

.PHONY: xml
xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

.PHONY: pseudoxml
pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

@@ -1,263 +0,0 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source
set I18NSPHINXOPTS=%SPHINXOPTS% source
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
echo. coverage to run coverage check of the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
REM Check if sphinx-build is available and fallback to Python version if any
%SPHINXBUILD% 1>NUL 2>NUL
if errorlevel 9009 goto sphinx_python
goto sphinx_ok
:sphinx_python
set SPHINXBUILD=python -m sphinx.__init__
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
:sphinx_ok
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\python-awips.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\python-awips.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "coverage" (
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
if errorlevel 1 exit /b 1
echo.
echo.Testing of coverage in the sources finished, look at the ^
results in %BUILDDIR%/coverage/python.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end

View file

@ -1,6 +0,0 @@
sphinx>=1.3
nbconvert>=4.1
enum34
jupyter
shapely
numpy

View file

@ -1,181 +0,0 @@
===================
About Unidata AWIPS
===================
AWIPS is a weather forecasting display and analysis package
developed by the National Weather Service and Raytheon. AWIPS is a
Java application consisting of a data-rendering client (CAVE, which runs
on Red Hat/CentOS Linux and Mac OS X) and a backend data server (EDEX,
which runs only on Linux).
AWIPS takes a unified approach to data ingest, and most data types
follow a standard path through the system. At a high level, data flow
describes the path taken by a piece of data from its source to its
display by a client system. This path starts with data requested and
stored by an `LDM <#ldm>`_ client and includes the decoding of the data
and storing of decoded data in a form readable and displayable by the
end user.
The AWIPS ingest and request processes are a highly distributed
system, and the messaging broker `Qpid <#qpid>`_ is used for
inter-process communication.
.. figure:: http://www.unidata.ucar.edu/software/awips2/images/awips2_coms.png
:align: center
:alt: image
image
License
-------
The AWIPS software package released by the Unidata Program Center is considered to
be in the public domain since it is released without proprietary code. As such, export
controls do not apply. Any person is free to download, modify, distribute, or share
Unidata AWIPS in any form. Entities who modify or re-distribute Unidata AWIPS
software are encouraged to conduct their own FOSS/COTS entitlement/license review
to ensure that they remain compatible with the associated terms (see
FOSS_COTS_License.pdf at `https://github.com/Unidata/awips2 <https://github.com/Unidata/awips2>`_).
About AWIPS
-----------
The primary AWIPS application for data ingest, processing, and
storage is the Environmental Data EXchange (**EDEX**) server; the
primary AWIPS application for visualization/data manipulation is the
Common AWIPS Visualization Environment (**CAVE**) client, which is
typically installed on a workstation separate from other AWIPS
components.
In addition to programs developed specifically for AWIPS, AWIPS uses
several commercial off-the-shelf (COTS) and Free or Open Source software
(FOSS) products to assist in its operation. The following components,
working together and communicating, compose the entire AWIPS system.
EDEX
----
The main server for AWIPS. Qpid sends alerts to EDEX when data stored
by the LDM is ready for processing. These Qpid messages include file
header information which allows EDEX to determine the appropriate data
decoder to use. The default ingest server (simply named ingest) handles
all data ingest other than grib messages, which are processed by a
separate ingestGrib server. After decoding, EDEX writes metadata to the
database via Postgres and saves the processed data in HDF5 via PyPIES. A
third EDEX server, request, feeds requested data to CAVE clients. EDEX
ingest and request servers are started and stopped with the commands
``edex start`` and ``edex stop``, which run the system script
``/etc/rc.d/init.d/edex_camel``
CAVE
----
Common AWIPS Visualization Environment. The data rendering and
visualization tool for AWIPS. CAVE consists of a number of different
data display configurations called perspectives. Perspectives used in
operational forecasting environments include **D2D** (Display
Two-Dimensional), **GFE** (Graphical Forecast Editor), and **NCP**
(National Centers Perspective). CAVE is started with the command
``/awips2/cave/cave.sh`` or ``cave.sh``
.. figure:: http://www.unidata.ucar.edu/software/awips2/images/Unidata_AWIPS2_CAVE.png
:align: center
:alt: CAVE
CAVE
AlertViz
--------
**AlertViz** is a modernized version of an AWIPS I application, designed
to present various notifications, error messages, and alarms to the user
(forecaster). AlertViz can be executed either independently or from CAVE
itself. In the Unidata CAVE client, AlertViz is run within CAVE and is
not required to be run separately. The toolbar is also **hidden from
view** and is accessed by right-clicking the desktop taskbar icon.
LDM
---
`http://www.unidata.ucar.edu/software/ldm/ <http://www.unidata.ucar.edu/software/ldm/>`_
The **LDM** (Local Data Manager), developed and supported by Unidata, is
a suite of client and server programs designed for data distribution,
and is the fundamental component comprising the Unidata Internet Data
Distribution (IDD) system. In AWIPS, the LDM provides data feeds for
grids, surface observations, upper-air profiles, satellite and radar
imagery and various other meteorological datasets. The LDM writes data
directly to file and alerts EDEX via Qpid when a file is available for
processing. The LDM is started and stopped with the commands
``edex start`` and ``edex stop``, which runs the commands
``service edex_ldm start`` and ``service edex_ldm stop``
edexBridge
----------
edexBridge, invoked in the LDM configuration file
``/awips2/ldm/etc/ldmd.conf``, is used by the LDM to post "data
available" messaged to Qpid, which alerts the EDEX Ingest server that a
file is ready for processing.
Qpid
----
`http://qpid.apache.org <http://qpid.apache.org>`_
**Apache Qpid**, the Queue Processor Interface Daemon, is the messaging
system used by AWIPS to facilitate communication between services.
When the LDM receives a data file to be processed, it employs
**edexBridge** to send EDEX ingest servers a message via Qpid. When EDEX
has finished decoding the file, it sends CAVE a message via Qpid that
data are available for display or further processing. Qpid is started
and stopped by ``edex start`` and ``edex stop``, and is controlled by
the system script ``/etc/rc.d/init.d/qpidd``
PostgreSQL
----------
`http://www.postgresql.org <http://www.postgresql.org>`_
**PostgreSQL**, known simply as Postgres, is a relational database
management system (DBMS) which handles the storage and retrieval of
metadata, database tables and some decoded data. The storage and reading
of EDEX metadata is handled by the Postgres DBMS. Users may query the
metadata tables by using the terminal-based front-end for Postgres
called **psql**. Postgres is started and stopped by ``edex start`` and
``edex stop``, and is controlled by the system script
``/etc/rc.d/init.d/edex_postgres``
HDF5
----
`http://www.hdfgroup.org/HDF5/ <http://www.hdfgroup.org/HDF5/>`_
**Hierarchical Data Format (v.5)** is
the primary data storage format used by AWIPS for processed grids,
satellite and radar imagery and other products. Similar to netCDF,
developed and supported by Unidata, HDF5 supports multiple types of data
within a single file. For example, a single HDF5 file of radar data may
contain multiple volume scans of base reflectivity and base velocity as
well as derived products such as composite reflectivity. The file may
also contain data from multiple radars. HDF5 is stored in
``/awips2/edex/data/hdf5/``
PyPIES (httpd-pypies)
---------------------
**PyPIES**, Python Process Isolated Enhanced Storage, was created for
AWIPS to isolate the management of HDF5 Processed Data Storage from
the EDEX processes. PyPIES manages access, i.e., reads and writes, of
data in the HDF5 files. In a sense, PyPIES provides functionality
similar to a DBMS (i.e., PostgreSQL for metadata); all data being written
to an HDF5 file is sent to PyPIES, and requests for data stored in HDF5
are processed by PyPIES.
PyPIES is implemented in two parts: 1. The PyPIES manager is a Python
application that runs as part of an Apache HTTP server, and handles
requests to store and retrieve data. 2. The PyPIES logger is a Python
process that coordinates logging. PyPIES is started and stopped by
``edex start`` and ``edex stop``, and is controlled by the system script
``/etc/rc.d/init.d/httpd-pypies``

View file

@ -1,7 +0,0 @@
=================
CombinedTimeQuery
=================
.. automodule:: awips.dataaccess.CombinedTimeQuery
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
===============
DataAccessLayer
===============
.. automodule:: awips.dataaccess.DataAccessLayer
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
=================
DateTimeConverter
=================
.. automodule:: awips.DateTimeConverter
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
===============================
IDataRequest (newDataRequest())
===============================
.. autoclass:: awips.dataaccess.IDataRequest
:members:
:special-members:

View file

@ -1,7 +0,0 @@
=========
IFPClient
=========
.. automodule:: awips.gfe.IFPClient
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
=============
ModelSounding
=============
.. automodule:: awips.dataaccess.ModelSounding
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
======================
PyData
======================
.. automodule:: awips.dataaccess.PyData
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
======================
PyGeometryData
======================
.. automodule:: awips.dataaccess.PyGeometryData
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
======================
PyGridData
======================
.. automodule:: awips.dataaccess.PyGridData
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
======================
RadarCommon
======================
.. automodule:: awips.RadarCommon
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
======================
ThriftClient
======================
.. automodule:: awips.ThriftClient
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
======================
ThriftClientRouter
======================
.. automodule:: awips.dataaccess.ThriftClientRouter
:members:
:undoc-members:

View file

@ -1,7 +0,0 @@
======================
TimeUtil
======================
.. automodule:: awips.TimeUtil
:members:
:undoc-members:

View file

@ -1,22 +0,0 @@
#################
API Documentation
#################
.. toctree::
:maxdepth: 2
DataAccessLayer
IDataRequest
PyData
PyGridData
PyGeometryData
ModelSounding
ThriftClientRouter
ThriftClient
TimeUtil
RadarCommon
IFPClient
DateTimeConverter
CombinedTimeQuery
* :ref:`genindex`

View file

@ -1,303 +0,0 @@
# -*- coding: utf-8 -*-
#
# python-awips documentation build configuration file, created by
# sphinx-quickstart on Tue Mar 15 15:59:23 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('../..'))
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.viewcode',
'sphinx.ext.autosectionlabel',
'notebook_gen_sphinxext'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'python-awips'
copyright = '2018, Unidata'
author = 'Unidata'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '18.1.7'
# The full version, including alpha/beta/rc tags.
release = '18.1.7'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#html_theme = 'alabaster'
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'python-awipsdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'python-awips.tex', 'python-awips Documentation',
'Unidata', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'python-awips', 'python-awips Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'python-awips', 'python-awips Documentation',
author, 'python-awips', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# Set up mapping for other projects' docs
intersphinx_mapping = {
'matplotlib': ('http://matplotlib.org/', None),
'numpy': ('http://docs.scipy.org/doc/numpy/', None),
'scipy': ('http://docs.scipy.org/doc/scipy/reference/', None),
'pint': ('http://pint.readthedocs.org/en/stable/', None),
'python': ('http://docs.python.org', None)
}

View file

@ -1,129 +0,0 @@
====================
Available Data Types
====================
.. _awips.dataaccess.DataAccessLayer.getGeometryData(request, times=[]): api/DataAccessLayer.html#awips.dataaccess.DataAccessLayer.getGeometryData
.. _awips.dataaccess.DataAccessLayer.getGridData(request, times=[]): api/DataAccessLayer.html#awips.dataaccess.DataAccessLayer.getGridData
.. _RadarCommon.get_hdf5_data(idra): api/RadarCommon.html
satellite
---------
- 2-D NumPy Array
- returned by: `awips.dataaccess.DataAccessLayer.getGridData(request, times=[])`_
- example::
# Construct a full satellite product tree
DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")
request = DataAccessLayer.newDataRequest("satellite")
creatingEntities = DataAccessLayer.getIdentifierValues(request, "creatingEntity")
for entity in creatingEntities:
    print(entity)
    request = DataAccessLayer.newDataRequest("satellite")
    request.addIdentifier("creatingEntity", entity)
    availableSectors = DataAccessLayer.getAvailableLocationNames(request)
    availableSectors.sort()
    for sector in availableSectors:
        print(" - " + sector)
        request.setLocationNames(sector)
        availableProducts = DataAccessLayer.getAvailableParameters(request)
        availableProducts.sort()
        for product in availableProducts:
            print(" - " + product)
---
binlightning
------------
- Shapely Point::
POINT (-65.65293884277344 -16.94915580749512)
- returned by: `awips.dataaccess.DataAccessLayer.getGeometryData(request, times=[])`_
- example (GLM)::
request = DataAccessLayer.newDataRequest("binlightning")
request.addIdentifier("source", "GLMgr")
request.setParameters("intensity")
times = DataAccessLayer.getAvailableTimes(request)
response = DataAccessLayer.getGeometryData(request, times[-10:-1])
for ob in response:
    geom = ob.getGeometry()
---
grid
----
- 2-D NumPy Array
- returned by: `awips.dataaccess.DataAccessLayer.getGridData(request, times=[])`_
- example::
request = DataAccessLayer.newDataRequest()
request.setDatatype("grid")
request.setLocationNames("RAP13")
request.setParameters("T")
request.setLevels("2.0FHAG")
cycles = DataAccessLayer.getAvailableTimes(request, True)
times = DataAccessLayer.getAvailableTimes(request)
fcstRun = DataAccessLayer.getForecastRun(cycles[-1], times)
response = DataAccessLayer.getGridData(request, [fcstRun[-1]])
for grid in response:
    data = grid.getRawData()
    lons, lats = grid.getLatLonCoords()
---
warning
-------
- Shapely MultiPolygon, Polygon::
MULTIPOLYGON ((-92.092348410 46.782322971, ..., -92.092348410 46.782322971),
(-90.948581075 46.992865960, ..., -90.948581075 46.992865960),
...
(-92.274543999 46.652773000, ..., -92.280511999 46.656933000),
(-92.285491999 46.660741000, ..., -92.285491999 46.660741000))
- returned by: `awips.dataaccess.DataAccessLayer.getGeometryData(request, times=[])`_
- example::
request = DataAccessLayer.newDataRequest()
request.setDatatype("warning")
request.setParameters('phensig')
times = DataAccessLayer.getAvailableTimes(request)
response = DataAccessLayer.getGeometryData(request, times[-50:-1])
for ob in response:
    poly = ob.getGeometry()
    site = ob.getLocationName()
    pd = ob.getDataTime().getValidPeriod()
    ref = ob.getDataTime().getRefTime()
---
radar
-----
- 2-D NumPy Array
- returned by: `awips.dataaccess.DataAccessLayer.getGridData(request, times=[])`_
- also returned by: `RadarCommon.get_hdf5_data(idra)`_
- example::
request = DataAccessLayer.newDataRequest("radar")
request.setLocationNames("kmhx")
request.setParameters("Digital Hybrid Scan Refl")
availableLevels = DataAccessLayer.getAvailableLevels(request)
times = DataAccessLayer.getAvailableTimes(request)
response = DataAccessLayer.getGridData(request, [times[-1]])
for image in response:
    data = image.getRawData()
    lons, lats = image.getLatLonCoords()

View file

@ -1,654 +0,0 @@
Development Guide
=================
The Data Access Framework allows developers to retrieve different types
of data without having dependencies on those types of data. It provides
a single, unified data type that can be customized by individual
implementing plug-ins to provide full functionality pertinent to each
data type.
Writing a New Factory
---------------------
Factories will most often be written for a data plugin, but should always
live in a common plug-in. This allows for clean dependencies
from both CAVE and EDEX.
A new plug-in's data access class must implement IDataFactory. For ease
of use, abstract classes have been created to combine similar methods.
Data factories do not have to implement both types of data (grid and
geometry). They can if they choose, but if they choose not to, they
should do the following:
::
throw new UnsupportedOutputTypeException(request.getDatatype(), "grid");
This lets the code know that grid type is not supported for this data
factory. Depending on where the data is coming from, helpers have been
written to make writing a new data type factory easier. For example,
PluginDataObjects can use AbstractDataPluginFactory as a start and not
have to create everything from scratch.
Each data type is allowed to implement retrieval in any manner that is
felt necessary. The power of the framework means that the code
retrieving data does not have to know anything of the underlying
retrieval methods, only that it is getting data in a certain manner. To
see some examples of ways to retrieve data, reference
**SatelliteGridFactory** and **RadarGridFactory**.
Methods required for implementation:
**public DataTime[] getAvailableTimes(IDataRequest request)**
- This method returns an array of DataTime objects corresponding to
what times are available for the data being retrieved, based on the
parameters and identifiers being passed in.
**public DataTime[] getAvailableTimes(IDataRequest request, BinOffset
binOffset)**
- This method returns available times as above, only with a bin offset
applied.
Note: both of the preceding methods can throw TimeAgnosticDataException
if times do not apply to the data type.
**public IGridData[] getGridData(IDataRequest request,
DataTime...times)**
- This method returns IGridData objects (an array) based on the request
and times to request for. There can be multiple times or a single
time.
**public IGridData[] getGridData(IDataRequest request, TimeRange
range)**
- Similar to the preceding method, this returns IGridData objects based
on a range of times.
**public IGeometryData[] getGeometryData(IDataRequest request, DataTime
times)**
- This method returns IGeometryData objects based on a request and
times.
**public IGeometryData[] getGeometryData(IDataRequest request, TimeRange
range)**
- Like the preceding method, this method returns IGeometryData objects
based on a range of times.
**public String[] getAvailableLocationNames(IDataRequest request)**
- This method returns location names that match the request. If this
does not apply to the data type, an IncompatibleRequestException
should be thrown.
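From the python side, these factory methods are what ultimately back the
corresponding Data Access Layer calls. As a rough sketch of that mapping
(the radar request below reuses the illustrative example from the
Available Data Types page):

::

    from awips.dataaccess import DataAccessLayer

    request = DataAccessLayer.newDataRequest("radar")
    request.setLocationNames("kmhx")
    request.setParameters("Digital Hybrid Scan Refl")

    # Each call is delegated to the factory registered for "radar":
    times = DataAccessLayer.getAvailableTimes(request)          # getAvailableTimes()
    names = DataAccessLayer.getAvailableLocationNames(request)  # getAvailableLocationNames()
    grids = DataAccessLayer.getGridData(request, [times[-1]])   # getGridData()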
Registering the Factory with the Framework
------------------------------------------
The following needs to be added in a spring file in the plug-in that
contains the new factory:
::
<bean id="radarGridFactory"
class="com.raytheon.uf.common.dataplugin.radar.dataaccess.RadarGridFactory" />
<bean factory-bean="dataAccessRegistry" factorymethod="register">
<constructor-arg value="radar"/>
<constructor-arg ref="radarGridFactory"/>
</bean>
This takes the RadarGridFactory and registers it with the registry and
allows it to be used any time the code makes a request for the data type
“radar.”
Retrieving Data Using the Factory
---------------------------------
For ease of use and flexibility, there are multiple interfaces into
the Data Access Layer. Currently, there is a Python implementation and a
Java implementation, which have very similar method calls and work in a
similar manner. Plug-ins that want to use the data access framework to
retrieve data should include **com.raytheon.uf.common.dataaccess** as a
Required Bundle in their MANIFEST.MF.
To retrieve data using the Python interface:
::
from awips.dataaccess import DataAccessLayer
req = DataAccessLayer.newDataRequest()
req.setDatatype("grid")
req.setParameters("T")
req.setLevels("2FHAG")
req.addIdentifier("info.datasetId", "GFS40")
times = DataAccessLayer.getAvailableTimes(req)
data = DataAccessLayer.getGridData(req, times)
To retrieve data using the Java interface:
::
IDataRequest req = DataAccessLayer.newDataRequest();
req.setDatatype("grid");
req.setParameters("T");
req.setLevels("2FHAG");
req.addIdentifier("info.datasetId", "GFS40");
DataTime[] times = DataAccessLayer.getAvailableTimes(req);
IData data = DataAccessLayer.getGridData(req, times);
**newDataRequest()**
- This creates a new data request. Most often this is a
DefaultDataRequest, but this allows for future implementations as well.
**setDatatype(String)**
- This is the data type being retrieved. This can be found as the value
that is registered when creating the new factory (See section above
**Registering the Factory with the Framework** [radar in that case]).
**setParameters(String...)**
- This can differ depending on data type. It is most often what
distinguishes one product from another.
**setLevels(String...)**
- This is often used to identify the same product at different
angles, heights, levels, etc.
**addIdentifier(String, String)**
- This differs based on data type, but is often used for more
fine-tuned querying.
Both methods return a similar set of data and can be manipulated by
their respective languages. See DataAccessLayer.py and
DataAccessLayer.java for more methods that can be called to retrieve
data and different parts of the data. Because each data type has
different parameters, levels, and identifiers, it is best to see the
actual data type for the available options. If it is undocumented, then
the best way to identify what parameters are to be used is to reference
the code.
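Short of reading the factory code, the discovery methods used throughout
this guide can also probe an unfamiliar data type interactively. A
minimal sketch, assuming a reachable EDEX server (the host and model
name below are illustrative):

::

    from awips.dataaccess import DataAccessLayer

    # Illustrative host; point this at your own EDEX server.
    DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")

    request = DataAccessLayer.newDataRequest("grid")
    request.setLocationNames("RAP13")

    # Probe what the registered factory can serve before requesting data.
    print(DataAccessLayer.getAvailableParameters(request))
    print(DataAccessLayer.getAvailableLevels(request))
    print(DataAccessLayer.getAvailableTimes(request))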
Development Background
----------------------
In support of Hazard Services Raytheon Technical Services is building a
generic data access framework that can be called via JAVA or Python. The
data access framework code can be found within the AWIPS Baseline in
::
com.raytheon.uf.common.dataaccess
As of 2016, plugins have been written for grid, radar, satellite, Hydro
(SHEF), point data (METAR, SYNOP, Profiler, ACARS, AIREP, PIREP), maps
data, and other data types. The Factories for each can be found in the
following packages (you may need to look at the development baseline to
see these):
::
com.raytheon.uf.common.dataplugin.grid.dataaccess
com.raytheon.uf.common.dataplugin.radar.dataaccess
com.raytheon.uf.common.dataplugin.satellite.dataaccess
com.raytheon.uf.common.dataplugin.binlightning.dataaccess
com.raytheon.uf.common.dataplugin.sfc.dataaccess
com.raytheon.uf.common.dataplugin.sfcobs.dataaccess
com.raytheon.uf.common.dataplugin.acars.dataaccess
com.raytheon.uf.common.dataplugin.ffmp.dataaccess
com.raytheon.uf.common.dataplugin.bufrua.dataaccess
com.raytheon.uf.common.dataplugin.profiler.dataaccess
com.raytheon.uf.common.dataplugin.moddelsounding.dataaccess
com.raytheon.uf.common.dataplugin.ldadmesonet.dataaccess
com.raytheon.uf.common.dataplugin.gfe.dataaccess
com.raytheon.uf.common.hydro.dataaccess
com.raytheon.uf.common.pointdata.dataaccess
com.raytheon.uf.common.dataplugin.maps.dataaccess
Additional data types may be added in the future. To determine what
datatypes are supported, display the "type hierarchy" associated with the
classes
**AbstractGridDataPluginFactory**,
**AbstractGeometryDatabaseFactory**, and
**AbstractGeometryTimeAgnosticDatabaseFactory**.
The following content was taken from the design review document, which is
attached, and modified slightly.
Design/Implementation
---------------------
The Data Access Framework is designed to provide a consistent interface
for requesting and using geospatial data within CAVE or EDEX. Examples
of geospatial data are grids, satellite, radar, metars, maps, river gage
heights, FFMP basin data, airmets, etc. To allow for convenient use of
geospatial data, the framework will support two types of requests: grids
and geometries (points, polygons, etc). The framework will also hide
implementation details of specific data types from users, making it
easier to use data without worrying about how the data objects are
structured or retrieved.
A suggested mapping of some current data types to one of the two
supported data requests is listed below. This list is not definitive and
can be expanded. If a developer can dream up an interpretation of the
data in the other supported request type, that support can be added.
Grids
- Grib
- Satellite
- Radar
- GFE
Geometries
- Map (states, counties, zones, etc)
- Hydro DB (IHFS)
- Obs (metar)
- FFMP
- Hazard
- Warning
- CCFP
- Airmet
The framework is designed around the concept of each data type plugin
contributing the necessary code for the framework to support its data.
For example, the satellite plugin provides a factory class for
interacting with the framework and registers itself as being compatible
with the Data Access Framework. This concept is similar to how EDEX in
AWIPS expects a plugin developer to provide a decoder class and
record class and register them, but then automatically manages the rest
of the ingest process including routing, storing, and alerting on new
data. This style of plugin architecture effectively enables the
framework to expand its capabilities to more data types without having
to alter the framework code itself. This will enable software developers
to incrementally add support for more data types as time allows, and
allow the framework to expand to new data types as they become
available.
The Data Access Framework will not break any existing functionality or
APIs, and there are no plans to retrofit existing code to use the new
API at this time. Ideally code will be retrofitted in the future to
improve ease of maintainability. The plugin-specific code that hooks into
the framework will make use of existing APIs such as **IDataStore** and
**IServerRequest** to complete the requests.
The Data Access Framework can be understood as three parts:
- How users of the framework retrieve and use the data
- How plugin developers contribute support for new data types
- How the framework works when it receives a request
How users of the framework retrieve and use the data
----------------------------------------------------
When a user of the framework wishes to request data, they must
instantiate a request object and set some of the values on that request.
Two request interfaces will be supported, for detailed methods see
section "Detailed Code" below.
**IDataRequest**
**IGridRequest** extends **IDataRequest**
**IGeometryRequest** extends **IDataRequest**
For the request interfaces, default implementations of
**DefaultGridRequest** and **DefaultGeometryRequest** will be provided
to handle most cases. However, the use of interfaces allows for custom
special cases in the future. If necessary, the developer of a plugin can
write their own custom request implementation to handle a special case.
After the request object has been prepared, the user will pass it to the
Data Access Layer to receive a data object in return. See the "Detailed
Code" section below for detailed methods of the Data Access Layer. The
Data Access Layer will return one of two data interfaces.
**IData**
**IGridData** extends **IData**
**IGeometryData** extends **IData**
For the data interfaces, the use of interfaces effectively hides the
implementation details of specific data types from the user of the
framework. For example, the user receives an **IGridData** and knows the
data time, grid geometry, parameter, and level, but does not know that
the data is actually a **GFEGridData** vs **D2DGridData** vs
**SatelliteGridData**. This enables users of the framework to write
generic code that can support multiple data types.
For python users of the framework, the interfaces will be very similar
with a few key distinctions. Geometries will be represented by python
geometries from the open source Shapely project. For grids, the python
**IGridData** will have a method for requesting the raw data as a numpy
array, and the Data Access Layer will have methods for requesting the
latitude coordinates and the longitude coordinates of grids as numpy
arrays. The python requests and data objects will be pure python and not
JEP PyJObjects that wrap Java objects. A future goal of the Data Access
Framework is to provide support to python local apps and therefore
enable requests of data outside of CAVE and EDEX to go through the same
familiar interfaces. This goal is out of scope for this project but by
making the request and returned data objects pure python it will not be
a huge undertaking to add this support in the future.
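As a concrete illustration of the pure-python data objects described
above, the sketch below (reusing the grid example from the Available
Data Types page; the model, parameter, and level are illustrative)
retrieves the raw grid as a numpy array along with its coordinate
arrays:

::

    from awips.dataaccess import DataAccessLayer

    request = DataAccessLayer.newDataRequest("grid")
    request.setLocationNames("RAP13")
    request.setParameters("T")
    request.setLevels("2.0FHAG")

    times = DataAccessLayer.getAvailableTimes(request)
    response = DataAccessLayer.getGridData(request, [times[-1]])

    for grid in response:
        data = grid.getRawData()             # numpy array in the data's native type
        lons, lats = grid.getLatLonCoords()  # numpy coordinate arrays
        print(grid.getParameter(), data.shape)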
How plugin developers contribute support for new datatypes
----------------------------------------------------------
When a developer wishes to add support for another data type to the
framework, they must implement one or both of the factory interfaces
within a common plugin. Two factory interfaces will be supported; for
detailed methods, see below.
**IDataFactory**
**IGridFactory** extends **IDataFactory**
**IGeometryFactory** extends **IDataFactory**
For some data types, it may be desired to add support for both types of
requests. For example, the developer of grid data may want to provide
support for both grid requests and geometry requests. In this case the
developer would write two separate classes where one implements
**IGridFactory** and the other implements **IGeometryFactory**.
Furthermore, factories could be stacked on top of one another by having
factory implementations call into the Data Access Layer.
For example, a custom factory keyed to "derived" could be written for
derived parameters, and the implementation of that factory may then call
into the Data Access Layer to retrieve “grid” data. In this example the
raw data would be retrieved through the **GridDataFactory** while the
derived factory then applies the calculations before returning the data.
Implementations do not need to support all methods on the interfaces or
all values on the request objects. For example, a developer writing the
**MapGeometryFactory** does not need to support **getAvailableTimes()**
because map data such as US counties is time agnostic. In this case the
method should throw **UnsupportedOperationException** and the javadoc
will indicate this.
Another example: the developer writing **ObsGeometryFactory**
can ignore the Level field of the **IDataRequest**, as there are no
different levels of metar data; it is all at the surface. It is up to
the factory writer to determine which methods and fields to support and
which to ignore, but the factory writer should always code the factory
with the user requesting data in mind. If a user of the framework could
reasonably expect certain behavior from the framework based on the
request, the factory writer should implement support for that behavior.
Abstract factories will be provided and can be extended to reduce the
amount of code a factory developer has to write to complete some common
actions that will be used by multiple factories. The factory should be
capable of working within either CAVE or EDEX, therefore all of its
server specific actions (e.g. database queries) should go through the
Request/Handler API by using **IServerRequests**. CAVE can then send the
**IServerRequests** to EDEX with **ThriftClient** while EDEX can use the
**ServerRequestRouter** to process the **IServerRequests**, making the
code compatible regardless of which JVM it is running inside.
Once the factory code is written, it must be registered with the
framework as an available factory. This will be done through spring xml
in a common plugin, with the xml file inside the res/spring folder of
the plugin. Registering the factory will identify the datatype name that
must match what users would use as the datatype on the **IDataRequest**,
e.g. the word "satellite". Registering the factory also indicates to the
framework what request types are supported, i.e. grid vs geometry or
both.
An example of the spring xml for a satellite factory is provided below:
::
<bean id="satelliteFactory"
class="com.raytheon.uf.common.dataplugin.satellite.SatelliteFactory" />
<bean id="satelliteFactoryRegistered" factory-bean="dataFactoryRegistry" factory-method="register">
<constructor-arg value="satellite" />
<constructor-arg value="com.raytheon.uf.common.dataaccess.grid.IGridRequest" />
<constructor-arg value="satelliteFactory" />
</bean>
How the framework works when it receives a request
--------------------------------------------------
**IDataRequest** requires a datatype to be set on every request. The
framework will have a registry of existing factories for each data type
(grid and geometry). When the Data Access Layer methods are called, it
will first lookup in the registry for the factory that corresponds to
the datatype on the **IDataRequest**. If no corresponding factory is
found, it will throw an exception with a useful error message that
indicates there is no current support for that datatype request. If a
factory is found, it will delegate the processing of the request to the
factory. The factory will receive the request and process it, returning
the result back to the Data Access Layer which then returns it to the
caller.
By going through the Data Access Layer, the user is able to retrieve the
data and use it without understanding which factory was used, how the
factory retrieved the data, or what implementation of data was returned.
This effectively frees the framework and users of the framework from any
dependencies on any particular data types. Since these dependencies are
avoided, the specific **IDataFactory** and **IData** implementations can
be altered in the future if necessary and the code making use of the
framework will not need to be changed as long as the interfaces continue
to be met.
Essentially, the Data Access Framework is a service that provides data
in a consistent way, with the service capabilities being expanded by
plugin developers who write support for more data types. Note that the
framework itself is useless without plugins contributing and registering
**IDataFactories**. Once the framework is coded, developers will need to
be tasked to add the factories necessary to support the needed data
types.
Request interfaces
------------------
Requests and returned data interfaces will exist in both Java and
Python. The Java interfaces are listed below and the Python interfaces
will match the Java interfaces except where noted. Factories will only
be written in Java.
**IDataRequest**
- **void setDatatype(String datatype)** - the datatype name, which is
also the key that determines which factory will be used. Frequently the
pluginName, such as radar, satellite, gfe, ffmp, etc.
- **void addIdentifier(String key, Object value)** - an identifier the
factory can use to determine which data to return, e.g. for grib data
key "modelName" and value “GFS40”
- **void setParameters(String... params)**
- **void setLevels(Level... levels)**
- **String getDatatype()**
- **Map getIdentifiers()**
- **String[] getParameters()**
- **Level[] getLevels()**
- Python Differences
- **Levels** will be represented as **Strings**
**IGridRequest extends IDataRequest**
- **void setStorageRequest(Request request)** - a datastorage request
that allows for slab, line, and point requests for faster performance
and less data retrieval
- **Request getStorageRequest()**
- Python Differences
- No support for storage requests
**IGeometryRequest extends IDataRequest**
- **void setEnvelope(Envelope env)** - a bounding box envelope to limit
the data that is searched through and returned. Not all factories may
support this.
- **setLocationNames(String... locationNames)** - a convenience of
requesting data by names such as ICAOs, airports, stationIDs, etc
- **Envelope getEnvelope()**
- **String[] getLocationNames()**
- Python Differences
- Envelope methods will use a **shapely.geometry.Polygon** instead of
**Envelopes** (shapely has no concept of envelopes and considers them
as rectangular polygons)
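A minimal sketch of an **IGeometryRequest** from python, using a
rectangular **shapely** Polygon in place of an Envelope (the datatype,
parameter, and bounding box below are illustrative):

::

    from shapely.geometry import Polygon
    from awips.dataaccess import DataAccessLayer

    request = DataAccessLayer.newDataRequest("warning")
    request.setParameters("phensig")

    # shapely has no Envelope type, so a rectangular Polygon serves as the
    # bounding box that limits what is searched through and returned.
    bbox = Polygon([(-105.5, 39.5), (-104.5, 39.5),
                    (-104.5, 40.5), (-105.5, 40.5)])
    request.setEnvelope(bbox)

    times = DataAccessLayer.getAvailableTimes(request)
    response = DataAccessLayer.getGeometryData(request, times[-5:])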
Data Interfaces
~~~~~~~~~~~~~~~
**IData**
- **Object getAttribute(String key)** - **getAttribute** provides a way
to get at attributes of the data that the interface does not provide,
allowing the user to get more info about the data without adding
dependencies on the specific data type plugin
- **DataTime getDataTime()** - some data may return null (e.g. maps)
- **Level getLevel()** - some data may return null
- Python Differences
- **Levels** will be represented by **Strings**
**IGridData extends IData**
- **String getParameter()**
- **GridGeometry2D getGridGeometry()**
- **Unit getUnit()** - some data may return null
- **DataDestination populateData(DataDestination destination)** - How
the user gets the raw data by passing in a **DataDestination** such
as **FloatArrayWrapper** or **ByteBufferWrapper**. This allows the
user to specify the way the raw data of the grid should be structured
in memory.
- **DataDestination populateData(DataDestination destination, Unit
unit)** - Same as the above method but also attempts to convert the
raw data to the specified unit when populating the
**DataDestination**.
- Python Differences
- **Units** will be represented by **Strings**
- **populateData()** methods will not exist, instead there will be
a **getRawData()** method that returns a numpy array in the native
type of the data
**IGeometryData extends IData**
- **Geometry getGeometry()**
- **Set getParameters()** - Gets the list of parameters included in
this data
- **String getString(String param)** - Gets the value of the parameter
as a String
- **Number getNumber(String param)** - Gets the value of the parameter
as a Number
- **Unit getUnit(String param)** - Gets the unit of the parameter,
may be null
- **Type getType(String param)** - Returns an enum of the raw type of
the parameter, such as Float, Int, or String
- **String getLocationName()** - Returns the location name of the piece
of data, typically to correlate if the request was made with
locationNames. May be null.
- Python Differences
- **Geometry** will be **shapely.geometry.Geometry**
- **getNumber()** will return the python native number of the data
- **Units** will be represented by **Strings**
- **getType()** will return the python type object
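And a matching sketch of the pure-python **IGeometryData** accessors
(again illustrative, reusing the warning request from the Available Data
Types page):

::

    from awips.dataaccess import DataAccessLayer

    request = DataAccessLayer.newDataRequest("warning")
    request.setParameters("phensig")
    times = DataAccessLayer.getAvailableTimes(request)

    for ob in DataAccessLayer.getGeometryData(request, times[-5:]):
        geom = ob.getGeometry()      # shapely.geometry.Geometry
        site = ob.getLocationName()  # may be None for some data types
        for param in ob.getParameters():
            # getString() renders any parameter; use getNumber() for numerics
            print(site, param, ob.getString(param))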
**DataAccessLayer** (in implementation, these methods delegate
processing to factories)
- **DataTime[] getAvailableTimes(IDataRequest request)**
- **DataTime[] getAvailableTimes(IDataRequest request, BinOffset
binOffset)**
- **IData[] getData(IDataRequest request, DataTime... times)**
- **IData[] getData(IDataRequest request, TimeRange timeRange)**
- **GridGeometry2D getGridGeometry(IGridRequest request)**
- **String[] getAvailableLocationNames(IGeometryRequest request)**
- Python Differences
- No support for **BinOffset**
- **getGridGeometry(IGridRequest)** will be replaced by
**getLatCoords(IGridRequest)** and **getLonCoords(IGridRequest)**
that will return numpy arrays of the lat or lon of every grid
cell
Factory Interfaces (Java only)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- **IDataFactory**
- **DataTime[] getAvailableTimes(R request)** - queries the
database and returns the times that match the request. Some factories
may not support this (e.g. maps).
- **DataTime[] getAvailableTimes(R request, BinOffset binOffset)** -
queries the database with a bin offset and returns the times that
match the request. Some factories may not support this.
- **D[] getData(R request, DataTime... times)** - Gets the data that
matches the request at the specified times.
- **D[] getData(R request, TimeRange timeRange)** - Gets the data that
matches the request and is within the time range.
**IGridDataFactory extends IDataFactory**
- **GridGeometry2D** **getGeometry(IGridRequest request)** - Returns
the grid geometry of the data that matches the request BEFORE making
the request. Useful for then making slab or line requests for subsets
of the data. Does not support moving grids, but moving grids don't
make subset requests either.
**IGeometryDataFactory extends IDataFactory**
- **getAvailableLocationNames(IGeometryRequest request)** - Convenience
method to retrieve available location names that match a request. Not
all factories may support this.

View file

@ -1,11 +0,0 @@
.. _examples-index:
######################
Data Plotting Examples
######################
.. toctree::
:maxdepth: 1
:glob:
generated/*

File diff suppressed because it is too large

View file

@ -1,72 +0,0 @@
==================================
Python AWIPS Data Access Framework
==================================
The python-awips package provides a data access framework for requesting meteorological and geographic datasets from an `EDEX <http://unidata.github.io/awips2/#edex>`_ server.
`AWIPS <http://unidata.github.io/awips2>`_ is a weather display and analysis package developed by the National Weather Service for operational forecasting. UCAR's `Unidata Program Center <http://www.unidata.ucar.edu/software/awips2/>`_ supports a non-operational open-source release of the AWIPS software (`EDEX <http://unidata.github.io/awips2/#edex>`_, `CAVE <http://unidata.github.io/awips2/#cave>`_, and `python-awips <https://github.com/Unidata/python-awips>`_).
.. _Jupyter Notebook: http://nbviewer.jupyter.org/github/Unidata/python-awips/tree/master/examples/notebooks
Pre-requisite Software
----------------------
In order to effectively use python-awips, you'll need to have these installed already:
- python3
- conda
- git *(for the source code and examples installation)*
Package-Only Install
--------------------
If you already work with Python, you might just be interested in how to install the python-awips package.
The package can be installed with either of the two well known package managers: **pip** and **conda**.
Pip Install
~~~~~~~~~~~
::
pip install python-awips
Conda Install
~~~~~~~~~~~~~
::
conda install -c conda-forge python-awips
Source Code with Examples Install
---------------------------------
Below are instructions on how to install the source code of python-awips, with all included example notebooks. This will create a new conda environment called ``python3-awips`` and start up a browser for the jupyter notebook examples.
::
git clone https://github.com/Unidata/python-awips.git
cd python-awips
conda env create -f environment.yml
conda activate python3-awips
python setup.py install --force
jupyter notebook examples
Questions -- Contact Us!
------------------------
Please feel free to reach out to us at our support email, **support-awips@unidata.ucar.edu**.
.. toctree::
:maxdepth: 2
:hidden:
api/index
datatypes
examples/index
dev
gridparms
about

View file

@ -1,77 +0,0 @@
#
# Generation of RST from notebooks
#
import glob
import os
import os.path
import warnings
warnings.simplefilter('ignore')
from nbconvert.exporters import rst
def setup(app):
    setup.app = app
    setup.config = app.config
    setup.confdir = app.confdir
    app.connect('builder-inited', generate_rst)
    return dict(
        version='0.1',
        parallel_read_safe=True,
        parallel_write_safe=True
    )
notebook_source_dir = '../../examples/notebooks'
generated_source_dir = 'examples/generated'
def nb_to_rst(nb_path):
"""convert notebook to restructured text"""
exporter = rst.RSTExporter()
out, resources = exporter.from_file(open(nb_path))
basename = os.path.splitext(os.path.basename(nb_path))[0]
imgdir = basename + '_files'
img_prefix = os.path.join(imgdir, basename + '_')
resources['metadata']['basename'] = basename
resources['metadata']['name'] = basename.replace('_', ' ')
resources['metadata']['imgdir'] = imgdir
base_url = ('http://nbviewer.ipython.org/github/Unidata/python-awips/blob/master/'
'examples/notebooks/')
out_lines = ['`Notebook <%s>`_' % (base_url + os.path.basename(nb_path))]
for line in out.split('\n'):
if line.startswith('.. image:: '):
line = line.replace('output_', img_prefix)
out_lines.append(line)
out = '\n'.join(out_lines)
return out, resources
def write_nb(dest, output, resources):
    if not os.path.exists(dest):
        os.makedirs(dest)
    rst_file = os.path.join(dest,
                            resources['metadata']['basename'] + resources['output_extension'])
    name = resources['metadata']['name']
    with open(rst_file, 'w') as rst:
        header = '=' * len(name)
        rst.write(header + '\n')
        rst.write(name + '\n')
        rst.write(header + '\n')
        rst.write(output)
    imgdir = os.path.join(dest, resources['metadata']['imgdir'])
    if not os.path.exists(imgdir):
        os.makedirs(imgdir)
    basename = resources['metadata']['basename']
    for filename in resources['outputs']:
        img_file = os.path.join(imgdir, filename.replace('output_', basename + '_'))
        with open(img_file, 'wb') as img:
            img.write(resources['outputs'][filename])
def generate_rst(app):
for fname in glob.glob(os.path.join(app.srcdir, notebook_source_dir, '*.ipynb')):
write_nb(os.path.join(app.srcdir, generated_source_dir), *nb_to_rst(fname))
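
For reference, a Sphinx extension like the one above is enabled by listing its module in the project's ``conf.py``; the module name below is an assumption for illustration, not the name used in this repository.

```python
# docs/source/conf.py (sketch; 'notebook_gen_sphinxext' is a
# hypothetical module name for the extension above)
extensions = [
    'sphinx.ext.autodoc',
    'notebook_gen_sphinxext',  # its setup() hooks builder-inited to run generate_rst()
]
```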

View file

@ -1,26 +0,0 @@
#
# Adapter for com.raytheon.uf.common.dataplugin.gfe.GridDataHistory
#
# TODO: REWRITE THIS ADAPTER when serialization/deserialization of this
# class has been finalized.
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 03/29/11 dgilling Initial Creation.
#
from dynamicserialize.dstypes.com.raytheon.uf.common.dataplugin.gfe import GridDataHistory
ClassAdapter = 'com.raytheon.uf.common.dataplugin.gfe.GridDataHistory'
def serialize(context, history):
context.writeString(history.getCodedString())
def deserialize(context):
result = GridDataHistory(context.readString())
return result
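
The adapter round-trips a GridDataHistory through its coded string form. A minimal sketch of the equivalent behavior outside a serialization context (the coded string value below is made up):

```python
# Sketch only: what serialize()/deserialize() accomplish. The adapter
# writes getCodedString() and rebuilds the object from that string.
from dynamicserialize.dstypes.com.raytheon.uf.common.dataplugin.gfe import GridDataHistory

original = GridDataHistory("GFE:SCRATCH:00000000_0000")  # hypothetical coded string
restored = GridDataHistory(original.getCodedString())
```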

View file

@ -21,9 +21,7 @@
 # File auto-generated by PythonFileGenerator
 __all__ = [
-    'com',
-    'gov',
-    'java'
+    'com'
 ]

View file

@ -21,7 +21,7 @@
 # File auto-generated by PythonFileGenerator
 __all__ = [
-    'raytheon',
+    'raytheon'
 ]

View file

@ -21,22 +21,7 @@
 # File auto-generated by PythonFileGenerator
 __all__ = [
-    'activetable',
-    'alertviz',
-    'auth',
-    'dataaccess',
-    'dataplugin',
-    'dataquery',
-    'datastorage',
-    'localization',
-    'management',
-    'message',
-    'mpe',
-    'pointdata',
-    'pypies',
-    'serialization',
-    'site',
-    'time'
+    'dataquery'
 ]

View file

@ -21,6 +21,7 @@
 # File auto-generated by PythonFileGenerator
 __all__ = [
+    'AuthenticationData',
     'User',
     'UserId'
 ]

View file

@ -31,7 +31,7 @@
 #
-from awips.dataaccess import IDataRequest
+from ufpy.dataaccess import IDataRequest
 from dynamicserialize.dstypes.org.locationtech.jts.geom import Envelope
 from dynamicserialize.dstypes.com.raytheon.uf.common.dataplugin.level import Level

View file

@ -31,7 +31,7 @@
 #
-from awips.dataaccess import INotificationFilter
+from ufpy.dataaccess import INotificationFilter
 import sys

 class DefaultNotificationFilter(INotificationFilter):

View file

@ -21,14 +21,8 @@
 # File auto-generated by PythonFileGenerator
 __all__ = [
-    'events',
-    'gfe',
-    'grid',
-    'level',
     'message',
-    'persist',
-    'radar',
-    'text'
+    'persist'
 ]

View file

@ -39,12 +39,12 @@ class GFERecord(PersistableDataObject):
         self.dataTime = None
         self.parmId = None
         if timeRange is not None:
-            if isinstance(timeRange, TimeRange):
+            if type(timeRange) is TimeRange:
                 self.dataTime = DataTime(refTime=timeRange.getStart(), validPeriod=timeRange)
             else:
                 raise TypeError("Invalid TimeRange object specified.")
         if parmId is not None:
-            if isinstance(parmId, ParmID.ParmID):
+            if type(parmId) is ParmID.ParmID:
                 self.parmId = parmId
                 self.parmName = parmId.getParmName()
                 self.parmLevel = parmId.getParmLevel()

View file

@ -36,9 +36,9 @@ class ParmID(object):
         if (parmIdentifier is not None) and (dbId is not None):
             self.parmName = parmIdentifier
-            if isinstance(dbId, DatabaseID):
+            if type(dbId) is DatabaseID:
                 self.dbId = dbId
-            elif isinstance(dbId, str):
+            elif type(dbId) is str:
                 self.dbId = DatabaseID(dbId)
             else:
                 raise TypeError("Invalid database ID specified.")

View file

@ -31,7 +31,7 @@
 # 06/29/15         4480          dgilling       Implement __hash__, __eq__,
 #                                               __str__ and rich comparison operators.
 # 02/17/22         8608          mapeters       Subclass PersistableDataObject
-# 08/31/23                       srcarter@ucar  From MJ - replace type with isinstance
 #
 #

 import numpy

@ -71,15 +71,17 @@ class Level(PersistableDataObject):
         return hashCode

     def __eq__(self, other):
-        if isinstance(self, type(other)):
-            return (self.masterLevel, self.levelonevalue, self.leveltwovalue) == \
-                (other.masterLevel, other.levelonevalue, other.leveltwovalue)
-        return False
+        if type(self) != type(other):
+            return False
+        else:
+            return (self.masterLevel, self.levelonevalue, self.leveltwovalue) == \
+                (other.masterLevel, other.levelonevalue, other.leveltwovalue)

     def __ne__(self, other):
         return not self.__eq__(other)

     def __lt__(self, other):
-        if not isinstance(self, type(other)):
+        if type(self) != type(other):
             return NotImplemented
         elif self.masterLevel.getName() != other.masterLevel.getName():
             return NotImplemented

@ -111,7 +113,7 @@ class Level(PersistableDataObject):
         return False

     def __le__(self, other):
-        if not isinstance(self, type(other)):
+        if type(self) != type(other):
             return NotImplemented
         elif self.masterLevel.getName() != other.masterLevel.getName():
             return NotImplemented

@ -119,7 +121,7 @@ class Level(PersistableDataObject):
         return self.__lt__(other) or self.__eq__(other)

     def __gt__(self, other):
-        if not isinstance(self, type(other)):
+        if type(self) != type(other):
             return NotImplemented
         elif self.masterLevel.getName() != other.masterLevel.getName():
             return NotImplemented

@ -151,7 +153,7 @@ class Level(PersistableDataObject):
         return False

     def __ge__(self, other):
-        if not isinstance(self, type(other)):
+        if type(self) != type(other):
             return NotImplemented
         elif self.masterLevel.getName() != other.masterLevel.getName():
             return NotImplemented

View file

@ -30,7 +30,7 @@
 # 06/29/15         4480          dgilling       Implement __hash__, __eq__
 #                                               and __str__.
 # 02/17/22         8608          mapeters       Subclass PersistableDataObject
-# 08/31/23                       srcarter@ucar  From MJ - replace type with isinstance
 #
 #

 from dynamicserialize.dstypes.com.raytheon.uf.common.dataplugin.persist import PersistableDataObject

@ -49,7 +49,7 @@ class MasterLevel(PersistableDataObject):
         return hash(self.name)

     def __eq__(self, other):
-        if not isinstance(self, type(other)):
+        if type(self) != type(other):
             return False
         else:
             return self.name == other.name

View file

@ -27,11 +27,11 @@
 # Jun 27, 2016     5725          tgurney        Add NOT IN
 # Jul 22, 2016     2416          tgurney        Add evaluate()
 # Jun 26, 2019     7888          tgurney        Python 3 fixes
-# Aug 31, 2023                   srcarter@ucar  Small formatting and logic changes
 #

 import re
-from dynamicserialize.dstypes.com.raytheon.uf.common.time import DataTime
+from ...time import DataTime

 class RequestConstraint(object):

@ -212,7 +212,7 @@ class RequestConstraint(object):
         return self._evalValue.match(value) is not None

     def _evalIsNull(self, value):
-        return value is None or value == 'null'
+        return value is None or 'null' == value

     # DAF-specific stuff begins here ##########################################

@ -228,11 +228,11 @@ class RequestConstraint(object):
     @staticmethod
     def _stringify(value):
-        if isinstance(value, (int, bool, float)):
+        if type(value) in {int, bool, float}:
             return str(value)
-        elif isinstance(value, str):
+        elif type(value) is str:
             return value
-        elif isinstance(value, bytes):
+        elif type(value) is bytes:
             return value.decode()
         else:
             # Collections are not allowed; they are handled separately.

@ -249,7 +249,7 @@ class RequestConstraint(object):
         except TypeError:
             raise TypeError("value for IN / NOT IN constraint must be an iterable")
         stringValue = ', '.join(cls._stringify(item) for item in iterator)
-        if not stringValue:
+        if len(stringValue) == 0:
             raise ValueError('cannot use IN / NOT IN with empty collection')
         obj = cls()
         obj.setConstraintType(constraintType)
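
Several of the hunks above revert `isinstance()` checks to exact `type()` comparisons. The practical difference is subclass handling, as this standalone sketch (not project code) shows:

```python
# Standalone illustration: isinstance() accepts subclasses,
# while an exact type() comparison does not.
class Base:
    pass

class Child(Base):
    pass

obj = Child()
print(isinstance(obj, Base))  # True  -- the isinstance() version matches subclasses
print(type(obj) is Base)      # False -- the type() version requires the exact class
```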

View file

@ -1,78 +0,0 @@
from six import with_metaclass
import abc
class AbstractDataRecord(with_metaclass(abc.ABCMeta, object)):
def __init__(self):
self.name = None
self.dimension = None
self.sizes = None
self.maxSizes = None
self.props = None
self.minIndex = None
self.group = None
self.dataAttributes = None
self.fillValue = None
self.maxChunkSize = None
def getName(self):
return self.name
def setName(self, name):
self.name = name
def getDimension(self):
return self.dimension
def setDimension(self, dimension):
self.dimension = dimension
def getSizes(self):
return self.sizes
def setSizes(self, sizes):
self.sizes = sizes
def getMaxSizes(self):
return self.maxSizes
def setMaxSizes(self, maxSizes):
self.maxSizes = maxSizes
def getProps(self):
return self.props
def setProps(self, props):
self.props = props
def getMinIndex(self):
return self.minIndex
def setMinIndex(self, minIndex):
self.minIndex = minIndex
def getGroup(self):
return self.group
def setGroup(self, group):
self.group = group
def getDataAttributes(self):
return self.dataAttributes
def setDataAttributes(self, dataAttributes):
self.dataAttributes = dataAttributes
def getFillValue(self):
return self.fillValue
def setFillValue(self, fillValue):
self.fillValue = fillValue
def getMaxChunkSize(self):
return self.maxChunkSize
def setMaxChunkSize(self, maxChunkSize):
self.maxChunkSize = maxChunkSize

View file

@ -39,7 +39,6 @@
 #                                               plus misc cleanup
 # 09/13/19         7888          tgurney        Python 3 division fixes
 # 11/18/19         7881          tgurney        Fix __hash__
-# 08/31/23                       srcarter@ucar  Small formatting fixes to match MJ's changes

 import calendar

@ -60,8 +59,12 @@ _TIME = r'(\d{2}:\d{2}:\d{2})'
 _MILLIS = '(?:\.(\d{1,3})(?:\d{1,4})?)?'
 REFTIME_PATTERN_STR = _DATE + '[ _]' + _TIME + _MILLIS
 FORECAST_PATTERN_STR = r'(?:[ _]\((\d+)(?::(\d{1,2}))?\))?'
-VALID_PERIOD_PATTERN_STR = r'(?:\[' + REFTIME_PATTERN_STR + '--' + REFTIME_PATTERN_STR + r'\])?'
-STR_PATTERN = re.compile(REFTIME_PATTERN_STR + FORECAST_PATTERN_STR + VALID_PERIOD_PATTERN_STR)
+VALID_PERIOD_PATTERN_STR = r'(?:\[' + REFTIME_PATTERN_STR + \
+    '--' + REFTIME_PATTERN_STR + r'\])?'
+STR_PATTERN = re.compile(
+    REFTIME_PATTERN_STR +
+    FORECAST_PATTERN_STR +
+    VALID_PERIOD_PATTERN_STR)

 class DataTime(object):

@ -85,14 +88,18 @@ class DataTime(object):
             self.fcstTime = 0
         self.refTime = refTime
         if validPeriod is not None and not isinstance(validPeriod, TimeRange):
-            raise ValueError("Invalid validPeriod object specified for DataTime.")
+            raise ValueError(
+                "Invalid validPeriod object specified for DataTime.")
         self.validPeriod = validPeriod
-        self.utilityFlags = EnumSet('com.raytheon.uf.common.time.DataTime$FLAG')
+        self.utilityFlags = EnumSet(
+            'com.raytheon.uf.common.time.DataTime$FLAG')
         self.levelValue = numpy.float64(-1.0)
         if self.refTime is not None:
             if isinstance(self.refTime, datetime.datetime):
-                self.refTime = int(calendar.timegm(self.refTime.utctimetuple()) * 1000)
+                self.refTime = int(
+                    calendar.timegm(
+                        self.refTime.utctimetuple()) * 1000)
             elif isinstance(self.refTime, time.struct_time):
                 self.refTime = int(calendar.timegm(self.refTime) * 1000)
             elif hasattr(self.refTime, 'getTime'):

@ -117,7 +124,8 @@ class DataTime(object):
             fcstTimeMin = groups[4]
             periodStart = groups[5], groups[6], (groups[7] or 0)
             periodEnd = groups[8], groups[9], (groups[10] or 0)
-            self.refTime = self._getTimeAsEpochMillis(rDate, rTime, rMillis)
+            self.refTime = self._getTimeAsEpochMillis(
+                rDate, rTime, rMillis)
             if fcstTimeHr is not None:
                 self.fcstTime = int(fcstTimeHr) * 3600

@ -126,7 +134,8 @@ class DataTime(object):
             if periodStart[0] is not None:
                 self.validPeriod = TimeRange()
-                periodStartTime = self._getTimeAsEpochMillis(*periodStart)
+                periodStartTime = self._getTimeAsEpochMillis(
+                    *periodStart)
                 self.validPeriod.setStart(periodStartTime // 1000)
                 periodEndTime = self._getTimeAsEpochMillis(*periodEnd)
                 self.validPeriod.setEnd(periodEndTime // 1000)

View file

@ -37,10 +37,10 @@
 #   values to your EnumSet.
 ##

-import collections.abc
+import collections

-class EnumSet(collections.abc.MutableSet):
+class EnumSet(collections.MutableSet):

     def __init__(self, enumClassName, iterable=[]):
         self.__enumClassName = enumClassName
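
Note that the `collections.MutableSet` alias on the incoming side stopped working in Python 3.10, when the abstract base class aliases were removed from `collections`. A compatibility import sketch (illustration only, not part of this commit):

```python
# Compatibility sketch: MutableSet has lived in collections.abc since
# Python 3.3; the bare collections alias was removed in Python 3.10.
try:
    from collections.abc import MutableSet
except ImportError:  # very old interpreters only
    from collections import MutableSet
```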

View file

@ -1,28 +0,0 @@
name: python3-awips
channels:
- https://conda.anaconda.org/conda-forge
dependencies:
- python=3
- numpy
- nomkl
- matplotlib
- cartopy
- jupyter
- netcdf4
- owslib
- metpy
- pint
- h5py
- sphinx>=1.3
- sphinx_rtd_theme
- nbconvert>=4.1
- siphon
- xarray
- ffmpeg
- pytest
- shapely
- six
- pip
- jupyter_contrib_nbextensions
- python-awips

View file

@ -1,322 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"top\"></a>\n",
"<div style=\"width:1000 px\">\n",
"\n",
"<div style=\"float:right; width:98 px; height:98px;\">\n",
"<img src=\"https://docs.unidata.ucar.edu/images/logos/unidata_logo_vertical_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n",
"</div>\n",
"\n",
"# YOUR NOTEBOOK TITLE\n",
"**Python-AWIPS Tutorial Notebook**\n",
"\n",
"<div style=\"clear:both\"></div>\n",
"</div>\n",
"\n",
"---\n",
"\n",
"Be sure to commit a small example output image from your code and link to it HERE (in the source), or link to an external image (logo, etc.) via URL. Remove this line after! Provide a brief `alt` text as well to benefit those with image display issues or potentially making use of screen readers.\n",
"<div style=\"float:right; width:250 px\"><img src=\"../images/[image name]_preview.png\" alt=\"[image text]\" style=\"height: 300px;\"></div>\n",
"\n",
"\n",
"# Objectives\n",
"\n",
"* Use this section to \"tag\" the key interactions with your notebook, restate in plain language your notebooks overall goal\n",
"* Supplement with brief description of the skills/tools that will be highlighted or smaller objectives that will be completed\n",
"* Generate numbers for function inputs with [NumPy](https://numpy.org/doc/stable/)\n",
"* Calculate $y = f(x) = sin(x)$ with [NumPy](https://numpy.org/doc/stable/) (don't forget math text!)\n",
"* Demonstrate visualizing this with [Matplotlib](https://matplotlib.org/)\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {
"toc": true
},
"source": [
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n",
"<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Imports\" data-toc-modified-id=\"Imports-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Imports</a></span></li><li><span><a href=\"#Your-first-objective\" data-toc-modified-id=\"Your-first-objective-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Your first objective</a></span></li><li><span><a href=\"#Objective-number-two\" data-toc-modified-id=\"Objective-number-two-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Objective number two</a></span><ul class=\"toc-item\"><li><span><a href=\"#Highlight-boxes-can-be-useful!\" data-toc-modified-id=\"Highlight-boxes-can-be-useful!-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Highlight boxes can be useful!</a></span></li><li><span><a href=\"#Don't-forget,-in-markdown-cells\" data-toc-modified-id=\"Don't-forget,-in-markdown-cells-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Don't forget, in markdown cells</a></span><ul class=\"toc-item\"><li><span><a href=\"#you-have-access-to\" data-toc-modified-id=\"you-have-access-to-3.2.1\"><span class=\"toc-item-num\">3.2.1&nbsp;&nbsp;</span>you have access to</a></span><ul class=\"toc-item\"><li><span><a href=\"#further-subsection-headings\" data-toc-modified-id=\"further-subsection-headings-3.2.1.1\"><span class=\"toc-item-num\">3.2.1.1&nbsp;&nbsp;</span>further subsection headings</a></span></li></ul></li></ul></li></ul></li><li><span><a href=\"#Objective,-the-third\" data-toc-modified-id=\"Objective,-the-third-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Objective, the third</a></span></li><li><span><a href=\"#See-Also\" data-toc-modified-id=\"See-Also-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>See Also</a></span><ul class=\"toc-item\"><li><span><a href=\"#Related-Notebooks\" data-toc-modified-id=\"Related-Notebooks-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Related Notebooks</a></span></li><li><span><a href=\"#Additional-Documention\" data-toc-modified-id=\"Additional-Documention-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Additional Documention</a></span></li></ul></li></ul></div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports\n",
"We will begin by importing the important packages to be used throughout! Instructors, generally keep your imports to before your objectives unless there is an important takeaway from what you're importing or the way you're doing it. Give a brief description if necessary in more introductory notebooks."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"#top\">Top</a>\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Your first objective\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Begin your learning narrative here with your first individual learning objective. Be sure to use markdown cells to guide your notebook, and explain your larger and more complicated objectives in plain language as necessary. Keep code comments to a minimum, but include them if they clearly help explain a quick decision or syntax quirk. Of course, if you find yourself needing more and more and more markdown as you go, you may need to trim down or split up your notebook!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**NOTICE** copy this cell (after removing this text from source) to include at the end of any new sections you illustrate for a clear visual break and offer a path to the start of the notebook. You can use the markdown `---` separator to clearly separate your sections.\n",
"\n",
"<a href=\"#top\">Top</a>\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Objective number two\n",
"\n",
"Continue into your next objective, again defined by what unique piece of information you want your student to take away, or what pieces of code you want them to be able to take advantage of."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = np.linspace(0, 2*np.pi, 5)\n",
"x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Highlight boxes can be useful!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert-info\">\n",
"<b>Tip:</b> My Tip\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert-warning\">\n",
"<b>Tip:</b> My Warning\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert-danger\">\n",
"<b>Tip:</b> My Error\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert-success\">\n",
"<b>Tip:</b> My Success\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y = np.sin(x)\n",
"y"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Don't forget, in markdown cells\n",
"#### you have access to\n",
"##### further subsection headings\n",
"\n",
"to help break up the organization of your sections as necessary. Don't let them get too long, however!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"labels = [f'{val / np.pi} $\\pi$' for val in x]\n",
"\n",
"plt.plot(x, y)\n",
"plt.scatter(x, y)\n",
"plt.xticks(x, labels);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"#top\">Top</a>\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Objective, the third\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x_10 = np.linspace(0, 2*np.pi, 10)\n",
"y_10 = np.sin(x_10)\n",
"\n",
"x_100 = np.linspace(0, 2*np.pi, 100)\n",
"y_100 = np.sin(x_100)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.plot(x_10, y_10, label=\"x=10\")\n",
"plt.plot(x_100, y_100, label=\"x=100\")\n",
"plt.xticks(x, labels)\n",
"plt.legend();"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"and so on, and so forth! Don't forget to follow your narrative, and by the end feel free to sneak in some (not so scary!) previews or natural extensions to your notebook pointing the learner where they might go next. And of course, thank you!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"#top\">Top</a>\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## See Also\n",
"\n",
"Finally, reserve a small space at the end for any citations/references, links to external resources (e.g. other websites, courses, activities), or suggestions for potentially 2-3 of our other notebooks that may complement or make for a logical prequel/sequel to your notebook. We will have direction for this on the website as well, but keep this in mind here at the end!\n",
"\n",
"### Related Notebooks\n",
"\n",
"Other notebooks that have similar content or ideas\n",
"- [notebook 1](link)\n",
"- [notebook 2](link)\n",
"\n",
"### Additional Documention\n",
"\n",
"Add documention links and/or supporting scientific information"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"#top\">Top</a>\n",
"\n",
"---"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.5"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": true,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": true
}
},
"nbformat": 4,
"nbformat_minor": 4
}

11 binary files not shown (notebook preview images, 77 KiB to 420 KiB each, deleted in this commit).

View file

@ -1,82 +0,0 @@
```python
#!python
from awips.dataaccess import DataAccessLayer
from shapely.geometry import Polygon,Point
from datetime import datetime
#Initiate a new DataRequest
#Get all of the states from the database
print "Requesting all states from DAF"
t1=datetime.utcnow()
request = DataAccessLayer.newDataRequest()
request.setDatatype("maps")
request.addIdentifier("geomField","the_geom")
request.addIdentifier("table","mapdata.states")
request.setParameters("state","fips","name")
states = DataAccessLayer.getGeometryData(request, None)
t2=datetime.utcnow()
tdelta = t2-t1
print 'DAF query to get all states took %i.%i seconds' %(tdelta.seconds,tdelta.microseconds)
#Create a polygon object...for example this is a polygon around Oklahoma City
polygon = Polygon([(-97.82,35.63),(-97.21,35.63),(-97.21,35.15),(-97.82,35.15),(-97.82,35.63)])
#Now lets filter the states down to the one that contains the above polygon
#We will use the python built in filter method to accomplish this
#the expression below goes through each state object, gets it's geometry, and calls
#it's contains method. It in essence creates a new list, state_contain_polygon, with
#only the states where the contains method evaluates to true.
t1=datetime.utcnow()
state_contain_polygon = filter(lambda state: state.getGeometry().contains(polygon),states)
t2= datetime.utcnow()
tdelta = t2-t1
print '\nFilter state objects to one that contains polygon took %i.%i seconds' %(tdelta.seconds,tdelta.microseconds)
print state_contain_polygon
print "Polygon is in the state of",state_contain_polygon[0].getString('name')
#Lets also create a point object...this one is located in the state of Iowa
point = Point(-93.62,41.60)
#Now lets see what state our point is in
t1=datetime.utcnow()
state_contain_point = filter(lambda state: state.getGeometry().contains(point),states)
t2= datetime.utcnow()
tdelta = t2-t1
print '\nFilter state objects to one that contains point took %i.%i seconds ' %(tdelta.seconds,tdelta.microseconds)
print state_contain_point
print "Point is in the state of",state_contain_point[0].getString('name')
#One last example...this time for an intersection. Lets find all of the states this polygon intersects
#This polygon is the same as above just extended it further south to 33.15 degrees
polygon2 = Polygon([(-97.82,35.63),(-97.21,35.63),(-97.21,33.15),(-97.82,33.15),(-97.82,35.63)])
t1=datetime.utcnow()
state_intersect_polygon = filter(lambda state: state.getGeometry().intersects(polygon2),states)
t2= datetime.utcnow()
tdelta = t2-t1
print '\nFilter state objects to the ones that intersect polygon took %i.%i seconds ' %(tdelta.seconds,tdelta.microseconds)
print state_intersect_polygon
for state in state_intersect_polygon:
print "Polygon intersects the state of",state.getString('name')
```
```python
Requesting all states from DAF
DAF query to get all states took 21.915029 seconds
```
```python
Filter state objects to one that contains polygon took 0.382097 seconds
[<awips.dataaccess.PyGeometryData.PyGeometryData object at 0x2bebdd0>]
Polygon is in the state of Oklahoma
```
```python
Filter state objects to one that contains point took 0.2028 seconds
[<awips.dataaccess.PyGeometryData.PyGeometryData object at 0x2beb9d0>]
Point is in the state of Iowa
```
```python
Filter state objects to the ones that intersect polygon took 0.4032 seconds
[<awips.dataaccess.PyGeometryData.PyGeometryData object at 0x2beb610>, <awips.dataaccess.PyGeometryData.PyGeometryData object at 0x2bebdd0>]
Polygon intersects the state of Texas
Polygon intersects the state of Oklahoma
```

View file

@ -1,49 +0,0 @@
```python
#!/awips2/python/bin/python
from awips.dataaccess import DataAccessLayer
import numpy as np
request = DataAccessLayer.newDataRequest()
request.setDatatype("satellite")
request.setLocationNames("East CONUS")
request.setParameters("Imager 6.7-6.5 micron IR (WV)")
t = DataAccessLayer.getAvailableTimes(request)
print t[-1].getRefTime()
response = DataAccessLayer.getGridData(request, times=[t[-1]])
print response
data = response[0]
print 'Units are in', data.getUnit()
lon,lat = data.getLatLonCoords()
print 'Parameter we requested is',data.getParameter()
print data.getRawData()
```
```python
May 04 15 18:45:19 GMT
```
```python
[<awips.dataaccess.PyGridData.PyGridData object at 0x157d550>]
```
```python
Units are in None
```
```python
Parameter we requested is Imager 6.7-6.5 micron IR (WV)
```
```python
[[ 186. 185. 186. ..., 180. 181. 181.]
[ 186. 185. 186. ..., 180. 181. 181.]
[ 186. 186. 185. ..., 180. 181. 181.]
...,
[ 0. 0. 0. ..., 145. 145. 145.]
[ 0. 0. 0. ..., 145. 145. 145.]
[ 0. 0. 0. ..., 145. 145. 144.]]
```

View file

@ -1,69 +0,0 @@
```python
#!python
#!/awips2/python/bin/python
from awips.dataaccess import DataAccessLayer
#Initiate a new DataRequest
b = DataAccessLayer.newDataRequest()
#Set the datatype to maps so it knows what plugin to route the request too
b.setDatatype("maps")
#setParameters indicates the columns from table that we want returned along with our geometry
b.setParameters("state","fips")
#Add a couple of identifiers to indicate the geomField and table we are querying
b.addIdentifier("geomField","the_geom")
b.addIdentifier("table","mapdata.states")
#getAvailableLocationNames method will return a list of all available locations
#based off the table we have specified previously. LocationNames mean different
#things to different plugins beware...radar is icao, satellite is sector, etc
a = DataAccessLayer.getAvailableLocationNames(b)
print a
#Use setLocationNames to set the states we want data from
b.setLocationNames("Oklahoma","Texas","Kansas")
#Finally lets request some data. There are two types of data (Grid, Geometry) here we are
#requesting geometry data and therefore use the getGeometryData method. We pass it our DataRequest object
#that has all of our parameters and None for the DataTime object argument since maps are time agnostic
#This returns a list of awips.dataaccess.PyGeometryData.PyGeometryData objects.
c = DataAccessLayer.getGeometryData(b, None)
print c
#Now lets loop through our list of PyGeometryData objects of states and look at some data
for shape in c:
#Lets print the locationname for this object
print 'Location name is',shape.getLocationName()
#getGeometry returns a shapely geometry object for this state. Using shapely allows
#us to perform postgis type operations outside the database (contains,within,etc). If
#not familiar with shapely recommend you look at the documentation available online.
#This is a 3rd party python module so just Google search python shapely to find the docs
mpoly = shape.getGeometry()
#These next few items allow us to access the column data we requested when we set the
#parameters
print 'Parameters requested are',shape.getParameters()
print 'state column is',shape.getString('state')
print 'fips column is',shape.getString('fips')
```
```python
['Alabama', 'Alaska', 'American Samoa', 'Arizona', 'Arkansas', 'California', 'Colorado', 'Connecticut', 'Delaware', 'District of Columbia', 'Florida', 'Georgia', 'Guam', 'Hawaii', 'Idaho', 'Illinois', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana', 'Maine', 'Maryland', 'Massachusetts', 'Michigan', 'Minnesota', 'Mississippi', 'Missouri', 'Montana', 'Nebraska', 'Nevada', 'New Hampshire', 'New Jersey', 'New Mexico', 'New York', 'North Carolina', 'North Dakota', 'Ohio', 'Oklahoma', 'Oregon', 'Pennsylvania', 'Puerto Rico', 'Rhode Island', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah', 'Vermont', 'Virgin Islands', 'Virginia', 'Washington', 'West Virginia', 'Wisconsin', 'Wyoming']
```
```python
[<awips.dataaccess.PyGeometryData.PyGeometryData object at 0x1ec4410>, <awips.dataaccess.PyGeometryData.PyGeometryData object at 0x1ec4510>, <awips.dataaccess.PyGeometryData.PyGeometryData object at 0x1ec4550>]
```
```python
Location name is Texas
Parameters requested are ['state', 'fips']
state column is TX
fips column is 48
Location name is Kansas
Parameters requested are ['state', 'fips']
state column is KS
fips column is 20
Location name is Oklahoma
Parameters requested are ['state', 'fips']
state column is OK
fips column is 40
```
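
These archived examples are Python 2 (`print` statements, `filter()` returning a list). A minimal Python 3 equivalent of the point-in-state query, assuming a configured EDEX host, might look like this sketch:

```python
# Python 3 sketch of the archived maps example (illustration only).
# In Python 3, filter() returns an iterator, so a list comprehension
# is used here before indexing.
from awips.dataaccess import DataAccessLayer
from shapely.geometry import Point

request = DataAccessLayer.newDataRequest("maps")
request.addIdentifier("geomField", "the_geom")
request.addIdentifier("table", "mapdata.states")
request.setParameters("state", "fips", "name")
states = DataAccessLayer.getGeometryData(request, None)

point = Point(-93.62, 41.60)
matches = [s for s in states if s.getGeometry().contains(point)]
print("Point is in the state of", matches[0].getString("name"))
```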

14 file diffs suppressed (one file too large to display; the rest because one or more lines are too long).

53
prep.sh
View file

@ -1,53 +0,0 @@
#!/bin/bash
# python-awips prep script
# author: mjames@ucar.edu
#
# should be /awips2/repo/python-awips or ~/python-awips
dir="$( cd "$(dirname "$0")" ; pwd -P )"
# Find plugin-contributed files and add them to the site packages.
find /awips2/repo/awips2-core/common/ -path '*/pythonPackages/dynamicserialize' \
-exec cp {} -rv ${dir} \;
find /awips2/repo/awips2-builds/edexOsgi/ -path '*/pythonPackages/dynamicserialize' \
-exec cp {} -rv ${dir} \;
#bash %{_baseline_workspace}/build.edex/opt/tools/update_dstypes.sh %{_build_root}/awips2/python/lib/python2.7/site-packages/dynamicserialize
# Update __init__.py files under dynamicserialize/dstypes/ to include
# all contributed python packages and modules within __all__ in the packages'
# __init__.py
echo "Updating dynamicserialize/dstypes"
# Update __all__ for every package under dstypes
for package in $(find dynamicserialize/dstypes -name __init__.py -printf '%h ')
do
pushd $package > /dev/null
# find non-hidden packages
subpackages=$(find . -maxdepth 1 -type d ! -name ".*" -printf '%f\n' | sort)
# find non-hidden python modules
modules=$(find . -maxdepth 1 -type f \( -name "*.py" ! -name "__init__.py" ! -name ".*" \) -printf '%f\n' | sed 's/\.py//' | sort)
# join subpackages and modules into a single list, modules first
all=("${subpackages[@]}" "${modules[@]}")
joined=$(printf ",\n \'%s\'" "${all[@]}")
#replace the current __all__ definition with the rebuilt __all__, which now includes all contributed packages and modules.
#-0777 allows us to match the multi-line __all__ definition
perl -0777 -p -i -e "s/__all__ = \[[^\]]*\]/__all__ = \[$(echo \"${joined:1}\")\n \]/g" __init__.py
popd > /dev/null
done
echo "Done"
#find ${dir} -type f | xargs sed -i 's/[ \t]*$//'
git grep -l 'ufpy' | xargs sed -i 's/ufpy/awips/g'
#find ${dir} -type f | xargs sed -i '/# This software was developed and \/ or modified by Raytheon Company,/,/# further licensing information./d'
# update import strings for python3 compliance
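
The perl one-liner above rewrites each package's `__all__` in place. A rough Python equivalent of that substitution, as a sketch with assumed names:

```python
# Hedged sketch of the perl substitution in prep.sh: replace the
# multi-line __all__ list in an __init__.py with a rebuilt list.
import re

def rewrite_all(init_path, names):
    with open(init_path) as f:
        text = f.read()
    joined = ",\n    ".join("'%s'" % n for n in names)
    text = re.sub(r"__all__ = \[[^\]]*\]",
                  "__all__ = [\n    %s\n]" % joined, text)
    with open(init_path, "w") as f:
        f.write(text)
```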

96
pypies/pypies.cfg Normal file
View file

@ -0,0 +1,96 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# config file for pypies
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 10/07/10 njensen Initial Creation.
# 01/17/13 1490 bkowal Added tcp_logger configuration
# 09/24/21 8608 mapeters Added fullDiskThreshold
#
#
[edex_data]
hdf5dir=/awips2/edex/data/hdf5
# Disk space usage percentage that should be considered full disk. All writes
# will be blocked once this disk space usage is reached.
fullDiskThreshold=99
[loggers]
keys=root,minutes,hours
[tcp_logger]
# default is based on logging.handlers.DEFAULT_TCP_LOGGING_PORT
# at the time this change was made.
logging_port=9020
[handlers]
keys=pypiesHandler,minutesHandler,hoursHandler
[formatters]
keys=pypiesFormatter
[logger_root]
level=INFO
handlers=pypiesHandler
[logger_minutes]
level=INFO
handlers=minutesHandler
propagate=0
qualname=minute
[logger_hours]
level=INFO
handlers=hoursHandler
propagate=0
qualname=hourly
[handler_pypiesHandler]
logFileDir=/awips2/pypies/logs
class=handlers.TimedRotatingFileHandler
level=NOTSET
formatter=pypiesFormatter
args=('%(logFileDir)s/pypies.log', 'midnight', 1, 7,)
[handler_minutesHandler]
logFileDir=/awips2/pypies/logs
class=handlers.TimedRotatingFileHandler
level=NOTSET
formatter=pypiesFormatter
args=('%(logFileDir)s/pypiesMinuteStats.log', 'midnight', 1, 7,)
[handler_hoursHandler]
logFileDir=/awips2/pypies/logs
class=handlers.TimedRotatingFileHandler
level=NOTSET
formatter=pypiesFormatter
args=('%(logFileDir)s/pypiesHourlyStats.log', 'midnight', 1, 7,)
[formatter_pypiesFormatter]
format=%(levelname)s %(asctime)s %(message)s
dateFmt=
class=logging.Formatter
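
pypies loads this file through `configparser` (see `pypies/config` below). A minimal read sketch, with the path assumed for illustration; the real code takes it from the `PYPIES_CFG` environment variable:

```python
# Sketch: reading pypies.cfg directly (the path is an assumption).
import configparser

scp = configparser.ConfigParser()
scp.read("/awips2/pypies/conf/pypies.cfg")
print(scp.get("edex_data", "hdf5dir"))               # /awips2/edex/data/hdf5
print(scp.getint("edex_data", "fullDiskThreshold"))  # 99
```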

View file

@ -0,0 +1,82 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# Interface for data stores
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 06/16/10 njensen Initial Creation.
# 07/30/15 1574 nabowle Add deleteOrphanFiles
# Jun 25, 2019 7885 tgurney string.join fix (Python 3)
#
#
#
import sys
import pypies
import string, inspect, traceback
class IDataStore:
def __init__(self):
pass
def store(self, request):
raise pypies.NotImplementedException()
def delete(self, request):
raise pypies.NotImplementedException()
def retrieve(self, request):
raise pypies.NotImplementedException()
def getDatasets(self, request):
raise pypies.NotImplementedException()
def retrieveDatasets(self, request):
raise pypies.NotImplementedException()
def retrieveGroups(self, request):
raise pypies.NotImplementedException()
def deleteFiles(self, request):
raise pypies.NotImplementedException()
def createDataset(self, request):
raise pypies.NotImplementedException()
def deleteOrphanFiles(self, request):
raise pypies.NotImplementedException()
def _exc():
t, v, tb = sys.exc_info()
return ' '.join(traceback.format_exception(t, v, tb))
def _line():
return inspect.currentframe().f_back.f_back.f_lineno
def _file():
return inspect.currentframe().f_back.f_back.f_code.co_filename

View file

@ -0,0 +1,113 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# Manages locking files for writing
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 07/01/10 njensen Initial Creation.
# 09/23/10 njensen Rewrote locking to use fcntl
# Jun 25, 2019 7885 tgurney Python 3 fixes
# Jan 17, 2022 8729 rjpeter Add method cleanUp
#
#
import fcntl, time, os, logging
from pypies import logger
from pypies import timeMap
MAX_TIME_TO_WAIT = 120 # seconds
def dirCheck(filename):
d = os.path.dirname(filename)
if not os.path.exists(d):
try:
os.makedirs(d)
except:
pass # could error with dir already exists based on a race condition
if not os.path.exists(filename):
# for some unknown reason, can't get locks when i open it with create flag,
# so we quickly create it and then close it so we can reopen it with RDWR
# for the lock (according to chammack this is due to underlying linux calls)
fd = os.open(filename, os.O_CREAT)
os.close(fd)
def getLock(filename, mode):
t0 = time.time()
dirCheck(filename)
gotLock = False
startTime = time.time()
nowTime = time.time()
if mode == 'w' or mode == 'a':
lockOp = fcntl.LOCK_EX
fmode = os.O_RDWR
else:
lockOp = fcntl.LOCK_SH
fmode = os.O_RDONLY
fd = os.open(filename, fmode)
lockOp = lockOp | fcntl.LOCK_NB
if logger.isEnabledFor(logging.DEBUG):
logger.debug(str(os.getpid()) +" Attempting to get lock on " + str(fd) + " mode " + mode + " " + filename)
while not gotLock and (nowTime - startTime) < MAX_TIME_TO_WAIT:
try:
fcntl.lockf(fd, lockOp)
gotLock = True
except:
time.sleep(.1)
nowTime = time.time()
if gotLock:
if logger.isEnabledFor(logging.DEBUG):
logger.debug('Got lock on ' + str(fd))
else:
if logger.isEnabledFor(logging.DEBUG):
logger.debug(str(os.getpid()) + " failed to get lock")
os.close(fd)
t1=time.time()
if 'getLock' in timeMap:
timeMap['getLock']+=t1-t0
else:
timeMap['getLock']=t1-t0
return gotLock, fd
def releaseLock(fd):
t0=time.time()
fcntl.lockf(fd, fcntl.LOCK_UN)
os.close(fd)
if logger.isEnabledFor(logging.DEBUG):
logger.debug('Released lock on ' + str(fd))
t1=time.time()
if 'releaseLock' in timeMap:
timeMap['releaseLock']+=t1-t0
else:
timeMap['releaseLock']=t1-t0
def cleanUp(basePath):
#no op
return
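
Callers pair `getLock` with `releaseLock` around file access. A usage sketch for this fcntl-based manager (the path is illustrative):

```python
# Usage sketch (illustrative path): getLock returns a flag and an
# open file descriptor; releaseLock unlocks and closes it.
gotLock, fd = getLock('/awips2/edex/data/hdf5/example/file.h5', 'w')
if gotLock:
    try:
        pass  # write to the HDF5 file here
    finally:
        releaseLock(fd)
```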

View file

@ -0,0 +1,306 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# Manages locking files for reading and writing
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 10/04/10 njensen Initial creation
# Jun 25, 2019 7885 tgurney Python 3 fixes
# Jun 08, 2021 8729 rjpeter Utilize RAM Drive
# Jan 17, 2022 8729 rjpeter Add cleanUp to handle empty directories
#
import time, os, logging
from pypies import logger
from pypies import timeMap
MAX_TIME_TO_WAIT = 120 # seconds
ORPHAN_TIMEOUT = 150 # seconds
MAX_SLEEP_TIME = 0.025
MIN_SLEEP_TIME = 0.005
readLockAppend = "_read"
writeLockAppend = "_write"
# RAM DRIVE location for quick access for lock files
lockDirPrepend = "/awips2/hdf5_locks/"
# check if directory exists and if not make it
def _dirCheck(filename):
d = os.path.dirname(filename)
if not os.path.exists(d):
try:
os.makedirs(d)
except OSError as e:
if e.errno != 17: # could error with dir already exists based on a race condition
raise e
def getLock(filename, mode):
t0 = time.time()
# rest of hdf5 depends on lock manager to make root dir for file
_dirCheck(filename)
gotLock, fpath = _getLockInternal(filename, mode)
if gotLock:
if logger.isEnabledFor(logging.DEBUG):
logger.debug('Got lock on ' + str(filename))
else:
if logger.isEnabledFor(logging.DEBUG):
logger.debug(str(os.getpid()) + " failed to get lock")
t1=time.time()
if 'getLock' in timeMap:
timeMap['getLock']+=t1-t0
else:
timeMap['getLock']=t1-t0
return gotLock, fpath
def _getLockInternal(filename, mode):
gotLock = False
lockPath = None
startTime = time.time()
if logger.isEnabledFor(logging.DEBUG):
logger.debug(str(os.getpid()) +" Attempting to get lock on mode " + mode + " " + filename)
# verify root lock dir exists
lockPath = lockDirPrepend + filename
_dirCheck(lockPath)
readLockDir = lockPath + readLockAppend
writeLockDir = lockPath + writeLockAppend
count = 0
timeElapsed = time.time() - startTime
while not gotLock and timeElapsed < MAX_TIME_TO_WAIT:
count += 1
if count % 100 == 0:
# check for orphans every 100 tries
_checkForOrphans(lockPath)
if mode == 'r':
if os.path.exists(writeLockDir):
time.sleep(_getSleepTime(timeElapsed))
timeElapsed = time.time() - startTime
else:
try:
os.mkdir(readLockDir)
except OSError as e:
# error 17 is dir already exists, which we can ignore
if e.errno != 17:
raise e
try:
# Need to check to make sure write lock wasn't created
# between last check and our dir creation
if os.path.exists(writeLockDir):
os.rmdir(readLockDir)
continue
except OSError as e:
continue #Ignore error
try:
f = open(readLockDir + '/' + str(os.getpid()) + '.pid', 'w')
gotLock = True
lockPath = f.name
except IOError as e:
if e.errno != 2:
# 2 indicates directory is gone, could have had another
# process remove the read dir before the file was created
raise e
else: # mode is 'w' or 'a'
if os.path.exists(writeLockDir):
time.sleep(_getSleepTime(timeElapsed))
timeElapsed = time.time() - startTime
else:
# make the write lock to signal reads to start queuing up
try:
os.mkdir(writeLockDir)
except OSError as e:
if e.errno == 17:
continue # different process grabbed the write lock before this one
else:
raise e
# reset start time since we got the write lock. now just need to wait
# for any ongoing reads to finish
startTime = time.time()
timeElapsed = time.time() - startTime
while os.path.exists(readLockDir) and timeElapsed < MAX_TIME_TO_WAIT:
count += 1
if count % 100 == 0:
_checkForOrphans(lockPath)
time.sleep(_getSleepTime(timeElapsed))
timeElapsed = time.time() - startTime
if not os.path.exists(readLockDir):
gotLock = True
lockPath = writeLockDir
elif timeElapsed >= MAX_TIME_TO_WAIT:
# the read lock never got released, so we don't have the write lock
# release the write lock to try and lessen impact
releaseLock(writeLockDir)
raise RuntimeError("Unable to get write lock, read locks not releasing: " + readLockDir)
else :
# read lock was created after we checked in while loop but
# read process checked write dir before we created it, release and continue
releaseLock(writeLockDir)
return gotLock, lockPath
def _getSleepTime(timeWaiting):
sleepTime = MAX_SLEEP_TIME
if timeWaiting > 0.0:
y = 1.0 / timeWaiting
sleepTime = y / 100.0
if sleepTime < MIN_SLEEP_TIME:
sleepTime = MIN_SLEEP_TIME
elif sleepTime > MAX_SLEEP_TIME:
sleepTime = MAX_SLEEP_TIME
if 'approxLockSleepTime' in timeMap:
timeMap['approxLockSleepTime']+=sleepTime
else:
timeMap['approxLockSleepTime']=sleepTime
return sleepTime
def releaseLock(lockPath):
t0=time.time()
if lockPath.endswith('.pid'):
# it was a read
os.remove(lockPath)
dirpath = lockPath[0:lockPath.rfind('/')]
try:
if len(os.listdir(dirpath)) == 0:
os.rmdir(dirpath)
except OSError as e:
if e.errno != 2 and e.errno != 39:
# error 2 is no directory exists, implying a different read release
# raced and removed it first
# error 39 is directory is not empty, implying a different read
# added a pid file after we checked the dir's number of files
logger.warn('Unable to remove read lock ' + dirpath + ': ' + str(e))
else:
# it was a write
os.rmdir(lockPath)
if logger.isEnabledFor(logging.DEBUG):
logger.debug('Released lock on ' + str(lockPath))
t1=time.time()
if 'releaseLock' in timeMap:
timeMap['releaseLock']+=t1-t0
else:
timeMap['releaseLock']=t1-t0
# Remove empty lock directories
def cleanUp(basePath):
startTime = time.time()
deletedCount = 0
for root, dirs, files in os.walk(os.path.join(lockDirPrepend, basePath), topdown=False, followlinks=False):
for d in dirs:
fullpath = os.path.join(root, d)
try:
modTime = os.path.getmtime(fullpath)
except Exception:
# Could not check dir time, most likely lock dir that was deleted, skip to next dir
continue
# only delete directories that haven't been modified in the last 10 minutes
if (startTime - modTime > 600):
try:
os.rmdir(fullpath)
deletedCount += 1
except OSError as e:
if (e.errno != os.errno.ENOTEMPTY and e.errno != os.errno.ENOENT):
logger.warn('Unable to remove empty lock directory ' + fullpath + ': ' + str(e))
endTime = time.time()
logger.info('Deleted {} empty lock directories in {:.3f} ms'.format(deletedCount, (endTime-startTime) * 1000))
def _checkForOrphans(lockPath):
if logger.isEnabledFor(logging.DEBUG):
logger.debug('Checking for orphan locks on ' + lockPath)
readLockDir = lockPath + readLockAppend
writeLockDir = lockPath + writeLockAppend
orphanRemoved = False
nowTime = time.time()
# check for read lock orphans
if os.path.exists(readLockDir):
for f in os.listdir(readLockDir):
fullpath = readLockDir + '/' + f
try:
statinfo = os.stat(fullpath)
if nowTime - ORPHAN_TIMEOUT > statinfo.st_mtime:
logger.warn("Orphan lock " + fullpath + " found and will be removed.")
os.remove(fullpath)
orphanRemoved = True
except OSError as e:
if e.errno != 2:
# 2 indicates file was removed by other process looking for orphans
logger.error("Error removing orphaned lock: " + str(e))
try:
if len(os.listdir(readLockDir)) == 0:
logger.warn("Orphan lock " + readLockDir + " found and will be removed.")
os.rmdir(readLockDir)
except OSError as e:
if e.errno != 2 and e.errno != 39:
# error 2 is no directory exists, implying a different read release
# raced and removed it first
# error 39 is directory is not empty, implying a different read
# added a pid file after we checked the dir's number of files
logger.error('Unable to remove orphaned read lock ' + readLockDir + ': ' + str(e))
# check for write lock orphans
if os.path.exists(writeLockDir):
try:
statinfo = os.stat(writeLockDir)
if nowTime - ORPHAN_TIMEOUT > statinfo.st_mtime:
logger.warn("Orphan lock " + writeLockDir + " found and will be removed.")
os.rmdir(writeLockDir)
orphanRemoved = True
except OSError as e:
# 2 indicates no such directory, assuming another process removed it
if e.errno != 2:
logger.error('Unable to remove orphaned lock: ' + str(e))
if 'orphanCheck' in timeMap:
timeMap['orphanCheck']+=(time.time() - nowTime)
else:
timeMap['orphanCheck']=(time.time() - nowTime)
return orphanRemoved
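
This second manager replaces fcntl locks with lock directories on a RAM drive; note that `releaseLock` here takes the lock path returned by `getLock`, not a file descriptor. A usage sketch (path illustrative):

```python
# Usage sketch for the directory-based lock manager (illustrative path).
gotLock, lockPath = getLock('/awips2/edex/data/hdf5/example/file.h5', 'r')
if gotLock:
    try:
        pass  # read the HDF5 file here
    finally:
        releaseLock(lockPath)
```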

122
pypies/pypies/__init__.py Normal file
View file

@ -0,0 +1,122 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# __init__.py for pypies package
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 05/27/10 njensen Initial Creation.
# 01/17/13 1490 bkowal Retrieve the hdf5 root directory
# and the logging tcp port from
# configuration.
# Jun 25, 2019 7885 tgurney Python 3 fixes
#
#
from . import IDataStore
from . import config
import sys, os
import pypies.config.pypiesConfigurationManager
def configure():
pypiesConfigurationManager = pypies.config.pypiesConfigurationManager.PypiesConfigurationManager()
if (not pypiesConfigurationManager.hasConfigurationBeenLoaded()):
# this case is unlikely
print('Failed to load the pypies configuration!')
sys.exit(-1)
return pypiesConfigurationManager
def getLogger(scp):
import logging
import logging.handlers
loggingPort = int(scp.get('tcp_logger', 'logging_port'))
logger = logging.getLogger('pypies')
logger.setLevel(logging.INFO)
socketHandler = logging.handlers.SocketHandler('localhost', loggingPort)
# don't bother with a formatter, since a socket handler sends the event as
# an unformatted pickle
logger.addHandler(socketHandler)
return logger
def getHdf5Dir(scp):
# determine the edex hdf5 root
hdf5Dir = scp.get('edex_data', 'hdf5dir')
# add a trailing directory separator (when necessary)
if (not hdf5Dir.endswith('/')):
hdf5Dir = hdf5Dir + '/'
if not os.path.exists(hdf5Dir):
os.makedirs(hdf5Dir)
infoMessage = 'using hdf5 directory: ' + hdf5Dir
logger.info(infoMessage)
return hdf5Dir
def getFullDiskThreshold(scp):
threshold = int(scp.get('edex_data', 'fullDiskThreshold'))
infoMessage = f'using full disk threshold: {threshold}%'
logger.info(infoMessage)
return threshold
pypiesCM = configure()
scp = pypiesCM.getConfiguration()
logger = getLogger(scp)
timeMap = {}
hdf5Dir = getHdf5Dir(scp)
fullDiskThreshold = getFullDiskThreshold(scp)
def pypiesWrapper(request):
from . import handlers
return lambda request: handlers.PypiesHandler(request)
class NotImplementedException(Exception):
def __init__(self, message=None):
self.message = message
def __str__(self):
if self.message:
return self.message
else:
return ""
class StorageException(Exception):
def __init__(self, message=None):
self.message = message
def __str__(self):
if self.message:
return self.message
else:
return ""

View file

@ -0,0 +1,33 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# __init__.py for hdf5 implementation
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 01/10/13 bkowal Initial Creation.
#
#
#

View file

@ -0,0 +1,69 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# Configuration for pypies logging
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 01/10/13 bkowal Initial Creation.
# 01/17/13 #1490 bkowal The location of pypies.cfg is
# now retrieved from the environment.
# Jun 25, 2019 7885 tgurney Python 3 fixes
#
import os
import configparser


class PypiesConfigurationManager:

    def __init__(self):
        self.__configLoaded = False
        self.__initConfigLocation()
        if not self.__configLoc:
            raise RuntimeError("No pypies.cfg found")
        self.__loadConfig()

    def __initConfigLocation(self):
        # the location of pypies.cfg is retrieved from the environment;
        # use .get() so a missing variable falls through to the RuntimeError
        # in __init__ instead of raising a KeyError here
        self.__configLoc = os.environ.get("PYPIES_CFG")
        if not self.__configLoc or not os.path.exists(self.__configLoc):
            print("Unable to find pypies.cfg at", self.__configLoc)
            self.__configLoc = None
        else:
            print("Found pypies.cfg at", self.__configLoc)

    def __loadConfig(self):
        # SafeConfigParser is a deprecated Python 3 alias of ConfigParser
        # (removed in Python 3.12), so use ConfigParser directly
        self.__scp = configparser.ConfigParser()
        self.__scp.read(self.__configLoc)
        self.__configLoaded = True

    def getConfigurationLocation(self):
        return self.__configLoc

    def hasConfigurationBeenLoaded(self):
        return self.__configLoaded

    def getConfiguration(self):
        return self.__scp
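
# Usage sketch (hypothetical path, illustration only): the config location is
# read from the PYPIES_CFG environment variable, e.g.
#
#     import os
#     os.environ['PYPIES_CFG'] = '/awips2/pypies/conf/pypies.cfg'  # assumed path
#     cm = PypiesConfigurationManager()
#     if cm.hasConfigurationBeenLoaded():
#         print(cm.getConfiguration().get('edex_data', 'hdf5dir'))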

pypies/pypies/handlers.py
@@ -0,0 +1,147 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# Main processing module of pypies. Receives the http request through WSGI,
# deserializes the request, processes it, and serializes the response
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 08/17/10 njensen Initial Creation.
# 01/11/13 bkowal Pypies will now read the hdf5 root from configuration
# 01/17/13 1490 bkowal Relocated the configuration of pypies
# 06/12/13 2102 njensen Raise uncaught exceptions to force http code 500
# 11/06/14 3549 njensen Log time to receive data
# 07/30/15 1574 nabowle Handle DeleteOrphansRequest
# 11/15/16 5992 bsteffen Log size
# Jun 25, 2019 7885 tgurney Python 3 fixes
# Sep 28, 2021 8608 mapeters Add special handling for certain error types
#
#
from werkzeug import Request, Response
import errno
import time
import logging
import pypies
from pypies import IDataStore
import dynamicserialize
from dynamicserialize.dstypes.com.raytheon.uf.common.pypies.request import *
from dynamicserialize.dstypes.com.raytheon.uf.common.pypies.response import *

logger = pypies.logger
timeMap = pypies.timeMap
hdf5Dir = pypies.hdf5Dir

from pypies.impl import H5pyDataStore
datastore = H5pyDataStore.H5pyDataStore()

datastoreMap = {
StoreRequest: (datastore.store, "StoreRequest"),
RetrieveRequest: (datastore.retrieve, "RetrieveRequest"),
DatasetNamesRequest: (datastore.getDatasets, "DatasetNamesRequest"),
DatasetDataRequest: (datastore.retrieveDatasets, "DatasetDataRequest"),
GroupsRequest: (datastore.retrieveGroups, "GroupsRequest"),
DeleteRequest: (datastore.delete, "DeleteRequest"),
DeleteFilesRequest: (datastore.deleteFiles, "DeleteFilesRequest"),
CreateDatasetRequest: (datastore.createDataset, "CreateDatasetRequest"),
RepackRequest: (datastore.repack, "RepackRequest"),
CopyRequest: (datastore.copy, "CopyRequest"),
DeleteOrphansRequest: (datastore.deleteOrphanFiles, "DeleteOrphansRequest")
}
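
# each entry maps a deserialized request class to a pair of
# (datastore handler method, request name used for logging);
# dispatch happens in pypies_response() below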


@Request.application
def pypies_response(request):
    timeMap.clear()
    try:
        startTime = time.time()
        try:
            data = request.data
            timeMap['receiveData'] = time.time() - startTime
            timeMap['size'] = len(data)
            startTime = time.time()
            obj = dynamicserialize.deserialize(data)
        except Exception:
            msg = 'Error deserializing request: ' + IDataStore._exc()
            logger.error(msg)
            resp = ErrorResponse()
            resp.setError(msg)
            return __prepareResponse(resp)
        timeMap['deserialize'] = time.time() - startTime

        # add the hdf5 directory path to the file name
        filename = hdf5Dir + obj.getFilename()
        obj.setFilename(filename)

        clz = obj.__class__
        if logger.isEnabledFor(logging.DEBUG):
            logger.debug(str(clz) + ": " + obj.getFilename())

        success = False
        if clz in datastoreMap:
            try:
                resp = datastoreMap[clz][0](obj)
                success = True
            except Exception as e:
                msg = ('Error processing ' + datastoreMap[clz][1] + ' on file '
                       + obj.getFilename() + ': ' + IDataStore._exc())
                logger.error(msg)
                resp = ErrorResponse(msg, __getType(e))
        else:
            msg = 'IDataStore unable to process request of type ' + str(obj.__class__)
            logger.error(msg)
            resp = ErrorResponse()
            resp.setError(msg)

        startSerialize = time.time()
        httpResp = __prepareResponse(resp)
        if success:
            endTime = time.time()
            timeMap['serialize'] = endTime - startSerialize
            timeMap['total'] = endTime - startTime
            logger.info({'request': datastoreMap[clz][1], 'time': timeMap, 'file': obj.getFilename()})
        return httpResp
    except Exception:
        # absolutely should not reach this; if we do, the code needs fixing
        logger.error("Uncaught exception! " + IDataStore._exc())
        # re-raise so PyPIES returns http error code 500
        raise


def __prepareResponse(resp):
    try:
        serializedResp = dynamicserialize.serialize(resp)
    except Exception:
        resp = ErrorResponse()
        errorMsg = 'Error serializing response: ' + IDataStore._exc()
        logger.error(errorMsg)
        resp.setError(errorMsg)
        # hopefully the error response serializes ok; if not, there is
        # nothing more we can do
        serializedResp = dynamicserialize.serialize(resp)
    return Response(serializedResp)


def __getType(exception):
    if isinstance(exception, PermissionError):
        # PermissionError is the subclass of OSError for errno EPERM and EACCES
        return 'PERMISSIONS'
    elif isinstance(exception, OSError):
        if exception.errno == errno.ENOSPC:
            return 'DISK_SPACE'
    return 'OTHER'
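
# Request-flow sketch (illustration only; assumes dynamicserialize and the
# request types above are importable, and werkzeug >= 2.0): the WSGI app can
# be exercised directly with werkzeug's test client:
#
#     from werkzeug.test import Client
#     import dynamicserialize
#
#     req = DatasetNamesRequest()
#     req.setFilename('/grid/sample.h5')    # resolved against hdf5Dir above
#     client = Client(pypies_response)
#     resp = client.post('/', data=dynamicserialize.serialize(req))
#     names = dynamicserialize.deserialize(resp.data)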

@@ -0,0 +1,140 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# Semi-port from Java, builds the result storage records from a read
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 07/21/10 njensen Initial Creation.
# 09/19/13 2309 bsteffen Fix group name in returned
# records.
# Nov 14, 2013 2393 bclement removed interpolation
# Apr 24, 2015 4425 nabowle Add DoubleDataRecord
# Jul 27, 2015 4402 njensen return fill value of None if fill_time_never
# Feb 16, 2016 3857 tgurney Handle lowercase compression type (e.g. "lzf")
# Feb 16, 2016 3857 tgurney Add min index to slab retrieval response
# Sep 19, 2018 7435 ksunil Eliminate compression/decompression on HDF5
# May 22, 2019 7847 bsteffen Always return NONE for compression type of requested data.
# Jun 25, 2019 7885 tgurney Python 3 fixes
# Jan 28, 2020 7985 ksunil Removed the compression changes introduced in 7435
# Jun 08, 2022 8866 mapeters Set max sizes on record in createStorageRecord()
#
import numpy
import pypies
import logging
import time
from h5py import h5d
from dynamicserialize.dstypes.com.raytheon.uf.common.datastorage import *
from dynamicserialize.dstypes.com.raytheon.uf.common.datastorage.records import *

logger = pypies.logger
timeMap = pypies.timeMap

typeToClassMap = {
numpy.int8: ByteDataRecord,
numpy.int16: ShortDataRecord,
numpy.int32: IntegerDataRecord,
numpy.int64: LongDataRecord,
numpy.float32: FloatDataRecord,
numpy.float64: DoubleDataRecord,
numpy.object_: StringDataRecord,
numpy.string_: StringDataRecord
}
def createStorageRecord(rawData, ds, req):
"""
Create and return new storage record.
Args:
rawData: numpy.ndarray object
ds: h5py Dataset object
req: Request object
Returns:
*DataRecord object, depends on type of data requested.
"""
t0 = time.time()
t = typeToClassMap[rawData.dtype.type]
inst = t()
name = ds.name
parentName = '/'
slashIndex = name.rfind('/')
if slashIndex > -1:
parentName = name[0:slashIndex].replace('::', '/')
name = name[slashIndex+1:]
inst.setName(name)
inst.setGroup(parentName)
inst.putDataObject(rawData)
inst.setDimension(len(ds.shape))
if logger.isEnabledFor(logging.DEBUG):
logger.debug("rawData.shape " + str(rawData.shape))
sizes = [int(x) for x in reversed(rawData.shape)]
inst.setSizes(sizes)
if ds.maxshape:
maxSizes = [int(x) if x else 0 for x in reversed(ds.maxshape)]
inst.setMaxSizes(maxSizes)
fillValue = None
if (ds._dcpl.get_fill_time() != h5d.FILL_TIME_NEVER and
rawData.dtype.type != numpy.object_ and
rawData.dtype.type != numpy.string_):
fillValue = numpy.zeros((1,), rawData.dtype)
ds._dcpl.get_fill_value(fillValue)
inst.setFillValue(fillValue[0])
attrs = {}
for key in ds.attrs.keys():
try:
val = ds.attrs[key]
if type(val) is numpy.ndarray:
val = val[0]
attrs[key] = val
except TypeError:
# Skip over attributes that h5py cannot map into numpy datatypes
pass
inst.setDataAttributes(attrs)
props = StorageProperties()
if ds.chunks:
props.setChunked(True)
else:
props.setChunked(False)
props.setCompression('NONE')
inst.setProps(props)
minIndex = req.getMinIndexForSlab()
if minIndex is not None:
inst.setMinIndex(numpy.array(list(minIndex), numpy.int64))
    t1 = time.time()
    timeMap['createRecord'] = timeMap.get('createRecord', 0.0) + (t1 - t0)
    return inst
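
# Sketch (hypothetical names, illustration only) of how a record is built
# from an h5py dataset; note the sizes come back reversed relative to the
# numpy shape:
#
#     import h5py, numpy
#     with h5py.File('/tmp/example.h5', 'w') as f:
#         ds = f.create_dataset('T', data=numpy.zeros((2, 3), numpy.float32))
#         rec = createStorageRecord(ds[()], ds, req)  # req: a deserialized read request
#         # rec is a FloatDataRecord with sizes [3, 2]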

File diff suppressed because it is too large
@@ -0,0 +1,130 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# Semi-port from Java HDF5OpManager, handles read requests
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 07/20/10 njensen Initial Creation.
# 02/20/13 DR 15662 M.Porricelli Modified __do2DPointRequest
# to check for null points
# 09/19/2018 7435 ksunil Eliminate compression/decompression on HDF5
# 04/30/2019 7814 dgilling Make compatible with h5py 2.x.
# Jun 25, 2019 7885 tgurney Python 3 fixes
# 09/17/2019 7930 randerso Fix slab request performance
# Oct 30, 2019 7962 tgurney Fix 2D point requests
# Nov 5, 2019 7885 tgurney Fix 1D point requests
# 01/28/2020 7985 ksunil Removed the compression changes introduced in 7435
# Jul 20, 2021 8594 dgilling Prevent 1D point requests from modifying state of
# input request.
#
import numpy
import pypies
import logging
import time
import h5py

from pypies import NotImplementedException

logger = pypies.logger
timeMap = pypies.timeMap

def read(ds, request):
    t0 = time.time()
rt = request.getType()
if logger.isEnabledFor(logging.DEBUG):
logger.debug('requestType=' + rt)
result = None
indices = request.getIndices()
if rt == 'ALL':
if ds.len():
result = ds[()]
else:
result = numpy.zeros((0,), ds.dtype.type)
elif rt == 'POINT':
points = request.getPoints()
ndims = len(ds.shape)
if ndims == 1:
indices = []
for pt in points:
indices.append(pt.getX())
result = __do1DPointRequest(ds, indices)
elif ndims == 2:
result = __do2DPointRequest(ds, points)
elif rt == 'XLINE':
# if a line query was used, but it's only 1d, this is really
# a point query. We could use hyperslabs to do this, but
# it would be a lot slower than a regular point query.
if len(ds.shape) == 1:
result = __do1DPointRequest(ds, indices)
else:
result = ds[:, sorted(indices)]
elif rt == 'YLINE':
# if a line query was used, but it's only 1d, this is really
# a point query. We could use hyperslabs to do this, but
# it would be a lot slower than a regular point query.
if len(ds.shape) == 1:
result = __do1DPointRequest(ds, indices)
else:
result = ds[sorted(indices)]
    elif rt == 'SLAB':
        minIndex = request.getMinIndexForSlab()
        maxIndex = request.getMaxIndexForSlab()
        # build the slices in reverse order, since the request indices are
        # ordered (x, y, ...) while the dataset dimensions are (..., y, x)
        slices = []
        for i in reversed(range(len(minIndex))):
            slices.append(slice(minIndex[i], maxIndex[i]))
        # slice out the requested slab
        result = ds[tuple(slices)]
else:
raise NotImplementedException('Only read requests supported are ' +
'ALL, POINT, XLINE, YLINE, and SLAB')
    t1 = time.time()
    timeMap['read'] = timeMap.get('read', 0.0) + (t1 - t0)
    return result


def __do1DPointRequest(ds, indices):
    # copy so we don't modify the state of the input request (ticket 8594)
    points = numpy.array(indices, copy=True)
    points.resize(len(indices), 1)
    sel = h5py._hl.selections.PointSelection(ds.shape)
    sel.set(points)
    return ds[sel]


def __do2DPointRequest(ds, points):
    # skip null points; each selected coordinate is (y, x)
    arr = numpy.asarray(tuple((pt.getY(), pt.getX()) for pt in points if pt))
    sel = h5py._hl.selections.PointSelection(ds.shape)
    sel.set(arr)
    return ds[sel]
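
# Worked example (illustration only): for a dataset of shape (ny, nx), a SLAB
# request with minIndex=(x0, y0) and maxIndex=(x1, y1) becomes, after the
# index reversal above, the hyperslab ds[y0:y1, x0:x1]; a 2D POINT request
# selects ds[y, x] for each (x, y) point via h5py's PointSelection.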

@@ -0,0 +1,35 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# __init__.py for hdf5 implementation
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# 05/27/10 njensen Initial Creation.
#
#
#

Some files were not shown because too many files have changed in this diff.