Compare commits

...

No commits in common. "unidata_20.3.2" and "master_18.1.1_orphan" have entirely different histories.

8198 changed files with 462330 additions and 681908 deletions

.DS_Store vendored (binary file not shown)

.gitattributes vendored (new file, 1 line added)

@@ -0,0 +1 @@
./localization/localization.OAX/utility/common_static/site/OAX/shapefiles/FFMP/FFMP_aggr_basins.shp filter=lfs diff=lfs merge=lfs -text
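The single line added to `.gitattributes` above routes that shapefile through Git LFS. As a minimal sketch (with a hypothetical path; normally `git lfs track` appends this line for you), an equivalent entry can be produced and checked like so:

```shell
# Sketch: reproduce a Git LFS attribute entry like the one above.
# The canonical way is `git lfs track "path/to/file.shp"`, which appends
# an equivalent line; here we write one by hand with a hypothetical path.
mkdir -p /tmp/lfs-demo
printf '%s filter=lfs diff=lfs merge=lfs -text\n' 'shapefiles/FFMP_demo.shp' > /tmp/lfs-demo/.gitattributes
cat /tmp/lfs-demo/.gitattributes
```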


@@ -1,56 +0,0 @@
name: publish mkdocs to github pages
on:
workflow_dispatch:
push:
branches:
- unidata_20.3.2
paths:
- 'docs/**'
- 'mkdocs.yml'
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Python and mkdocs
uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Update pip
run: |
# install pip>=20.1 to use "pip cache dir"
python3 -m pip install --upgrade pip
- name: Create mkdocs_requirements.txt
run: |
echo "markdown==3.3.4" >> mkdocs_requirements.txt
echo "mkdocs==1.3.0" >> mkdocs_requirements.txt
echo "mkdocs-unidata" >> mkdocs_requirements.txt
echo "fontawesome_markdown" >> mkdocs_requirements.txt
- name: Get pip cache dir
id: pip-cache
run: echo "::set-output name=dir::$(pip cache dir)"
- name: Cache dependencies
uses: actions/cache@v1
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-pip-${{ hashFiles('**/mkdocs_requirements.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Install python dependencies
run: python3 -m pip install -r ./mkdocs_requirements.txt
- run: mkdocs build
- name: Deploy to gh-pages
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./site


@@ -1,45 +0,0 @@
name: update station info v20
on:
workflow_dispatch:
schedule:
- cron: "0 7 * * *"
jobs:
update_ndm:
runs-on: ubuntu-latest
environment:
name: VLAB
steps:
# Install svn since it is no longer included by default in ubuntu-latest (ubuntu-24.04 image)
- name: Install svn package
run: |
sudo apt-get update
sudo apt-get install subversion
# Checkout this repo
# this gets the latest code (and is run on the default branch)
- name: Checkout awips2
uses: actions/checkout@v3
with:
ref: unidata_20.3.2
# Do individual pulls for all the files in the ndm directory
- name: Pull latest from vlab svn repo
run: |
cd rpms/awips2.edex/Installer.edex/ndm/
for file in *; do
svn export --force https://vlab.noaa.gov/svn/awips-ndm/trunk/"$file" --username ${{ secrets.VLAB_UNAME }} --password ${{ secrets.VLAB_PASS }}
done
# Check in all the new files
# Only do a git add/commit/push if files have changed
- name: Update existing NDM files for awips2 repo
run: |
date=`date +%Y%m%d-%H:%M:%S`
git config user.name $GITHUB_ACTOR
git config user.email $GITHUB_ACTOR@users.noreply.github.com
change=`git diff`
if [[ ! -z "$change" ]]
then
git add --all
git commit -m "New NDM updates on $date - autogenerated"
git push
fi

.gitignore vendored (6 changed lines)

@@ -10,8 +10,4 @@ bin-test/
*.pyc
*.o
*.orig
__pycache__
build/awips-ade/RPMS/
build/logs/
cave/com.raytheon.viz.ui.personalities.awips/splash.bmp
dist/el7*

LICENSE (13 lines removed)

@@ -1,13 +0,0 @@
Copyright 2021 University Corporation for Atmospheric Research
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,45 +0,0 @@
# NSF Unidata AWIPS
[https://www.unidata.ucar.edu/software/awips/](https://www.unidata.ucar.edu/software/awips/)
[![GitHub release](https://img.shields.io/github/release/Unidata/awips2/all.svg)]()
The Advanced Weather Interactive Processing System (AWIPS) is a meteorological software package used for decoding, displaying, and analyzing data. It was originally developed for the National Weather Service (NWS) by Raytheon. The NSF Unidata Program Center (UCP) at UCAR develops and supports a modified, non-operational version of AWIPS for use in research and education by academic institutions. It is released as open source software, free to download and use.
AWIPS takes a unified approach to data ingest, where most data ingested into the system comes through the LDM client pulling data feeds from the [NSF Unidata IDD](https://www.unidata.ucar.edu/projects/#idd). Various raw data and product files (netCDF, grib, BUFR, ASCII text, gini, AREA) are decoded and stored as HDF5 files and Postgres database entries by [EDEX](docs/install/install-edex), which serves products and data over http.
Unidata supports two data visualization frameworks: [CAVE](docs/install/install-cave) (an Eclipse-built Java application which runs on Linux, Mac, and Windows), and [python-awips](docs/python/overview) (a programmatic API written as a python package).
> **Note**: Our version of CAVE is a **non-operational** version. It does not support some features of NWS AWIPS: warnings and alerts cannot be issued from our builds of CAVE, and other functionality may be unavailable as well.
![CAVE](https://unidata.github.io/awips2/images/Unidata_AWIPS2_CAVE.png)
---
## License
NSF Unidata AWIPS source code and binaries (RPMs) are considered to be in the public domain, meaning there are no restrictions on any download, modification, or distribution in any form (original or modified). NSF Unidata AWIPS license information can be found [here](./LICENSE).
---
## AWIPS Data in the Cloud
NSF Unidata and XSEDE Jetstream have partnered to offer an EDEX data server in the cloud, open to the community. Select the server in the Connectivity Preferences dialog, or enter **`edex-cloud.unidata.ucar.edu`** (without *http://* before, or *:9581/services* after).
![EDEX in the cloud](docs/images/connectWindow.png)
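The hostname above is all CAVE needs; the Connectivity Preferences dialog supplies the scheme, port, and path itself. A tiny sketch assembling the full services endpoint from the pieces the note says to omit:

```shell
# CAVE's Connectivity Preferences takes the bare hostname and builds the
# full EDEX services URL from it (http:// prefix, :9581/services suffix).
host="edex-cloud.unidata.ucar.edu"
endpoint="http://${host}:9581/services"
echo "$endpoint"
```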
# Documentation - http://unidata.github.io/awips2/
Popular Pages:
* [NSF Unidata AWIPS User Manual](http://unidata.github.io/awips2/)
* [How to Install CAVE](http://unidata.github.io/awips2/install/install-cave)
* [How to Install EDEX](http://unidata.github.io/awips2/install/install-edex)
* [Common Problems with AWIPS](http://unidata.github.io/awips2/appendix/common-problems)
* [Educational Resources](http://unidata.github.io/awips2/appendix/educational-resources)
* [python-awips Data Access Framework](http://unidata.github.io/python-awips/)
* [awips2-users Mailing List Archives](https://www.unidata.ucar.edu/mailing_lists/archives/awips2-users/)
* [(click to subscribe)](mailto:awips2-users-join@unidata.ucar.edu)


@@ -1,7 +0,0 @@
[awips2repo]
name=AWIPS II Repository
baseurl=https://downloads.unidata.ucar.edu/awips2/current/linux/rpms/el7/
enabled=1
protect=0
gpgcheck=0
proxy=_none_


@@ -1,461 +0,0 @@
#!/bin/bash
# about: AWIPS install manager
# devorg: Unidata Program Center
# author: Michael James, Tiffany Meyer
# maintainer: <support-awips@unidata.ucar.edu>
# Date Updated: 7/5/2023
# use: ./awips_install.sh (--cave|--edex|--database|--ingest|--help)
dir="$( cd "$(dirname "$0")" ; pwd -P )"
usage="$(basename "$0") [-h] (--cave|--edex|--database|--ingest) #script to install Unidata AWIPS components.\n
-h, --help show this help text\n
--cave install CAVE for x86_64 Linux\n
--edex, --server install EDEX Standalone Server x86_64 Linux\n
--database install EDEX Request/Database x86_64 Linux\n
--ingest install EDEX Ingest Node Server x86_64 Linux\n"
function stop_edex_services {
for srvc in edex_ldm edex_camel qpidd httpd-pypies edex_postgres ; do
if [ -f /etc/init.d/$srvc ]; then
service $srvc stop
fi
done
}
function check_yumfile {
if [[ $(grep "release 7" /etc/redhat-release) ]]; then
repofile=awips2.repo
else
echo "You need to be running CentOS7 or RedHat7"
exit
fi
if [ -f /etc/yum.repos.d/awips2.repo ]; then
date=$(date +%Y%m%d-%H:%M:%S)
cp /etc/yum.repos.d/awips2.repo /etc/yum.repos.d/awips2.repo-${date}
fi
wget_url="https://downloads.unidata.ucar.edu/awips2/20.3.2/linux/${repofile}"
echo "wget -O /etc/yum.repos.d/awips2.repo ${wget_url}"
wget -O /etc/yum.repos.d/awips2.repo ${wget_url}
sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/awips2.repo
yum clean all --enablerepo=awips2repo --disablerepo="*" 1>> /dev/null 2>&1
yum --enablerepo=awips2repo clean metadata
}
function check_limits {
if [[ ! $(grep awips /etc/security/limits.conf) ]]; then
echo "Checking /etc/security/limits.conf for awips: Not found. Adding..."
printf "awips soft nproc 65536\nawips soft nofile 65536\n" >> /etc/security/limits.conf
fi
}
function check_epel {
if [[ ! $(rpm -qa | grep epel-release) ]]; then
yum install epel-release -y
yum clean all
fi
}
function check_wget {
if ! [[ $(rpm -qa | grep ^wget) ]]; then
# install wget if not installed
yum install wget -y
fi
}
function check_rsync {
if ! [[ $(rpm -qa | grep ^rsync) ]]; then
# install rsync if not installed
yum install rsync -y
fi
}
function check_netcdf {
if [[ $(rpm -qa | grep netcdf-AWIPS) ]]; then
# replaced by epel netcdf(-devel) pkgs in 17.1.1-5 so force remove
yum remove netcdf-AWIPS netcdf netcdf-devel -y
fi
}
function check_git {
if ! [[ $(rpm -qa | grep ^git-[12]) ]]; then
# install git if not installed
yum install git -y
fi
}
function check_cave {
if [[ $(rpm -qa | grep awips2-cave-20) ]]; then
echo $'\n'CAVE is currently installed and needs to be removed before installing.
pkill cave.sh
pkill -f 'cave/cave.sh'
remove_cave
fi
check_edex
if [[ $(rpm -qa | grep awips2-cave-18) ]]; then
while true; do
read -p "Version 18.* of CAVE is currently installed and needs to be removed before installing the Beta Version 20.* of CAVE. Do you wish to remove CAVE? (Please type yes or no) `echo $'\n> '`" yn
case $yn in
[Yy]* ) remove_cave; break;;
[Nn]* ) echo "Exiting..."; exit;;
* ) echo "Please answer yes or no"
esac
done
fi
}
function check_cave {
if [[ $(rpm -qa | grep awips2-cave) ]]; then
echo $'\n'CAVE is currently installed and needs to be removed before installing.
pkill cave.sh
pkill -f 'cave/run.sh'
remove_cave
fi
}
function remove_cave {
yum groupremove awips2-cave -y
if [[ $(rpm -qa | grep awips2-cave) ]]; then
echo "
=================== FAILED ===========================
Something went wrong with the un-install of CAVE
and packages are still installed. Once the CAVE
group has been successfully uninstalled, you can try
running this script again.
Try running a \"yum grouplist\" to see if the AWIPS
CAVE group is still installed and then do a
\"yum groupremove [GROUP NAME]\".
ex. yum groupremove 'AWIPS EDEX Server'
You may also need to run \"yum groups mark
remove [GROUP NAME]\"
ex. yum groups mark remove 'AWIPS CAVE'"
exit
else
dir=cave
echo "Removing /awips2/$dir"
rm -rf /awips2/$dir
rm -rf /home/awips/caveData
fi
}
function check_edex {
if [[ $(rpm -qa | grep awips2-edex) ]]; then
echo "found EDEX RPMs installed. The current EDEX needs to be removed before installing."
check_remove_edex
else
if [ -d /awips2/database/data/ ]; then
echo "cleaning up /awips2/database/data/ for new install..."
rm -rf /awips2/database/data/
fi
fi
for dir in /awips2/tmp /awips2/data_store ; do
if [ ! -d $dir ]; then
echo "creating $dir"
mkdir -p $dir
chown awips:fxalpha $dir
fi
done
if getent passwd awips &>/dev/null; then
echo -n ''
else
echo
echo "--- user awips does not exist"
echo "--- installation will continue but EDEX services may not run as intended"
fi
}
function check_remove_edex {
while true; do
read -p "Do you wish to remove EDEX? (Please type yes or no) `echo $'\n> '`" yn
case $yn in
[Yy]* ) remove_edex; break;;
[Nn]* ) echo "Exiting..."; exit;;
* ) echo "Please answer yes or no"
esac
done
}
function calcLogSpace {
a=("$@")
logDiskspace=0
for path in "${a[@]}" ; do
if [ -d $path ] || [ -f $path ]; then
out=`du -sk $path | cut -f1`
logDiskspace=$((logDiskspace + $out))
fi
done
logDiskspace=$(echo "scale=8;$logDiskspace*.000000953674316" | bc)
}
function calcConfigSpace {
a=("$@")
configDiskspace=0
for path in "${a[@]}" ; do
if [ -d $path ] || [ -f $path ]; then
out=`du -sk $path | cut -f1`
configDiskspace=$((configDiskspace + $out))
fi
done
configDiskspace=$(echo "scale=8;$configDiskspace*.000000953674316" | bc)
}
function backupLogs {
a=("$@")
log_backup_dir=${backup_dir}/awips2_backup_${ver}_${date}/logs
if [[ ! -d ${log_backup_dir} ]]; then
mkdir -p ${log_backup_dir}
fi
echo "Backing up to $log_backup_dir"
for path in "${a[@]}" ; do
if [ -d $path ] || [ -f $path ]; then
rsync -apR $path $log_backup_dir
fi
done
}
function backupConfigs {
a=("$@")
config_backup_dir=${backup_dir}/awips2_backup_${ver}_${date}/configs
if [[ ! -d $config_backup_dir ]]; then
mkdir -p $config_backup_dir
fi
echo "Backing up to $config_backup_dir"
for path in "${a[@]}" ; do
if [ -d $path ] || [ -f $path ]; then
rsync -apR $path $config_backup_dir
fi
done
}
function remove_edex {
logPaths=("/awips2/edex/logs" "/awips2/httpd_pypies/var/log/httpd/" "/awips2/database/data/pg_log/" "/awips2/qpid/log/" "/awips2/ldm/logs/")
configPaths=("/awips2/database/data/pg_hba*conf" "/awips2/edex/data/utility" "/awips2/edex/bin" "/awips2/ldm/etc" "/awips2/ldm/dev" "/awips2/edex/conf" "/awips2/edex/etc" "/usr/bin/edex" "/etc/init*d/edexServiceList" "/var/spool/cron/awips")
while true; do
read -p "`echo $'\n'`Please make a selction for what you would like backed up. If you choose not to back up files you will lose all your configurations:
1. logs
2. configs
3. both logs and configs
4. none
`echo $'\n> '`" backup_ans
#User chooses to back up files
if [[ $backup_ans =~ [1-3] ]]; then
echo "ANSWER: $backup_ans"
while true; do
read -p "`echo $'\n'`What location do you want your files backed up to? `echo $'\n> '`" backup_dir
if [ ! -d $backup_dir ]; then
echo "$backup_dir does not exist, enter a path that exists"
else
#Check to see if user has enough space to backup
backupspace=`df -k --output=avail "$backup_dir" | tail -n1`
backupspace=$(echo "scale=8;$backupspace*.000000953674316" | bc)
date=$(date +'%Y%m%d-%H:%M:%S')
echo "Checking to see which version of AWIPS is installed..."
rpm=`rpm -qa | grep awips2-[12]`
IFS='-' str=(${rpm})
IFS=. str2=(${str[2]})
vers="${str[1]}-${str2[0]}"
ver="${vers//[.]/-}"
if [ $backup_ans = 1 ]; then
calcLogSpace "${logPaths[@]}"
#Don't let user backup data if there isn't enough space
if (( $(echo "$logDiskspace > $backupspace" | bc ) )); then
printf "You do not have enough disk space to backup this data to $backup_dir. You only have %.2f GB free and need %.2f GB.\n" $backupspace $logDiskspace
#Backup logs
else
backupLogs "${logPaths[@]}"
printf "%.2f GB of logs were backed up to $backup_dir \n" "$logDiskspace"
fi
elif [ $backup_ans = 2 ]; then
calcConfigSpace "${configPaths[@]}"
#Don't let user backup data if there isn't enough space
if (( $(echo "$configDiskspace > $backupspace" | bc ) )); then
printf "You do not have enough disk space to backup this data to $backup_dir. You only have %.2f GB free and need %.2f GB.\n" $backupspace $configDiskspace
#Backup logs
else
backupConfigs "${configPaths[@]}"
printf "%.2f GB of configs were backed up to $backup_dir \n" "$configDiskspace"
fi
elif [ $backup_ans = 3 ]; then
calcLogSpace "${logPaths[@]}"
calcConfigSpace "${configPaths[@]}"
configLogDiskspace=$( echo "$logDiskspace+$configDiskspace" | bc)
#Don't let user backup data if there isn't enough space
if (( $(echo "$configLogDiskspace > $backupspace" | bc ) )); then
printf "You do not have enough disk space to backup this data to $backup_dir . You only have %.2f GB free and need %.2f GB.\n" $backupspace $configLogDiskspace
#Backup logs
else
backupLogs "${logPaths[@]}"
backupConfigs "${configPaths[@]}"
printf "%.2f GB of logs and configs were backed up to $backup_dir \n" "$configLogDiskspace"
fi
fi
break
fi
done
break
#User chooses not to back up any files
elif [ $backup_ans = 4 ]; then
while true; do
read -p "`echo $'\n'`Are you sure you don't want to back up any AWIPS configuration or log files? Type \"yes\" to confirm, \"no\" to select a different backup option, or \"quit\" to exit` echo $'\n> '`" answer
answer=$(echo $answer | tr '[:upper:]' '[:lower:]')
if [ $answer = yes ] || [ $answer = y ]; then
break 2 ;
elif [ $answer = quit ] || [ $answer = q ]; then
exit;
elif [ $answer = no ] || [ $answer = n ]; then
break
fi
done
#User did not make a valid selection
else
echo "Please make a valid selection (1, 2, 3, or 4)"
fi
done
FILE="/opt/bin/logarchival/edex_upgrade.pl"
if test -f "$FILE"; then
echo "Running /opt/bin/logarchival/edex_upgrade.pl and logging to /home/awips/crons/logarchival/general"
/opt/bin/logarchival/edex_upgrade.pl >> /home/awips/crons/logarchival/general
fi
if [[ $(rpm -qa | grep awips2-cave) ]]; then
echo "CAVE is also installed, now removing EDEX and CAVE"
pkill cave.sh
pkill -f 'cave/run.sh'
rm -rf /home/awips/caveData
else
echo "Now removing EDEX"
fi
yum groupremove awips2-server awips2-database awips2-ingest awips2-cave -y
yum remove awips2-* -y
if [[ $(rpm -qa | grep awips2 | grep -v cave) ]]; then
echo "
=================== FAILED ===========================
Something went wrong with the un-install of EDEX
and packages are still installed. Once the EDEX
groups have been successfully uninstalled, you can try
running this script again.
Try running a \"yum grouplist\" to see which AWIPS
group is still installed and then do a
\"yum groupremove [GROUP NAME]\".
ex. yum groupremove 'AWIPS EDEX Server'
You may also need to run \"yum groups mark
remove [GROUP NAME]\"
ex. yum groups mark remove 'AWIPS EDEX Server'"
exit
else
awips2_dirs=("cave" "data" "database" "data_store" "edex" "hdf5" "httpd_pypies" "java" "ldm" "postgres" "psql" "pypies" "python" "qpid" "tmp" "tools" "yajsw")
for dir in ${awips2_dirs[@]}; do
if [ $dir != dev ] ; then
echo "Removing /awips2/$dir"
rm -rf /awips2/$dir
fi
done
fi
}
function check_users {
if ! getent group "fxalpha" >/dev/null 2>&1; then
groupadd fxalpha
fi
if ! id "awips" >/dev/null 2>&1; then
useradd -G fxalpha awips
fi
}
function server_prep {
check_users
check_yumfile
stop_edex_services
check_limits
check_netcdf
check_wget
check_rsync
check_edex
check_git
check_epel
}
function disable_ndm_update {
crontab -u awips -l >cron_backup
crontab -u awips -r
sed -i -e 's/30 3 \* \* \* \/bin\/perl \/awips2\/dev\/updateNDM.pl/#30 3 \* \* \* \/bin\/perl \/awips2\/dev\/updateNDM.pl/' cron_backup
crontab -u awips cron_backup
rm cron_backup
}
function cave_prep {
check_users
check_yumfile
check_cave
check_netcdf
check_wget
check_epel
rm -rf /home/awips/caveData
}
if [ $# -eq 0 ]; then
key="-h"
else
key="$1"
fi
case $key in
--cave)
cave_prep
yum groupinstall awips2-cave -y 2>&1 | tee -a /tmp/awips-install.log
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/awips2.repo
echo "CAVE has finished installing, the install log can be found in /tmp/awips-install.log"
;;
--server|--edex)
server_prep
yum groupinstall awips2-server -y 2>&1 | tee -a /tmp/awips-install.log
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/awips2.repo
sed -i 's/@LDM_PORT@/388/' /awips2/ldm/etc/registry.xml
echo "EDEX server has finished installing, the install log can be found in /tmp/awips-install.log"
;;
--database)
server_prep
yum groupinstall awips2-database -y 2>&1 | tee -a /tmp/awips-install.log
disable_ndm_update
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/awips2.repo
sed -i 's/@LDM_PORT@/388/' /awips2/ldm/etc/registry.xml
echo "EDEX database has finished installing, the install log can be found in /tmp/awips-install.log"
;;
--ingest)
server_prep
yum groupinstall awips2-ingest -y 2>&1 | tee -a /tmp/awips-install.log
disable_ndm_update
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/awips2.repo
sed -i 's/@LDM_PORT@/388/' /awips2/ldm/etc/registry.xml
echo "EDEX ingest has finished installing, the install log can be found in /tmp/awips-install.log"
;;
-h|--help)
echo -e $usage
exit
;;
esac
PATH=$PATH:/awips2/edex/bin/
exit
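The free-space guard in `remove_edex` above converts `du -sk` kilobytes to GB by multiplying by `.000000953674316` (approximately 1/1024²) with `bc`, then refuses the backup when the needed size exceeds the free size. A standalone sketch of the same guard, using illustrative numbers rather than real `du`/`df` output; comparing in integer KB is equivalent to the script's `bc` comparison in GB:

```shell
# Sketch of remove_edex's free-space check. The script does the comparison
# in GB via bc; since both sides use the same KB->GB factor, comparing the
# raw kilobyte counts gives the same verdict.
needed_kb=$((2 * 1024 * 1024))   # pretend the logs total 2 GB
free_kb=$((10 * 1024 * 1024))    # pretend 10 GB are free in backup_dir
if [ "$needed_kb" -gt "$free_kb" ]; then
  verdict="not enough space"
else
  verdict="ok to back up"
fi
echo "$verdict"
```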


@@ -1,463 +0,0 @@
#!/bin/bash
# about: AWIPS install manager
# devorg: Unidata Program Center
# author: Michael James, Tiffany Meyer
# maintainer: <support-awips@unidata.ucar.edu>
# Date Updated: 2/16/2024
# use: ./awips_install.sh (--cave|--edex|--database|--ingest|--help)
dir="$( cd "$(dirname "$0")" ; pwd -P )"
usage="$(basename "$0") [-h] (--cave|--edex|--database|--ingest) #script to install Unidata AWIPS components.\n
-h, --help show this help text\n
--cave install CAVE for x86_64 Linux\n
--edex, --server install EDEX Standalone Server x86_64 Linux\n
--database install EDEX Request/Database x86_64 Linux\n
--ingest install EDEX Ingest Node Server x86_64 Linux\n"
function stop_edex_services {
for srvc in edex_ldm edex_camel qpidd httpd-pypies edex_postgres ; do
if [ -f /etc/init.d/$srvc ]; then
service $srvc stop
fi
done
}
function check_yumfile {
if [[ $(grep "release 7" /etc/redhat-release) ]]; then
repofile=awips2.repo
else
echo "You need to be running CentOS7 or RedHat7"
exit
fi
if [ -f /etc/yum.repos.d/awips2.repo ]; then
date=$(date +%Y%m%d-%H:%M:%S)
cp /etc/yum.repos.d/awips2.repo /etc/yum.repos.d/awips2.repo-${date}
fi
wget_url="https://downloads.unidata.ucar.edu/awips2/current/linux/${repofile}"
#echo "wget -O /etc/yum.repos.d/awips2.repo ${wget_url}"
#wget -O /etc/yum.repos.d/awips2.repo ${wget_url}
sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/awips2.repo
yum --enablerepo=awips2repo --disablerepo="*" --disableexcludes=main clean all 1>> /dev/null 2>&1
yum --enablerepo=awips2repo --disableexcludes=main clean metadata
}
function check_limits {
if [[ ! $(grep awips /etc/security/limits.conf) ]]; then
echo "Checking /etc/security/limits.conf for awips: Not found. Adding..."
printf "awips soft nproc 65536\nawips soft nofile 65536\n" >> /etc/security/limits.conf
fi
}
function check_epel {
if [[ ! $(rpm -qa | grep epel-release) ]]; then
yum install epel-release -y
yum clean all
fi
}
function check_wget {
if ! [[ $(rpm -qa | grep ^wget) ]]; then
# install wget if not installed
yum install wget -y
fi
}
function check_rsync {
if ! [[ $(rpm -qa | grep ^rsync) ]]; then
# install rsync if not installed
yum install rsync -y
fi
}
function check_netcdf {
if [[ $(rpm -qa | grep netcdf-AWIPS) ]]; then
# replaced by epel netcdf(-devel) pkgs in 17.1.1-5 so force remove
yum remove netcdf-AWIPS netcdf netcdf-devel -y
fi
}
function check_git {
if ! [[ $(rpm -qa | grep ^git-[12]) ]]; then
# install git if not installed
yum install git -y
fi
}
function check_wgrib2 {
if ! [[ $(rpm -qa | grep ^wgrib2) ]]; then
# install wgrib2 if not installed
yum install wgrib2 -y
fi
}
function check_cave {
if [[ $(rpm -qa | grep awips2-cave-20) ]]; then
echo $'\n'CAVE is currently installed and needs to be removed before installing.
pkill cave.sh
pkill -f 'cave/cave.sh'
remove_cave
fi
if [[ $(rpm -qa | grep awips2-cave-18) ]]; then
while true; do
pkill run.sh
pkill -f 'cave/run.sh'
read -p "Version 18.* of CAVE is currently installed and needs to be removed before installing the Beta Version 20.* of CAVE. Do you wish to remove CAVE? (Please type yes or no) `echo $'\n> '`" yn
case $yn in
[Yy]* ) remove_cave; break;;
[Nn]* ) echo "Exiting..."; exit;;
* ) echo "Please answer yes or no"
esac
done
fi
}
function remove_cave {
yum --disableexcludes=main groupremove awips2-cave -y
#yum remove awips2-* -y
if [[ $(rpm -qa | grep awips2-cave) ]]; then
echo "
=================== FAILED ===========================
Something went wrong with the un-install of CAVE
and packages are still installed. Once the CAVE
group has been successfully uninstalled, you can try
running this script again.
Try running a \"yum grouplist\" to see if the AWIPS
CAVE group is still installed and then do a
\"yum groupremove [GROUP NAME]\".
ex. yum groupremove 'AWIPS EDEX Server'
You may also need to run \"yum groups mark
remove [GROUP NAME]\"
ex. yum groups mark remove 'AWIPS CAVE'"
exit
else
dir=cave
echo "Removing /awips2/$dir"
rm -rf /awips2/$dir
rm -rf /home/awips/caveData
fi
}
function check_edex {
if [[ $(rpm -qa | grep awips2-edex) ]]; then
echo "found EDEX RPMs installed. The current EDEX needs to be removed before installing."
check_remove_edex
else
if [ -d /awips2/database/data/ ]; then
echo "cleaning up /awips2/database/data/ for new install..."
rm -rf /awips2/database/data/
fi
fi
for dir in /awips2/tmp /awips2/data_store ; do
if [ ! -d $dir ]; then
echo "creating $dir"
mkdir -p $dir
chown awips:fxalpha $dir
fi
done
if getent passwd awips &>/dev/null; then
echo -n ''
else
echo
echo "--- user awips does not exist"
echo "--- installation will continue but EDEX services may not run as intended"
fi
}
function check_remove_edex {
while true; do
read -p "Do you wish to remove EDEX? (Please type yes or no) `echo $'\n> '`" yn
case $yn in
[Yy]* ) remove_edex; break;;
[Nn]* ) echo "Exiting..."; exit;;
* ) echo "Please answer yes or no"
esac
done
}
function calcLogSpace {
a=("$@")
logDiskspace=0
for path in "${a[@]}" ; do
if [ -d $path ] || [ -f $path ]; then
out=`du -sk $path | cut -f1`
logDiskspace=$((logDiskspace + $out))
fi
done
logDiskspace=$(echo "scale=8;$logDiskspace*.000000953674316" | bc)
}
function calcConfigSpace {
a=("$@")
configDiskspace=0
for path in "${a[@]}" ; do
if [ -d $path ] || [ -f $path ]; then
out=`du -sk $path | cut -f1`
configDiskspace=$((configDiskspace + $out))
fi
done
configDiskspace=$(echo "scale=8;$configDiskspace*.000000953674316" | bc)
}
function backupLogs {
a=("$@")
log_backup_dir=${backup_dir}/awips2_backup_${ver}_${date}/logs
if [[ ! -d ${log_backup_dir} ]]; then
mkdir -p ${log_backup_dir}
fi
echo "Backing up to $log_backup_dir"
for path in "${a[@]}" ; do
if [ -d $path ] || [ -f $path ]; then
rsync -apR $path $log_backup_dir
fi
done
}
function backupConfigs {
a=("$@")
config_backup_dir=${backup_dir}/awips2_backup_${ver}_${date}/configs
if [[ ! -d $config_backup_dir ]]; then
mkdir -p $config_backup_dir
fi
echo "Backing up to $config_backup_dir"
for path in "${a[@]}" ; do
if [ -d $path ] || [ -f $path ]; then
rsync -apR $path $config_backup_dir
fi
done
}
function remove_edex {
logPaths=("/awips2/edex/logs" "/awips2/httpd_pypies/var/log/httpd/" "/awips2/database/data/pg_log/" "/awips2/qpid/log/" "/awips2/ldm/logs/")
configPaths=("/awips2/database/data/pg_hba*conf" "/awips2/edex/data/utility" "/awips2/edex/bin" "/awips2/ldm/etc" "/awips2/ldm/dev" "/awips2/edex/conf" "/awips2/edex/etc" "/usr/bin/edex" "/etc/init*d/edexServiceList" "/var/spool/cron/awips")
while true; do
read -p "`echo $'\n'`Please make a selction for what you would like backed up. If you choose not to back up files you will lose all your configurations:
1. logs
2. configs
3. both logs and configs
4. none
`echo $'\n> '`" backup_ans
#User chooses to back up files
if [[ $backup_ans =~ [1-3] ]]; then
echo "ANSWER: $backup_ans"
while true; do
read -p "`echo $'\n'`What location do you want your files backed up to? `echo $'\n> '`" backup_dir
if [ ! -d $backup_dir ]; then
echo "$backup_dir does not exist, enter a path that exists"
else
#Check to see if user has enough space to backup
backupspace=`df -k --output=avail "$backup_dir" | tail -n1`
backupspace=$(echo "scale=8;$backupspace*.000000953674316" | bc)
date=$(date +'%Y%m%d-%H:%M:%S')
echo "Checking to see which version of AWIPS is installed..."
rpm=`rpm -qa | grep awips2-[12]`
IFS='-' str=(${rpm})
IFS=. str2=(${str[2]})
vers="${str[1]}-${str2[0]}"
ver="${vers//[.]/-}"
if [ $backup_ans = 1 ]; then
calcLogSpace "${logPaths[@]}"
#Don't let user backup data if there isn't enough space
if (( $(echo "$logDiskspace > $backupspace" | bc ) )); then
printf "You do not have enough disk space to backup this data to $backup_dir. You only have %.2f GB free and need %.2f GB.\n" $backupspace $logDiskspace
#Backup logs
else
backupLogs "${logPaths[@]}"
printf "%.2f GB of logs were backed up to $backup_dir \n" "$logDiskspace"
fi
elif [ $backup_ans = 2 ]; then
calcConfigSpace "${configPaths[@]}"
#Don't let user backup data if there isn't enough space
if (( $(echo "$configDiskspace > $backupspace" | bc ) )); then
printf "You do not have enough disk space to backup this data to $backup_dir. You only have %.2f GB free and need %.2f GB.\n" $backupspace $configDiskspace
#Backup logs
else
backupConfigs "${configPaths[@]}"
printf "%.2f GB of configs were backed up to $backup_dir \n" "$configDiskspace"
fi
elif [ $backup_ans = 3 ]; then
calcLogSpace "${logPaths[@]}"
calcConfigSpace "${configPaths[@]}"
configLogDiskspace=$( echo "$logDiskspace+$configDiskspace" | bc)
#Don't let user backup data if there isn't enough space
if (( $(echo "$configLogDiskspace > $backupspace" | bc ) )); then
printf "You do not have enough disk space to backup this data to $backup_dir . You only have %.2f GB free and need %.2f GB.\n" $backupspace $configLogDiskspace
#Backup logs
else
backupLogs "${logPaths[@]}"
backupConfigs "${configPaths[@]}"
printf "%.2f GB of logs and configs were backed up to $backup_dir \n" "$configLogDiskspace"
fi
fi
break
fi
done
break
#User chooses not to back up any files
elif [ $backup_ans = 4 ]; then
while true; do
read -p "`echo $'\n'`Are you sure you don't want to back up any AWIPS configuration or log files? Type \"yes\" to confirm, \"no\" to select a different backup option, or \"quit\" to exit` echo $'\n> '`" answer
answer=$(echo $answer | tr '[:upper:]' '[:lower:]')
if [ $answer = yes ] || [ $answer = y ]; then
break 2 ;
elif [ $answer = quit ] || [ $answer = q ]; then
exit;
elif [ $answer = no ] || [ $answer = n ]; then
break
fi
done
#User did not make a valid selection
else
echo "Please make a valid selection (1, 2, 3, or 4)"
fi
done
FILE="/opt/bin/logarchival/edex_upgrade.pl"
if test -f "$FILE"; then
echo "Running /opt/bin/logarchival/edex_upgrade.pl and logging to /home/awips/crons/logarchival/general"
/opt/bin/logarchival/edex_upgrade.pl >> /home/awips/crons/logarchival/general
fi
if [[ $(rpm -qa | grep awips2-cave) ]]; then
echo "CAVE is also installed, now removing EDEX and CAVE"
pkill cave.sh
pkill -f 'cave/run.sh'
rm -rf /home/awips/caveData
else
echo "Now removing EDEX"
fi
yum --disableexcludes=main groupremove awips2-server awips2-database awips2-ingest awips2-cave -y
yum --disableexcludes=main remove awips2-* -y
if [[ $(rpm -qa | grep awips2 | grep -v cave) ]]; then
echo "
=================== FAILED ===========================
Something went wrong with the un-install of EDEX
and packages are still installed. Once the EDEX
groups have been successfully uninstalled, you can try
running this script again.
Try running a \"yum grouplist\" to see which AWIPS
group is still installed and then do a
\"yum groupremove [GROUP NAME]\".
ex. yum groupremove 'AWIPS EDEX Server'
You may also need to run \"yum groups mark
remove [GROUP NAME]\"
ex. yum groups mark remove 'AWIPS EDEX Server'"
exit
else
awips2_dirs=("cave" "data" "database" "data_store" "edex" "etc" "hdf5" "hdf5_locks" "httpd_pypies" "ignite" "java" "ldm" "netcdf" "postgres" "psql" "pypies" "python" "qpid" "tmp" "tools" "yajsw")
for dir in ${awips2_dirs[@]}; do
if [ $dir != dev ] ; then
echo "Removing /awips2/$dir"
rm -rf /awips2/$dir
fi
done
fi
}
function check_users {
if ! getent group "fxalpha" >/dev/null 2>&1; then
groupadd fxalpha
fi
if ! id "awips" >/dev/null 2>&1; then
useradd -G fxalpha awips
fi
}
function server_prep {
check_users
check_yumfile
stop_edex_services
check_limits
check_epel
check_netcdf
check_wget
check_rsync
check_edex
check_git
check_wgrib2
}
function disable_ndm_update {
crontab -u awips -l >cron_backup
crontab -u awips -r
sed -i -e 's/30 3 \* \* \* \/bin\/perl \/awips2\/dev\/updateNDM.pl/#30 3 \* \* \* \/bin\/perl \/awips2\/dev\/updateNDM.pl/' cron_backup
crontab -u awips cron_backup
rm cron_backup
}
function cave_prep {
check_users
check_yumfile
check_cave
check_netcdf
check_wget
check_epel
rm -rf /home/awips/caveData
}
if [ $# -eq 0 ]; then
key="-h"
else
key="$1"
fi
case $key in
--cave)
cave_prep
yum --disableexcludes=main groupinstall awips2-cave -y 2>&1 | tee -a /tmp/awips-install.log
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/awips2.repo
echo "CAVE has finished installing, the install log can be found in /tmp/awips-install.log"
;;
--server|--edex)
server_prep
yum --disableexcludes=main install awips2-*post* -y
yum --disableexcludes=main groupinstall awips2-server -y 2>&1 | tee -a /tmp/awips-install.log
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/awips2.repo
sed -i 's/@LDM_PORT@/388/' /awips2/ldm/etc/registry.xml
echo "EDEX server has finished installing, the install log can be found in /tmp/awips-install.log"
;;
--database)
server_prep
yum --disableexcludes=main groupinstall awips2-database -y 2>&1 | tee -a /tmp/awips-install.log
disable_ndm_update
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/awips2.repo
sed -i 's/@LDM_PORT@/388/' /awips2/ldm/etc/registry.xml
echo "EDEX database has finished installing, the install log can be found in /tmp/awips-install.log"
;;
--ingest)
server_prep
yum --disableexcludes=main groupinstall awips2-ingest -y 2>&1 | tee -a /tmp/awips-install.log
disable_ndm_update
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/awips2.repo
sed -i 's/@LDM_PORT@/388/' /awips2/ldm/etc/registry.xml
echo "EDEX ingest has finished installing, the install log can be found in /tmp/awips-install.log"
;;
-h|--help)
echo -e "$usage"
exit
;;
esac
PATH=$PATH:/awips2/edex/bin/
exit

@@ -1,23 +0,0 @@
FROM tiffanym13/awips-devel-20.3.2-1:el7
ENV VERSION 20.3.2
ENV RELEASE 1
MAINTAINER Tiffany Meyer <tiffanym@ucar.edu>
USER root
COPY el7-dev.repo /etc/yum.repos.d/awips2.repo
RUN groupadd fxalpha && useradd -G fxalpha awips
RUN mkdir -p /home/awips/dev/build/rpmbuild/RPMS/
ADD RPMS /home/awips/dev/build/rpmbuild/RPMS
RUN yum -y clean all
RUN yum install awips2-ant awips2-eclipse awips2-hdf5-devel awips2-maven awips2-python-cheroot awips2-python-contextlib2 awips2-python-cython awips2-python-jaraco.functools awips2-python-more-itertools awips2-python-pkgconfig awips2-python-portend awips2-python-pycairo awips2-python-pygobject awips2-python-setuptools_scm_git_archive awips2-python-setuptools_scm awips2-python-tempora awips2-python-zc.lockfile awips2-python-numpy awips2-python-dateutil awips2-python-pyparsing awips2-python-pbr awips2-python-mock awips2-python-numexpr awips2-python-thrift awips2-python-setuptools awips2-hdf5 awips2-python-six awips2-python-pytz awips2-netcdf-devel awips2-qpid-proton -y
RUN mkdir -p /awips2/jenkins/buildspace/workspace/AWIPS2-UPC_build/baseline && mkdir -p /awips2/jenkins/buildspace/workspace/tmp
RUN mkdir -p /awips2/jenkins/build/rpms/awips2_latest/{x86_64,noarch}/
RUN chown -R awips:fxalpha /awips2/jenkins/
ENTRYPOINT ["/bin/bash"]

@@ -1,23 +0,0 @@
FROM tiffanym13/awips-devel-20.3.2-2:el7
ENV VERSION 20.3.2
ENV RELEASE 2
MAINTAINER Tiffany Meyer <tiffanym@ucar.edu>
USER root
COPY el7-dev.repo /etc/yum.repos.d/awips2.repo
RUN groupadd fxalpha && useradd -G fxalpha awips
RUN mkdir -p /home/awips/dev/unidata_20.3.2/awips2/dist/el7-dev-20231212/
ADD el7-dev-20231212 /home/awips/dev/unidata_20.3.2/awips2/dist/el7-dev-20231212
RUN yum -y clean all
RUN yum groupinstall awips2-ade -y
RUN mkdir -p /awips2/jenkins/buildspace/workspace/AWIPS2-UPC_build/baseline && mkdir -p /awips2/jenkins/buildspace/workspace/tmp
RUN mkdir -p /awips2/jenkins/build/rpms/awips2_latest/{x86_64,noarch}/
RUN chown -R awips:fxalpha /awips2/jenkins/
ENTRYPOINT ["/bin/bash"]

@@ -1,22 +0,0 @@
FROM centos:7
ENV VERSION 20.3.2-1
ENV RELEASE 1
MAINTAINER Tiffany Meyer <tiffanym@ucar.edu>
USER root
RUN yum update yum -y
RUN yum groupinstall "Development tools" -y
RUN yum install epel-release -y
RUN yum clean all -y
ENV systemDeps="wget rsync git net-tools gzip libtool"
ENV rpmDeps="gcc-c++ gcc-gfortran rpm-build createrepo expat-devel lua-devel cyrus-sasl-devel cyrus-sasl-plain cyrus-sasl-md5 nss-devel nspr-devel libxml2-devel openldap-devel cmake"
ENV pythonDeps="tk-devel tcl-devel readline-devel bzip2-devel openssl-devel compat-libf2c-34"
ENV awipsDeps="netcdf netcdf-devel"
RUN yum install $systemDeps $rpmDeps $pythonDeps $awipsDeps -y
RUN yum update -y
ENTRYPOINT ["/bin/bash"]

@@ -1,22 +0,0 @@
FROM centos:7
ENV VERSION 20.3.2-2
ENV RELEASE 2
MAINTAINER Tiffany Meyer <tiffanym@ucar.edu>
USER root
RUN yum update yum -y
RUN yum groupinstall "Development tools" -y
RUN yum install epel-release -y
RUN yum clean all -y
ENV systemDeps="wget rsync git net-tools gzip libtool"
ENV rpmDeps="gcc-c++ gcc-gfortran rpm-build createrepo expat-devel lua-devel cyrus-sasl-devel cyrus-sasl-plain cyrus-sasl-md5 nss-devel nspr-devel libxml2-devel openldap-devel cmake"
ENV pythonDeps="tk-devel tcl-devel readline-devel bzip2-devel openssl-devel compat-libf2c-34"
ENV awipsDeps="netcdf netcdf-devel"
RUN yum install $systemDeps $rpmDeps $pythonDeps $awipsDeps -y
RUN yum update -y
ENTRYPOINT ["/bin/bash"]

@@ -1,26 +0,0 @@
#!/bin/bash
dir="$( cd "$(dirname "$0")" ; pwd -P )"
pushd $dir
. ../buildEnvironment.sh
if [ -z "$1" ]; then
echo "supply type (el7)"
exit
fi
os_version=$1
existing=$(docker images |grep awips-ade | grep $1 | awk '{ print $3 }')
if [ ! -z "$existing" ]; then
docker rmi $existing
fi
img="20.3.2-2"
pushd /awips2/repo/awips2-builds/build/awips-ade
docker build -t tiffanym13/awips-ade-${img} -f Dockerfile.awips-ade-${img}.${os_version} .
dockerID=$(docker images | grep awips-ade | awk '{print $3}' | head -1 )
#docker tag $dockerID unidata/awips-ade:${AWIPSII_VERSION}-${os_version}
docker tag $dockerID tiffanym13/awips-ade-${img}:${AWIPSII_VERSION}-${os_version}
docker rmi tiffanym13/awips-ade-${img}:latest
#docker rmi tiffanym13/awips-ade-${img}:${AWIPSII_VERSION}-${os_version}
docker push tiffanym13/awips-ade-${img}:${AWIPSII_VERSION}-${os_version}

@@ -1,23 +0,0 @@
#!/bin/bash
dir="$( cd "$(dirname "$0")" ; pwd -P )"
pushd $dir
. ../buildEnvironment.sh
img="awips-devel-20.3.2-2"
if [ -z "$1" ]; then
echo "supply type (el7)"
exit
fi
os_version=$1
existing=$(sudo docker images |grep ${img} | grep $1 | awk '{ print $3 }')
if [ ! -z "$existing" ]; then
docker rmi $existing
fi
pushd /awips2/repo/awips2-builds/build/awips-ade
docker build -t tiffanym13/${img} -f Dockerfile.${img}.${os_version} .
dockerID=$(docker images | grep ${img} | grep latest | awk '{print $3}' | head -1 )
docker tag $dockerID tiffanym13/${img}:${os_version}
docker rmi tiffanym13/${img}:latest
docker push tiffanym13/${img}:${os_version}

@@ -1,9 +0,0 @@
[awips2repo]
name=AWIPS II Repository
#baseurl=http://www.unidata.ucar.edu/repos/yum/18.2.1-ade
#baseurl=file:///home/awips/dev/build/rpmbuild/RPMS
baseurl=file:///home/awips/dev/unidata_20.3.2/awips2/dist/el7-dev-20231212
enabled=1
protect=0
gpgcheck=0
proxy=_none_

@@ -1,93 +0,0 @@
#!/bin/bash -v
set -xe
# Determine where we are
path_to_script=`readlink -f $0`
dir=$(dirname $path_to_script)
source ${dir}/buildEnvironment.sh
export _script_dir=${dir}
echo "Running build.sh from ${_script_dir}"
echo " JENKINS_WORKSPACE = ${JENKINS_WORKSPACE}"
cd ${dir}
logdir=${dir}/logs
if [ ! -d $logdir ]; then
mkdir -p $logdir
fi
START_TIME=`date "+%s"`
timestamp=`date +%Y_%m_%d_%H:%M:%S`
# Cleanup before building CAVE rpms
if [[ ${2} = "buildCAVE" ]]; then
rm -rf ${JENKINS_HOME}/buildspace/workspace/AWIPS2-UPC_build/baseline/
rm -rf ${JENKINS_HOME}/buildspace/workspace/tmp/${USER}/
fi
echo "BUILD_DIR = $BUILD_DIR"
echo "BUILD_WORKSPACE = $BUILD_WORKSPACE"
echo "BASELINE = $BASELINE"
echo "WORKSPACE = $WORKSPACE"
echo "AWIPSII_VERSION = $AWIPSII_VERSION"
echo "AWIPSII_RELEASE = $AWIPSII_RELEASE"
echo "AWIPSII_TOP_DIR = $AWIPSII_TOP_DIR"
echo "UFRAME_ECLIPSE = $UFRAME_ECLIPSE"
echo "AWIPSII_STATIC_FILES = $AWIPSII_STATIC_FILES"
echo "AWIPSII_BUILD_ROOT = $AWIPSII_BUILD_ROOT"
# Prepare the rpm build directory structure
mkdir -p ${AWIPSII_TOP_DIR}/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
pushd . > /dev/null 2>&1
cd ${BASELINE}
mkdir -p ${WORKSPACE}
if [[ ${2} = "buildCAVE" ]]; then
RSYNC_DIRS=`cat $dir/rsync.dirs |grep -v nativelib | grep -v awips2-rpm`
else
RSYNC_DIRS=`cat $dir/rsync.dirs`
fi
rsync -ruql --delete --exclude-from=${dir}/excludes ${RSYNC_DIRS} ${WORKSPACE}
popd > /dev/null 2>&1
# execute the build for the appropriate architecture
_rpms_build_directory=${WORKSPACE}/rpms/build
_architecture=`uname -i`
_build_sh_directory=${_rpms_build_directory}/${_architecture}
pushd . > /dev/null 2>&1
cd ${_build_sh_directory}
cp -v ${dir}/buildEnvironment.sh .
# check rpms/build/x86_64/build.sh for build groups
build_log=${logdir}/build${1}-${timestamp}.log
if [ "${1}" = "-b" ] && [ -n "${2}" ]; then
build_log=${logdir}/build-${2}-${timestamp}.log
fi
/bin/bash ${_build_sh_directory}/build.sh ${1} ${2} > ${build_log}
popd > /dev/null 2>&1
export rpm_end_dir="${AWIPSII_VERSION}-${AWIPSII_RELEASE}"
mkdir -p ${AWIPSII_TOP_DIR}/RPMS/x86_64/
if [ "$(ls -A ${AWIPSII_TOP_DIR}/RPMS/x86_64/)" ]; then
mv ${AWIPSII_TOP_DIR}/RPMS/x86_64/* ${JENKINS_HOME}/build/rpms/awips2_latest/x86_64/
fi
if [ "$(ls -A ${AWIPSII_TOP_DIR}/RPMS/noarch/)" ]; then
mv ${AWIPSII_TOP_DIR}/RPMS/noarch/* ${JENKINS_HOME}/build/rpms/awips2_latest/noarch/
fi
END_TIME=`date "+%s"`
TIME_SPENT=$((END_TIME - START_TIME))
TTI_HOURS=$((TIME_SPENT/3600))
TTI_SECS=$((TIME_SPENT %3600)) #Remaining seconds
TTI_MINS=$((TTI_SECS/60))
TTI_SECS=$((TTI_SECS%60))
echo "Total-time-Spent-In-The-Build-For $0 = $TTI_HOURS hours, $TTI_MINS minutes, $TTI_SECS seconds"
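# Worked example of the time arithmetic above: TIME_SPENT=3725 seconds gives
# 1 hours, 2 minutes, 5 seconds (3725/3600 = 1; 3725%3600 = 125; 125/60 = 2; 125%60 = 5).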
exit

@@ -1,26 +0,0 @@
#!/bin/bash
# Version
export AWIPSII_VERSION="20.3.2"
export AWIPSII_RELEASE="2"
export AWIPSII_BUILD_DATE=`date`
export AWIPSII_BUILD_SYS=`cat /etc/system-release`
# Author
export AWIPSII_BUILD_VENDOR="UCAR"
export AWIPSII_BUILD_SITE="Unidata"
export AWIPSII_AUTHOR="Tiffany Meyer <tiffanym@ucar.edu>"
# Directories
export UFRAME_ECLIPSE=/awips2/eclipse
export JAVA_HOME=/awips2/java
export ANT_HOME=/awips2/ant
export REPO=/awips2/repo
export JENKINS_HOME=/awips2/jenkins
export JENKINS_WORKSPACE=${REPO}/awips2-builds
export BUILD_DIR=${JENKINS_HOME}/buildspace/
export AWIPSII_STATIC_FILES=${REPO}/awips2-static
# More env vars
export BUILD_WORKSPACE=${BUILD_DIR}/workspace
export BASELINE=${JENKINS_WORKSPACE}
export AWIPSII_TOP_DIR=${BUILD_WORKSPACE}/tmp/rpms_built_dir
export WORKSPACE=${BUILD_WORKSPACE}/AWIPS2-UPC_build/baseline
export AWIPSII_BUILD_ROOT=${BUILD_WORKSPACE}/tmp/${USER}/awips-component
export REPO_DEST=${BUILD_WORKSPACE}/tmp/${USER}/repo

@@ -1,88 +0,0 @@
#!/bin/sh -xe
#
# Build Unidata AWIPS RPMs from source
# author: Michael James
# maintainer: <tiffanym@ucar.edu>
#
#
# Require el6 or el7 be specified
# RPM name is optional (see below)
#
os_version=$1
rpmname=$2
if [ -z "$os_version" ]; then
echo "supply os_version (el7)"
exit
fi
#
# Set up AWIPS environment
#
. /awips2/repo/awips2-builds/build/buildEnvironment.sh
buildsh=$REPO/awips2-builds/build/build.sh
pushd $REPO
#
# If local source directories exist, mount them to the
# container; otherwise clone the repos from GitHub
#
#if [ ! -d awips2-core-foss ]; then git clone https://github.com/Unidata/awips2-core-foss.git --branch unidata_${AWIPSII_VERSION} --single-branch ;fi
#if [ ! -d awips2-core ]; then git clone https://github.com/Unidata/awips2-core.git --branch unidata_${AWIPSII_VERSION} --single-branch ;fi
#if [ ! -d awips2-foss ]; then git clone https://github.com/Unidata/awips2-foss.git --branch unidata_${AWIPSII_VERSION} --single-branch ;fi
#if [ ! -d awips2-goesr ]; then git clone https://github.com/Unidata/awips2-goesr.git --branch unidata_${AWIPSII_VERSION} --single-branch ;fi
#if [ ! -d awips2-ncep ]; then git clone https://github.com/Unidata/awips2-ncep.git --branch unidata_${AWIPSII_VERSION} --single-branch ;fi
#if [ ! -d awips2-nws ]; then git clone https://github.com/Unidata/awips2-nws.git --branch unidata_${AWIPSII_VERSION} --single-branch ;fi
#if [ ! -d awips2-unidata ]; then git clone https://github.com/Unidata/awips2-unidata.git --branch unidata_${AWIPSII_VERSION} --single-branch ;fi
#
# AWIPS Static files are too large to host on github
#
if [ ! -d awips2-static ] && [ "$rpmname" != "buildCAVE" ]; then
mkdir awips2-static
cd awips2-static
wget https://www.unidata.ucar.edu/downloads/awips2/static.tar
tar -xvf static.tar
rm -f static.tar
fi
#
# If RPM name is given
#
if [ ! -z "$rpmname" ]; then
frst="$(echo $rpmname | head -c 1)"
if [[ "$frst" = "-" ]]; then
# If first character is a dash, then a build group alias was given
su - awips -c "/bin/bash $buildsh $rpmname"
else
su - awips -c "/bin/bash $buildsh -b $rpmname"
fi
else
# If RPM name is not given build all groups in this order
# yum localinstall /awips2/repo/awips2-builds/dist/18.2.1-ade/x86_64/awips2-hdf5* -y
# yum localinstall /awips2/repo/awips2-builds/dist/18.2.1-ade/x86_64/awips2-netcdf* -y
su - awips -c "/bin/bash $buildsh -ade"
su - awips -c "/bin/bash $buildsh -python"
su - awips -c "/bin/bash $buildsh -qpid"
su - awips -c "/bin/bash $buildsh -server"
su - awips -c "/bin/bash $buildsh -database"
su - awips -c "/bin/bash $buildsh -edex"
su - awips -c "/bin/bash $buildsh -cave"
#su - awips -c "/bin/bash $buildsh -pypies"
#su - awips -c "/bin/bash $buildsh -localization"
fi
# Move RPMs to awips2-builds/dist
if [ "$(ls -A ${JENKINS_HOME}/build/rpms/awips2_latest/x86_64/)" ]; then
mkdir -p /awips2/repo/awips2-builds/dist/${os_version}-dev/x86_64/
mv ${JENKINS_HOME}/build/rpms/awips2_latest/x86_64/* /awips2/repo/awips2-builds/dist/${os_version}-dev/x86_64/
fi
if [ "$(ls -A ${JENKINS_HOME}/build/rpms/awips2_latest/noarch/)" ]; then
mkdir -p /awips2/repo/awips2-builds/dist/${os_version}-dev/noarch/
mv ${JENKINS_HOME}/build/rpms/awips2_latest/noarch/* /awips2/repo/awips2-builds/dist/${os_version}-dev/noarch/
fi

@@ -1,75 +0,0 @@
#!/bin/bash -v
set -xe
# Determine where we are
path_to_script=`readlink -f $0`
dir=$(dirname $path_to_script)
source ${dir}/buildEnvironment.sh
export _script_dir=${dir}
echo "Running build.sh from ${_script_dir}"
echo " JENKINS_WORKSPACE = ${JENKINS_WORKSPACE}"
cd ${dir}
START_TIME=`date "+%s"`
timestamp=`date +%Y_%m_%d_%H:%M:%S`
# Cleanup before building CAVE rpms
if [[ ${2} = "buildCAVE" ]]; then
rm -rf ${JENKINS_HOME}/buildspace/workspace/AWIPS2-UPC_build/baseline/
rm -rf ${JENKINS_HOME}/buildspace/workspace/tmp/${USER}/
fi
echo "BUILD_DIR = $BUILD_DIR"
echo "BUILD_WORKSPACE = $BUILD_WORKSPACE"
echo "BASELINE = $BASELINE"
echo "WORKSPACE = $WORKSPACE"
echo "AWIPSII_VERSION = $AWIPSII_VERSION"
echo "AWIPSII_RELEASE = $AWIPSII_RELEASE"
echo "AWIPSII_TOP_DIR = $AWIPSII_TOP_DIR"
echo "UFRAME_ECLIPSE = $UFRAME_ECLIPSE"
echo "AWIPSII_STATIC_FILES = $AWIPSII_STATIC_FILES"
echo "AWIPSII_BUILD_ROOT = $AWIPSII_BUILD_ROOT"
# Prepare the rpm build directory structure
mkdir -p ${AWIPSII_TOP_DIR}/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
pushd . > /dev/null 2>&1
cd ${BASELINE}
mkdir -p ${WORKSPACE}
RSYNC_DIRS=`cat $dir/rsync.dirs`
rsync -ruql --delete --exclude-from=${dir}/excludes ${RSYNC_DIRS} ${WORKSPACE}
popd > /dev/null 2>&1
# execute the build for the appropriate architecture
_rpms_build_directory=${WORKSPACE}/rpms/build
_architecture=`uname -i`
_build_sh_directory=${_rpms_build_directory}/${_architecture}
pushd . > /dev/null 2>&1
cd ${_build_sh_directory}
cp -v ${dir}/buildEnvironment.sh .
/bin/bash ${_build_sh_directory}/build.sh ${1} ${2}
popd > /dev/null 2>&1
export rpm_end_dir="${AWIPSII_VERSION}-${AWIPSII_RELEASE}"
if [ "$(ls -A ${AWIPSII_TOP_DIR}/RPMS/x86_64/)" ]; then
mv ${AWIPSII_TOP_DIR}/RPMS/x86_64/* ${JENKINS_HOME}/build/rpms/awips2_latest/x86_64/
fi
if [ "$(ls -A ${AWIPSII_TOP_DIR}/RPMS/noarch/)" ]; then
mv ${AWIPSII_TOP_DIR}/RPMS/noarch/* ${JENKINS_HOME}/build/rpms/awips2_latest/noarch/
fi
END_TIME=`date "+%s"`
TIME_SPENT=$((END_TIME - START_TIME))
TTI_HOURS=$((TIME_SPENT/3600))
TTI_SECS=$((TIME_SPENT %3600)) #Remaining seconds
TTI_MINS=$((TTI_SECS/60))
TTI_SECS=$((TTI_SECS%60))
echo "Total-time-Spent-In-The-Build-For $0 = $TTI_HOURS hours, $TTI_MINS minutes, $TTI_SECS seconds"
exit

@@ -34,7 +34,9 @@
<then>
<copy todir="${edex.root.directory}"
overwrite="${esb.overwrite}" failonerror="true">
<fileset dir="${wa.base.directory}/esb"/>
<fileset dir="${wa.base.directory}/esb">
<include name="**/*" />
</fileset>
</copy>
</then>
</if>
@@ -47,6 +49,7 @@
<echo message="Cleaning target directory: ${edex.root.directory}/lib/" />
<delete includeemptydirs="true">
<fileset dir="${edex.root.directory}/lib/">
<include name="**" />
<exclude name="native/**" />
</fileset>
</delete>
@@ -63,8 +66,9 @@
<echo message="Cleaning target directory: ${edex.root.directory}/conf" />
<delete>
<fileset dir="${edex.root.directory}/conf">
<include name="**" />
<exclude name="**/site/**"/>
<exclude name="**/auth/**"/>
<exclude name="**/*.jks"/>
</fileset>
</delete>
</target>
@@ -103,11 +107,6 @@
</exec>
</target>
<path id="ant.contrib.path">
<fileset dir="/awips2/ant/lib/">
<include name="ant-contrib-*.jar" />
</fileset>
</path>
<taskdef resource="net/sf/antcontrib/antlib.xml"
classpathref="ant.contrib.path" />
classpath="${basedir}/lib/ant/ant-contrib-1.0b3.jar" />
</project>

@@ -1,9 +1,7 @@
### EDEX localization related variables ###
export AW_SITE_IDENTIFIER=OAX
## Cluster id can be set to the cluster's id (example:tbw for dv1-tbwo)
## Cluster id can be set to the cluster's id (example:tbw for dx1-tbwo)
## it will be autogenerated if not set
export EXT_ADDR=external.fqdn
export CLUSTER_ID=
# database names
@@ -30,36 +28,21 @@ export BROKER_HOST=localhost
export BROKER_PORT=5672
export BROKER_HTTP=8180
# setup ignite
#export DATASTORE_PROVIDER=${DATASTORE_PROVIDER:-ignite}
export DATASTORE_PROVIDER=pypies
# Server that redirects PYPIES http requests to ignite
export PYPIES_COMPATIBILITY_HOST=localhost
export PYPIES_COMPATIBILITY_PORT=9586
export PYPIES_COMPATIBILITY_SERVER=http://${PYPIES_COMPATIBILITY_HOST}:${PYPIES_COMPATIBILITY_PORT}
# The following two values are comma-delimited lists of the machines that are
# hosting each of the ignite cluster's servers (example: cache1,cache2,cache3
# and cache4,cache5,cache6). Leaving the second value blank indicates that only
# one cluster is being used. These values should be the same on all machines.
export IGNITE_CLUSTER_1_SERVERS=localhost
export IGNITE_CLUSTER_2_SERVERS=
# The address that other ignite nodes should use to communicate with this ignite client
export LOCAL_ADDRESS=127.0.0.1
export IGNITE_SSL_CERT_DB=/awips2/edex/conf/ignite/auth
# setup hdf5 connection
export PYPIES_HOST=${EXT_ADDR}
# setup hdf5 connection if pypies is enabled
export PYPIES_HOST=localhost
export PYPIES_PORT=9582
export PYPIES_SERVER=http://${PYPIES_HOST}:${PYPIES_PORT}
# moved here from environment.xml
# these values are returned to clients that contact the localization service
export HTTP_HOST=${EXT_ADDR}
export HTTP_HOST=localhost
export HTTP_PORT=9581
export HTTP_SERVER_PATH=services
export HTTP_SERVER=http://${HTTP_HOST}:${HTTP_PORT}/${HTTP_SERVER_PATH}
export JMS_SERVER=${BROKER_HOST}:${BROKER_PORT}
export HTTP_SERVER_PATH=/services
export HTTP_SERVER=http://${HTTP_HOST}:${HTTP_PORT}${HTTP_SERVER_PATH}
export JMS_SERVER=tcp://${BROKER_HOST}:${BROKER_PORT}
export JMS_VIRTUALHOST=edex
export JMS_CONNECTIONS_URL=https://${BROKER_HOST}:${BROKER_HTTP}/api/latest/connection/${JMS_VIRTUALHOST}
export JMS_QUEUE_URL=https://${BROKER_HOST}:${BROKER_HTTP}/api/latest/queue/${JMS_VIRTUALHOST}/${JMS_VIRTUALHOST}
export JMS_SSL_ENABLED=true
export QPID_SSL_CERT_DB=/awips2/edex/conf/jms/auth
export QPID_SSL_CERT_NAME=guest
@@ -75,7 +58,6 @@ export TEMP_DIR=/awips2/edex/data/tmp
# set hydroapps directory path
export apps_dir=${SHARE_DIR}/hydroapps
# site identifier for hydroapps
export SITE_IDENTIFIER=${AW_SITE_IDENTIFIER}

@@ -18,14 +18,6 @@
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------- -------- --------- --------------------------------------------
# Jul 03, 2019 7875 randerso Changed to get EDEX version from
# awips2-version.rpm
##
# edex startup script
if [ -z "${SKIP_RPM_CHECK}" ]; then
@@ -76,10 +68,12 @@ if [ -z "$PSQL_INSTALL" ]; then PSQL_INSTALL="$awips_home/psql"; fi
if [ -z "$YAJSW_HOME" ]; then YAJSW_HOME="$awips_home/yajsw"; fi
# Find the edex version
version=`rpm -q awips2-version --qf %{VERSION}`
rpm -q awips2-edex > /dev/null 2>&1
RC=$?
if [ ${RC} -ne 0 ]; then
version="Undefined"
else
version=`rpm -q awips2-edex --qf %{VERSION}`
fi
export EDEX_VERSION=$version
@@ -108,7 +102,7 @@ export PATH=$PATH:$awips_home/GFESuite/bin:$awips_home/GFESuite/ServiceBackup/sc
export PATH=$PATH:$PROJECT/bin
export JAVA_HOME="${JAVA_INSTALL}"
export LD_LIBRARY_PATH=$EDEX_HOME/lib/native/linux32/awips1:${JAVA_INSTALL}/lib:${PYTHON_INSTALL}/lib:${PYTHON_INSTALL}/lib/python3.6/site-packages/jep:${PSQL_INSTALL}/lib:$PROJECT/sharedLib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$EDEX_HOME/lib/native/linux32/awips1:${JAVA_INSTALL}/lib:${PYTHON_INSTALL}/lib:${PYTHON_INSTALL}/lib/python2.7/site-packages/jep:${PSQL_INSTALL}/lib:$PROJECT/sharedLib:$LD_LIBRARY_PATH
export FXA_DATA=$EDEX_HOME/data/fxa
export ALLOW_ARCHIVE_DATA="false"
@@ -171,8 +165,6 @@ export EDEX_WRAPPER_LOGFILE_FORMAT
if [ -e $EDEX_HOME/etc/${RUN_MODE}.sh ]; then
. $EDEX_HOME/etc/${RUN_MODE}.sh
else
export DATASTORE_PROVIDER=pypies
fi
if [ $PROFILE_FLAG == "on" ]; then
@@ -198,6 +190,6 @@ if [ ${RC} -ne 0 ]; then
exit 1
fi
YAJSW_JVM_ARGS="-Xmx32m -Djava.io.tmpdir=${AWIPS2_TEMP}"
YAJSW_JVM_ARGS="-Xmx32m -XX:ReservedCodeCacheSize=4m -Djava.io.tmpdir=${AWIPS2_TEMP}"
java ${YAJSW_JVM_ARGS} -jar ${YAJSW_HOME}/wrapper.jar -c ${EDEX_HOME}/conf/${CONF_FILE}

@@ -1,9 +1,9 @@
**************************************************
* Unidata AWIPS EDEX ESB Platform *
* Version: 20.3.2-2 *
* UCAR NSF Unidata Program Center *
* AWIPS II EDEX ESB Platform *
* Version: SOTE 11.X *
* Raytheon Company *
*------------------------------------------------*
* NON-OPERATIONAL *
* DEVELOPMENT *
* *
* *
**************************************************

@@ -41,16 +41,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">5</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">10</property>
<property name="hibernate.c3p0.max_idle_time">10</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -61,10 +62,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>

@@ -41,16 +41,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">10</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.max_idle_time">600</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -61,10 +62,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>

@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/${dc.db.name}
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">5</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">10</property>
<property name="hibernate.c3p0.max_idle_time">10</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>

@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/${dc.db.name}
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">10</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.max_idle_time">600</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849)-->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>

@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/${fxa.db.name}
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">5</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">10</property>
<property name="hibernate.c3p0.max_idle_time">10</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>

@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/${fxa.db.name}
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">25</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.max_idle_time">600</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/${hm.db.name}
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">5</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">10</property>
<property name="hibernate.c3p0.max_idle_time">10</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/${hm.db.name}
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">10</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.max_idle_time">600</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/${ih.db.name}
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">5</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">10</property>
<property name="hibernate.c3p0.max_idle_time">10</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/${ih.db.name}
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">10</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.max_idle_time">600</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.spatial.dialect.postgis.PostgisPG95Dialect
org.hibernate.spatial.dialect.postgis.PostgisDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/maps
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">5</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">10</property>
<property name="hibernate.c3p0.max_idle_time">10</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.spatial.dialect.postgis.PostgisPG95Dialect
org.hibernate.spatial.dialect.postgis.PostgisDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/maps
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.max_idle_time">600</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">20</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -29,7 +29,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.spatial.dialect.postgis.PostgisPG95Dialect
org.hibernate.spatial.dialect.postgis.PostgisDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/metadata
@@ -55,33 +55,26 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">5</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">10</property>
<property name="hibernate.c3p0.max_idle_time">10</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">${db.metadata.pool.timeout}</property>
<property name="hibernate.c3p0.max_statements">10</property>
<property name="hibernate.generate_statistics">false</property>
<property name="hibernate.transaction.coordinator_class">jdbc</property>
<property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.jdbc.use_streams_for_binary">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid that is necessary to start edex in
registry mode as of Hibernate 5.2. JPA spec does not allow flushing
updates outside of a transaction boundary. Figure out why we need
this -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>
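The connection URLs and pool limits above use `${...}` placeholders (`${db.addr}`, `${db.port}`, `${db.metadata.pool.max}`, `${db.metadata.pool.timeout}`) that are substituted from EDEX setup properties at startup. A minimal sketch of that style of expansion, with illustrative property values (not the real deployment values):

```python
import re

# Illustrative defaults; the real values come from EDEX setup properties,
# not from these config files.
PROPS = {
    "db.addr": "localhost",
    "db.port": "5432",
    "db.metadata.pool.timeout": "300",
}

def expand(text, props):
    """Replace every ${key} placeholder with its value from a property map."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: props[m.group(1)], text)

url = expand("jdbc:postgresql://${db.addr}:${db.port}/metadata", PROPS)
```

An unknown key raises `KeyError` here, which is a reasonable fail-fast behavior for a missing setup property.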


@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.spatial.dialect.postgis.PostgisPG95Dialect
org.hibernate.spatial.dialect.postgis.PostgisDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/metadata
@@ -59,32 +59,28 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">1</property>
<property name="hibernate.c3p0.min_size">1</property>
<property name="hibernate.c3p0.max_size">${db.metadata.pool.max}</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.max_idle_time">600</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">${db.metadata.pool.timeout}</property>
<property name="hibernate.c3p0.max_statements">10</property>
<property name="hibernate.generate_statistics">false</property>
<property name="hibernate.transaction.coordinator_class">jdbc</property>
<property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.jdbc.use_streams_for_binary">false</property>
<property name="hibernate.cache.use_query_cache">false</property>
<property name="hibernate.query.plan_cache_max_strong_references">16</property>
<property name="hibernate.query.plan_cache_max_soft_references">32</property>
<!-- TODO: This is a band-aid that is necessary to start edex in
registry mode as of Hibernate 5.2. JPA spec does not allow flushing
updates outside of a transaction boundary. Figure out why we need
this -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/ncep
@@ -59,16 +59,17 @@
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">5</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">10</property>
<property name="hibernate.c3p0.max_idle_time">10</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +80,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -30,7 +30,7 @@
org.postgresql.Driver
</property>
<property name="dialect">
org.hibernate.dialect.PostgreSQL95Dialect
org.hibernate.dialect.PostgreSQLDialect
</property>
<property name="connection.url">
jdbc:postgresql://${db.addr}:${db.port}/ncep
@@ -58,17 +58,15 @@
debugging, defaults to false -->
<property name="hibernate.use_sql_comments">false</property>
<!-- Use c3p0 connection pooling -->
<property name="hibernate.connection.provider_class">com.raytheon.uf.edex.database.DatabaseC3P0ConnectionProvider</property>
<!-- c3p0 Connection Pool Properties -->
<!-- Additional properties may be added to c3p0.properties -->
<property name="hibernate.c3p0.initial_pool_size">0</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">10</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.acquireRetryAttempts">0</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">60</property>
<property name="hibernate.c3p0.max_idle_time">600</property>
<property name="hibernate.c3p0.preferred_test_query">select 1</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">10</property>
@@ -79,10 +77,5 @@
<property name="hibernate.query.plan_cache_max_strong_references">8</property>
<property name="hibernate.query.plan_cache_max_soft_references">16</property>
<!-- TODO: This is a band-aid to prevent edex errors with Hibernate 5.2.
JPA spec does not allow flushing updates outside of a transaction
boundary. Figure out why we need this (RODO #7849) -->
<property name="hibernate.allow_update_outside_transaction">true</property>
</session-factory>
</hibernate-configuration>


@@ -1,10 +0,0 @@
root.crt is the root CA certificate used to sign client and server certificates
used with Ignite.
guest.crt and guest.key are the client certificate and private key.
passwords.properties contains the passwords for the keystore and truststore,
this file is read by EDEX on startup.
The baseline versions of these files are for testing purposes only and NOT to
be used in an operational environment!


@@ -1,28 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIE2zCCAsMCAhI0MA0GCSqGSIb3DQEBCwUAMDMxDjAMBgNVBAoMBUFXSVBTMRAw
DgYDVQQLDAdUZXN0aW5nMQ8wDQYDVQQDDAZjYXJvb3QwHhcNMjAxMTEzMTU0ODUz
WhcNMzAxMTExMTU0ODUzWjAzMQ4wDAYDVQQKDAVBV0lQUzEQMA4GA1UECwwHVGVz
dGluZzEPMA0GA1UEAwwGY2xpZW50MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC
CgKCAgEAuoEEn9bpvCC5Tf6QtTDSiSvdtQyQNv8LGpg8cdqpIITEclaC45KB2vtZ
MaYECIs+uS57jzinaGB/5wW047Uf0KXXZApVArvs5WwXV8zNGCnF9KXZHnacz5XO
UU4uzA40i5SI7YS74amH1dcXpAnJd+EKTH+zZ9sXvQOBP0ZqgRje3xaOHNjzDD0S
V52mj4gLCmQSS16wnfR/uT1TjxN0IYMoJ99yDzs0ZZWqYtRK+3N++ek4PxszbZZZ
PbQ0FS/UV2LzkBp3tFStc9cQDwTpYwa8NR4xQLOv4r8Xqz2rWFfKV5OFiiM6aJdq
D4wgD9tM/jOzPfGRruMsVyjDspdim8DKxavw/OyvxBcfzER0iHqv31iAJ624f+23
8iQ4FoUpU/VTqYfIIabjWrivmd62et18iCaoRBXYsA5Q0pFe18RxfNAquRlSHGP2
1Lrx7kWMAlokRn7+2PpCA2Fx2TTlg4FeltHkqq8/HdEIXBAbsJvRpknP0bj9TQcU
Zv2pvuE5V6pH/F7UPiDVDQ+HJDG4aIcpwy6glz0if/MyoSjSnkzlGWT3aWJLj3cE
rsQGEQFYX7ACY9G/fv+VLR13rn0EpiEcqRsd57imW4HVS5cs3z80jXc6LZfNLxdQ
ngg/JBw9zOx/GJLIsi+Ep+PH87IpTpqadDBnrtDTQLYGs8eRL3cCAwEAATANBgkq
hkiG9w0BAQsFAAOCAgEAc9qdoHGgaVcYvc0Q7YfVryyHxDy9D4BEwk8Vq1jnwxF5
lCiccnZO0vAYwt83V+aI4K9z8IWhdkCaZkMIrZGwkwiUCOlhHTg0xqiyS1QiGwK7
bc6f5t7B/zn8QN0yVUfNsBgnTUSbrwsGd7QndzwIJqTUBrZ1Ir2J+O0lgTT5/x9w
+JZEm4yudJeXBlVOGkg+DQNaSpCM2IGtk+Y1seuBamv2XMBpip02DfKm2MNr66we
9zm/IWFUOgoFn2SgFvD8kqnrIT6DppA4+u1tsCo+rM6emRPCTe4SBq0653x4ZbwX
JMoRWhC+D/GdyxVb7W52DyXyaziZNsaStqd/XNqpQG9FR7hZWwdZ/+fVG+2OlkWj
ZqtvmZA5OoRDGesbNPP7VRv17uEEMbbiW0k4bjsYTjmVQDkMcdgLMooB6n/GMaXi
M2obV6Gz43Ps383VgpMmucLNI+OV12e/mGq0Y4Gg9BD/U0JvyJ1jcxbyJnka+ON8
2LELTnNukN7IHGA75FFvoW5FuPN9wwuaBWyh+MW9qXF7nMNOOWL6hxgzcFoQQwMZ
bcXdXkMWnpkrxocoTPCykxi1KVZhmh+iaV0dwW0KIsblhKlj7JLn1EftHcNMsIbt
ROUId4u/qdnKmCWYjIsSuqjRiMTBThn6LZQKgV60MVN2li8XoJ7ROsuo2MVB78Y=
-----END CERTIFICATE-----


@@ -1,52 +0,0 @@
-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQC6gQSf1um8ILlN
/pC1MNKJK921DJA2/wsamDxx2qkghMRyVoLjkoHa+1kxpgQIiz65LnuPOKdoYH/n
BbTjtR/QpddkClUCu+zlbBdXzM0YKcX0pdkedpzPlc5RTi7MDjSLlIjthLvhqYfV
1xekCcl34QpMf7Nn2xe9A4E/RmqBGN7fFo4c2PMMPRJXnaaPiAsKZBJLXrCd9H+5
PVOPE3Qhgygn33IPOzRllapi1Er7c3756Tg/GzNtllk9tDQVL9RXYvOQGne0VK1z
1xAPBOljBrw1HjFAs6/ivxerPatYV8pXk4WKIzpol2oPjCAP20z+M7M98ZGu4yxX
KMOyl2KbwMrFq/D87K/EFx/MRHSIeq/fWIAnrbh/7bfyJDgWhSlT9VOph8ghpuNa
uK+Z3rZ63XyIJqhEFdiwDlDSkV7XxHF80Cq5GVIcY/bUuvHuRYwCWiRGfv7Y+kID
YXHZNOWDgV6W0eSqrz8d0QhcEBuwm9GmSc/RuP1NBxRm/am+4TlXqkf8XtQ+INUN
D4ckMbhohynDLqCXPSJ/8zKhKNKeTOUZZPdpYkuPdwSuxAYRAVhfsAJj0b9+/5Ut
HXeufQSmIRypGx3nuKZbgdVLlyzfPzSNdzotl80vF1CeCD8kHD3M7H8YksiyL4Sn
48fzsilOmpp0MGeu0NNAtgazx5EvdwIDAQABAoICAHk93i+6mn/+FfiqAJCJiJQ7
vAkyfZ4C9sj3JnQtXb0SElLyAmzGlTwyIa2w6vZS7xebLB/TCKFF+l/Iyestl90f
soIKZXE9kacjOZmOPdXzcgi0uAyhtxcLn/AjDzEAGxCSIuGlZC4y82cESQ4OfrY7
yWIpsgtV1ny9howHzrzV2izUkNYYAwh1uzLR/bFZEzRSEcKFb/N/OnjFcUiVsO0I
QlaJX7CfIFTZksZkk8obLvRvtGzx1eDr2F/Qgfsz+KpGXWfUjPTiB1BDAuGAo+gI
PNmbIxGYvkJ9T3m2wWjQyW1dLXa7qADOTdiFk2I7gjXOjjs6iyZR8EVI7s9usl7I
I8/Hkg3jcMV53v4/0j51qaDGx+54J//rN/CCnZ17uP6cWX8ftLC76rSTK+KzqRUA
0GFnNbpaHMCMwADpYUJzNR8SB7PNJYJ7cauaJQInfYU5sv0tsiY2R70SxdBuRf3t
uW9hzDsoI5agOZ2271plW95wczHBsadn9H5NfMaQmbHomPr5dQvBvmbEUaQI2wEe
ugWqFV+A1abbv9EuWguox/yDZu93jYvxrelAuxjnaAPrbUgIAw+ER3kSX3a6NTco
k+eaUuipmbQvwfIwrAlKDnRarEpn3jx82pUWPx1YWgVCKGaDJH0wrEiwZQqxaXaF
fPVLlaLtru0rmEatXfKBAoIBAQD22qEU6aqovJGXG9JrQOzG/cErk1UTmXHUZNDY
ZdO/AHLLw/hRYHlprNuGRTl8MT6wC8hmCcdQYTl2vQslSdYef6BrVmltQPJ9QxZI
wgjQ9z/f4HXDDxd/CXmIHgcZOuIy1YU/boss3Xe/I2VFzHPxMe64EpNvo6TJcv1y
4Wub23Ww0+VjQ4taYPx5c1JlLJh7gojXzi/CyI8XgaW9fT+gJLfOhkF4IufXFyjc
yqRVsZ5FIG2qmUQ6kLJA4h4QvCbxZF2If94yON5o17k5+2Ss1DXulxOHLDQP9G7V
7g8pXr0HpR6dUzhMeTd2LZnD+1AL6LdMqH2olTVUF7iVm2BHAoIBAQDBafp1tGQK
5fLEP7odK6OJuserg8fn4nxUWzUiTLIxlSUBhJEqjn7e5tdGaP7HvAHttus18MyF
fXTBor41VzNf3EN2W8Nfe5H34u5TUnUQNi0szD8ZoVRDKKeviWZ0E+1zy0FVuf43
2wKnrlHz7qe3KB5dygRO25wFaZzen4l8gIzyolYVsQS+LBmbb1HePe0qeL3Dd50D
7CZBlb6Y0BskhYLO4VXhF2aEilwdMHRe7Ni2CKlgW9rruGyS1zjUCz8lRSo/FF58
oY/7B5tWZuXBtBEB5C7Um9vibGWC5+fiv1mPouhR1SJ2qSBpGRIlb5ZMbp1T+V3L
ep7MySj49/9RAoIBAGUOGXVjNw7+qydOJ3TqzkOkLAk3tlNgtL27S9faz7VYbKZI
IobF1M5TCkdMXX0e98e/+xjyldqtAoY+W6q3MGWp37UxWdV1ChAHf77nWA6buVVg
ITVqHzdNXKhBiqxkc6dVy8es7GLAgz4HMnVBfpFV3KEUUbEZL+OcJG98Ir5aODLc
fAKH6ytjmtfpQujSOdYOGREnglveGN4JoB0TghGAFpMAWRriR0DBZWQFvQKrxNwN
q3d0aP8Er0RqjN5S+CpH6RZxKjgrGbmX3mcDKDKsaSu0QzVJ/kIt0ZXYb/KCqyXP
Ddpf8CM2WGMTxef6IMnPSgKi01ZJRtyXHWR5iA8CggEBAKSdsakqnpdT/VqgrqQT
Nz6LNVQ6dhNbhdw6QK/vvR5MRVHUZcVlwH+w03q+9FJFJxMOw4SGbVwUWiJQhDlb
/YLSMqzzDwceYi8r+xH5tV7V7X8aW35R/Cio2oehlwymxUvvq6zt/3vEsK7MxD2s
WxydTbMftKj1awaETBqCiH7p3ozINCKEJnhBio3roi9YX5ntZ/2MuZvUCv95Ft5z
CRb9d0bjLLfGtd+K7zl8ux7r0Mql9prnsx07O1WDTn/TDqyHAJztljnXPHc4kzJn
o5dIzczhTCZyfSRqg79157vqhqykx7yWfZ2m9sncp8ArCC4HW2pUbEs6ExxS/cdh
M/ECggEBANEDWMkUaPrwgUiThAl3zj97k4gobIRZf6fp35LdZjhA3VAADGv/L4Yt
zHNxtKOO9h6sMX61jD/7BJ1RPqahgMsxUcG/DabfIeu7NusunTIyuquN8AgwgUKS
HZA98Lj8Ea/Xeac46OPWSbXLNaS6Vc47DbxqnTWjySSc16mLOxmls3NFB++sf0o8
8jk2zMqTnaz2tlRe6PO4fwWIFhjtBPdkUCAZ/jUQ7Ym72JSAkaWrLIqJFhIjuhCb
6na5HN0CGbwUEB12do6aQIQ7paV5gKn044lhI98T9M5+Rz7zXPLfAoLxCqVeCAyM
FVtawPpck3F1bQwimvE+pfP0+XJhLqA=
-----END PRIVATE KEY-----


@@ -1,2 +0,0 @@
a2.ignite.keystore.password=TFBlX9gsPm0=
a2.ignite.truststore.password=TFBlX9gsPm0=
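The removed `passwords.properties` file uses Java's `key=value` properties format, which EDEX reads on startup. A minimal parser sketch for that simple subset (the full Java `.properties` escaping and line-continuation rules are deliberately not handled); note that splitting on the *first* `=` keeps base64-style values with trailing `=` intact:

```python
def parse_properties(text):
    """Parse simple key=value lines; skips blanks and #/! comment lines.
    (Java's full .properties escape rules are not implemented.)"""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", "!")):
            continue
        key, _, value = line.partition("=")  # split at the FIRST '=' only
        props[key.strip()] = value.strip()
    return props

sample = (
    "a2.ignite.keystore.password=TFBlX9gsPm0=\n"
    "a2.ignite.truststore.password=TFBlX9gsPm0=\n"
)
creds = parse_properties(sample)
```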


@@ -1,30 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFOTCCAyGgAwIBAgIJAOz0RCYTMDmCMA0GCSqGSIb3DQEBCwUAMDMxDjAMBgNV
BAoMBUFXSVBTMRAwDgYDVQQLDAdUZXN0aW5nMQ8wDQYDVQQDDAZjYXJvb3QwHhcN
MjAxMTEzMTU0NzEwWhcNMzAxMTExMTU0NzEwWjAzMQ4wDAYDVQQKDAVBV0lQUzEQ
MA4GA1UECwwHVGVzdGluZzEPMA0GA1UEAwwGY2Fyb290MIICIjANBgkqhkiG9w0B
AQEFAAOCAg8AMIICCgKCAgEAnsWmnwIUEXg4BTBqr1datXTKDhgbSZVecE8M75U+
8U8boKXy7IcOa2V8SL0fSa23HIUok03Ed7ATxfRSriU2oEaPMBgovUd+kZ1931ru
AMERMg9wbJa9/cQFWhkwqV8XvOH99xV3OtbHQqkLOvXJk239bJNR3q4/C4poKusY
15elhMBWEqIUrAMkK9adn9uKX8DZK3IhFW1oVH/HTu5uBnz1q5GfsogYU3qapLqo
Ob65iH20m6bmUMbsMbPSMns8D9Wkb3Z+tNZilIBvZKVSnhIyUOx+IQgpH/aFdUpQ
otLykFc78UzF6fjTuh49HAshcjGsLjHRg7vuagClmdjNds+Xm6+Byeuv2YUD371p
wkDUDjhAK7VApvBdMANTlxVON67oRqCj9/JKkRhJyNL04+JnXSBVOoa/eAhwMRA/
TnKwfI/w49AZoy09ip3xsZ3f9x/ssP2608AIBVTknFX/CdxMsIhMt4hZlqUzNUlP
D4hwWsRg0Vgb4j+o8rqIjh+v4t3v8adOumi7h8nsUQYiwPrfr/RIrtRnQjblr1PY
vpXiJNm8hf6de+VldrLLV5bk6UPU/ik9fPRf6HwvAI5Y6oQTF93pZCtgD9I09CXn
zyo7veSK/KrLJO4Wv50RpIwn1weJ6grz6syUSpXCbux6Igu/ObcrszdIb+vDahX0
nesCAwEAAaNQME4wHQYDVR0OBBYEFFL1dmRTdXNfSXj2Dj1/KDybI656MB8GA1Ud
IwQYMBaAFFL1dmRTdXNfSXj2Dj1/KDybI656MAwGA1UdEwQFMAMBAf8wDQYJKoZI
hvcNAQELBQADggIBABtBApfuRTxEbOMFv6k8+EQATNgjIbuwjcs2ZwUvS5zl3xKI
rNTmTHkLseKXVXMTa8+0SjttrTVKb74dwB5pppqAYejLLlfqEMz0Vq/GjhtkW3MX
b4AEcLdN/oYmPYrzBxwUXt9lBwauDNFq9SDGdIewKPr2yoN58kBBB2y3BeCILHiH
g0Q7OxrJgM6GuD6ikMI6VHHXSRY5jn7JnA6WkmSUBI8tvA95Hdz750OZFtKPZqRA
KykuFOxg8J0EXnQgbGjQiMTePwZjvHcB15bPEyHF7LVUNKKg44TnI7Wf2lFcHB0N
+Eccu+ABXPW3jObq2hMpZHxB62I22VgjzQ6lTqM+4mJ0xpKSX79WzNYvBf/wZMuN
EEkZcuiNNMPJ3pVwQraLHWoYZ3LTTzbleUgcrfFOyl1+HIZ/o2Uzll9kS06D4/KN
l235PW+irCex35u1s+4X7G7hWSKFy2ZVPEpppBhtaF3bvAx4Oo2njse8MtlN6XNz
F70YerEvH+w9rXyhbVA87hOOz4Jm8eblIxPDn+59FEZ/m/3gR22dTfe4L7o9NfvX
SvoHVbrz0Bf+S0NZOblqQ4gwM3KjceSkWz19ZmAdjtUy6M3VIPQZYMvlkuUmeHI3
Rvni9txlRYV4G6tzH93DhWsSz6fY5VaFBPd6wxGxZq9QJ7UHrslx8Mweu/1x
-----END CERTIFICATE-----


@@ -0,0 +1,648 @@
#
# This is the "master security properties file".
#
# An alternate java.security properties file may be specified
# from the command line via the system property
#
# -Djava.security.properties=<URL>
#
# This properties file appends to the master security properties file.
# If both properties files specify values for the same key, the value
# from the command-line properties file is selected, as it is the last
# one loaded.
#
# Also, if you specify
#
# -Djava.security.properties==<URL> (2 equals),
#
# then that properties file completely overrides the master security
# properties file.
#
# To disable the ability to specify an additional properties file from
# the command line, set the key security.overridePropertiesFile
# to false in the master security properties file. It is set to true
# by default.
# In this file, various security properties are set for use by
# java.security classes. This is where users can statically register
# Cryptography Package Providers ("providers" for short). The term
# "provider" refers to a package or set of packages that supply a
# concrete implementation of a subset of the cryptography aspects of
# the Java Security API. A provider may, for example, implement one or
# more digital signature algorithms or message digest algorithms.
#
# Each provider must implement a subclass of the Provider class.
# To register a provider in this master security properties file,
# specify the Provider subclass name and priority in the format
#
# security.provider.<n>=<className>
#
# This declares a provider, and specifies its preference
# order n. The preference order is the order in which providers are
# searched for requested algorithms (when no specific provider is
# requested). The order is 1-based; 1 is the most preferred, followed
# by 2, and so on.
#
# <className> must specify the subclass of the Provider class whose
# constructor sets the values of various properties that are required
# for the Java Security API to look up the algorithms or other
# facilities implemented by the provider.
#
# There must be at least one provider specification in java.security.
# There is a default provider that comes standard with the JDK. It
# is called the "SUN" provider, and its Provider subclass
# named Sun appears in the sun.security.provider package. Thus, the
# "SUN" provider is registered via the following:
#
# security.provider.1=sun.security.provider.Sun
#
# (The number 1 is used for the default provider.)
#
# Note: Providers can be dynamically registered instead by calls to
# either the addProvider or insertProviderAt method in the Security
# class.
#
# List of providers and their preference orders (see above):
#
security.provider.1=sun.security.provider.Sun
security.provider.2=sun.security.rsa.SunRsaSign
security.provider.3=sun.security.ec.SunEC
security.provider.4=com.sun.net.ssl.internal.ssl.Provider
security.provider.5=com.sun.crypto.provider.SunJCE
security.provider.6=sun.security.jgss.SunProvider
security.provider.7=com.sun.security.sasl.Provider
security.provider.8=org.jcp.xml.dsig.internal.dom.XMLDSigRI
security.provider.9=sun.security.smartcardio.SunPCSC
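The numbered `security.provider.<n>` entries above define a 1-based search order: when no provider is requested explicitly, the first provider in the list that supplies the requested algorithm wins. A small simulation of that lookup rule; the algorithm tables here are hypothetical stand-ins (the real ones live inside each `Provider` subclass):

```python
PROVIDERS = {
    1: "sun.security.provider.Sun",
    2: "sun.security.rsa.SunRsaSign",
    3: "sun.security.ec.SunEC",
}

# Hypothetical per-provider algorithm tables, for illustration only.
ALGORITHMS = {
    "sun.security.provider.Sun": {"SHA-256", "SHA1PRNG"},
    "sun.security.rsa.SunRsaSign": {"SHA256withRSA"},
    "sun.security.ec.SunEC": {"SHA256withECDSA"},
}

def resolve(algorithm, providers=PROVIDERS, tables=ALGORITHMS):
    """Return the first provider, in preference order, supporting algorithm."""
    for n in sorted(providers):
        name = providers[n]
        if algorithm in tables.get(name, set()):
            return name
    raise LookupError(algorithm)
```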
#
# Sun Provider SecureRandom seed source.
#
# Select the primary source of seed data for the "SHA1PRNG" and
# "NativePRNG" SecureRandom implementations in the "Sun" provider.
# (Other SecureRandom implementations might also use this property.)
#
# On Unix-like systems (for example, Solaris/Linux/MacOS), the
# "NativePRNG" and "SHA1PRNG" implementations obtains seed data from
# special device files such as file:/dev/random.
#
# On Windows systems, specifying the URLs "file:/dev/random" or
# "file:/dev/urandom" will enable the native Microsoft CryptoAPI seeding
# mechanism for SHA1PRNG.
#
# By default, an attempt is made to use the entropy gathering device
# specified by the "securerandom.source" Security property. If an
# exception occurs while accessing the specified URL:
#
# SHA1PRNG:
# the traditional system/thread activity algorithm will be used.
#
# NativePRNG:
# a default value of /dev/random will be used. If neither
# is available, the implementation will be disabled.
# "file" is the only currently supported protocol type.
#
# The entropy gathering device can also be specified with the System
# property "java.security.egd". For example:
#
# % java -Djava.security.egd=file:/dev/random MainClass
#
# Specifying this System property will override the
# "securerandom.source" Security property.
#
# In addition, if "file:/dev/random" or "file:/dev/urandom" is
# specified, the "NativePRNG" implementation will be more preferred than
# SHA1PRNG in the Sun provider.
#
securerandom.source=file:/dev/random
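#
# As a sketch of how these implementations are reached from application
# code (the class name is illustrative; self-seeding from the configured
# source may block briefly on entropy-starved systems):

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class SeedSourceDemo {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        // "SHA1PRNG" draws its seed from the securerandom.source device
        // (or from -Djava.security.egd, which overrides it).
        SecureRandom prng = SecureRandom.getInstance("SHA1PRNG");
        byte[] bytes = new byte[16];
        prng.nextBytes(bytes); // self-seeds on first use
        System.out.println("generated " + bytes.length + " random bytes");
    }
}
```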
#
# A list of known strong SecureRandom implementations.
#
# To help guide applications in selecting a suitable strong
# java.security.SecureRandom implementation, Java distributions should
# indicate a list of known strong implementations using the property.
#
# This is a comma-separated list of algorithm and/or algorithm:provider
# entries.
#
securerandom.strongAlgorithms=NativePRNGBlocking:SUN
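#
# A sketch of how an application consumes this list (class name is
# illustrative; NativePRNGBlocking reads /dev/random and may block
# until enough entropy is available):

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class StrongRandomDemo {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        // getInstanceStrong() walks the securerandom.strongAlgorithms
        // entries and returns the first one that is available.
        SecureRandom strong = SecureRandom.getInstanceStrong();
        System.out.println(strong.getAlgorithm() + " / "
                + strong.getProvider().getName());
    }
}
```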
#
# Class to instantiate as the javax.security.auth.login.Configuration
# provider.
#
login.configuration.provider=sun.security.provider.ConfigFile
#
# Default login configuration file
#
#login.config.url.1=file:${user.home}/.java.login.config
#
# Class to instantiate as the system Policy. This is the name of the class
# that will be used as the Policy object.
#
policy.provider=sun.security.provider.PolicyFile
# The default is to have a single system-wide policy file,
# and a policy file in the user's home directory.
policy.url.1=file:${java.home}/lib/security/java.policy
policy.url.2=file:${user.home}/.java.policy
# whether or not we expand properties in the policy file
# if this is set to false, properties (${...}) will not be expanded in policy
# files.
policy.expandProperties=true
# whether or not we allow an extra policy to be passed on the command line
# with -Djava.security.policy=somefile. Comment out this line to disable
# this feature.
policy.allowSystemProperty=true
# whether or not we look into the IdentityScope for trusted Identities
# when encountering a 1.1 signed JAR file. If the identity is found
# and is trusted, we grant it AllPermission.
policy.ignoreIdentityScope=false
#
# Default keystore type.
#
keystore.type=jks
#
# Controls compatibility mode for the JKS keystore type.
#
# When set to 'true', the JKS keystore type supports loading
# keystore files in either JKS or PKCS12 format. When set to 'false'
# it supports loading only JKS keystore files.
#
keystore.type.compat=true
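#
# An illustrative sketch of how the default type surfaces through the
# KeyStore API (note that newer JDKs ship with "pkcs12" rather than
# "jks" as the default; the class name here is an assumption):

```java
import java.security.KeyStore;

public class KeystoreTypeDemo {
    public static void main(String[] args) throws Exception {
        // KeyStore.getDefaultType() reflects the keystore.type property;
        // with keystore.type.compat=true a "jks" keystore can also load
        // PKCS12-format files transparently.
        String type = KeyStore.getDefaultType();
        KeyStore ks = KeyStore.getInstance(type);
        ks.load(null, null); // create an empty, initialized keystore
        System.out.println(type + " keystore, " + ks.size() + " entries");
    }
}
```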
#
# A comma-separated list of packages. A package name that starts with or
# equals one of these entries will cause a security exception to be
# thrown when passed to checkPackageAccess unless the corresponding
# RuntimePermission ("accessClassInPackage."+package) has been granted.
package.access=sun.,\
com.sun.xml.internal.,\
com.sun.imageio.,\
com.sun.istack.internal.,\
com.sun.jmx.,\
com.sun.media.sound.,\
com.sun.naming.internal.,\
com.sun.proxy.,\
com.sun.corba.se.,\
com.sun.org.apache.bcel.internal.,\
com.sun.org.apache.regexp.internal.,\
com.sun.org.apache.xerces.internal.,\
com.sun.org.apache.xpath.internal.,\
com.sun.org.apache.xalan.internal.extensions.,\
com.sun.org.apache.xalan.internal.lib.,\
com.sun.org.apache.xalan.internal.res.,\
com.sun.org.apache.xalan.internal.templates.,\
com.sun.org.apache.xalan.internal.utils.,\
com.sun.org.apache.xalan.internal.xslt.,\
com.sun.org.apache.xalan.internal.xsltc.cmdline.,\
com.sun.org.apache.xalan.internal.xsltc.compiler.,\
com.sun.org.apache.xalan.internal.xsltc.trax.,\
com.sun.org.apache.xalan.internal.xsltc.util.,\
com.sun.org.apache.xml.internal.res.,\
com.sun.org.apache.xml.internal.security.,\
com.sun.org.apache.xml.internal.serializer.utils.,\
com.sun.org.apache.xml.internal.utils.,\
com.sun.org.glassfish.,\
com.oracle.xmlns.internal.,\
com.oracle.webservices.internal.,\
oracle.jrockit.jfr.,\
org.jcp.xml.dsig.internal.,\
jdk.internal.,\
jdk.nashorn.internal.,\
jdk.nashorn.tools.,\
com.sun.activation.registries.,\
com.sun.browser.,\
com.sun.glass.,\
com.sun.javafx.,\
com.sun.media.,\
com.sun.openpisces.,\
com.sun.prism.,\
com.sun.scenario.,\
com.sun.t2k.,\
com.sun.pisces.,\
com.sun.webkit.,\
jdk.management.resource.internal.
#
# A comma-separated list of packages. A package name that starts with or
# equals one of these entries will cause a security exception to be
# thrown when passed to checkPackageDefinition unless the corresponding
# RuntimePermission ("defineClassInPackage."+package) has been granted.
#
# by default, none of the class loaders supplied with the JDK call
# checkPackageDefinition.
#
package.definition=sun.,\
com.sun.xml.internal.,\
com.sun.imageio.,\
com.sun.istack.internal.,\
com.sun.jmx.,\
com.sun.media.sound.,\
com.sun.naming.internal.,\
com.sun.proxy.,\
com.sun.corba.se.,\
com.sun.org.apache.bcel.internal.,\
com.sun.org.apache.regexp.internal.,\
com.sun.org.apache.xerces.internal.,\
com.sun.org.apache.xpath.internal.,\
com.sun.org.apache.xalan.internal.extensions.,\
com.sun.org.apache.xalan.internal.lib.,\
com.sun.org.apache.xalan.internal.res.,\
com.sun.org.apache.xalan.internal.templates.,\
com.sun.org.apache.xalan.internal.utils.,\
com.sun.org.apache.xalan.internal.xslt.,\
com.sun.org.apache.xalan.internal.xsltc.cmdline.,\
com.sun.org.apache.xalan.internal.xsltc.compiler.,\
com.sun.org.apache.xalan.internal.xsltc.trax.,\
com.sun.org.apache.xalan.internal.xsltc.util.,\
com.sun.org.apache.xml.internal.res.,\
com.sun.org.apache.xml.internal.security.,\
com.sun.org.apache.xml.internal.serializer.utils.,\
com.sun.org.apache.xml.internal.utils.,\
com.sun.org.glassfish.,\
com.oracle.xmlns.internal.,\
com.oracle.webservices.internal.,\
oracle.jrockit.jfr.,\
org.jcp.xml.dsig.internal.,\
jdk.internal.,\
jdk.nashorn.internal.,\
jdk.nashorn.tools.,\
com.sun.activation.registries.,\
com.sun.browser.,\
com.sun.glass.,\
com.sun.javafx.,\
com.sun.media.,\
com.sun.openpisces.,\
com.sun.prism.,\
com.sun.scenario.,\
com.sun.t2k.,\
com.sun.pisces.,\
com.sun.webkit.,\
jdk.management.resource.internal.
#
# Determines whether this properties file can be appended to
# or overridden on the command line via -Djava.security.properties
#
security.overridePropertiesFile=true
#
# Determines the default key and trust manager factory algorithms for
# the javax.net.ssl package.
#
ssl.KeyManagerFactory.algorithm=SunX509
ssl.TrustManagerFactory.algorithm=PKIX
#
# The Java-level namelookup cache policy for successful lookups:
#
# any negative value: caching forever
# any positive value: the number of seconds to cache an address for
# zero: do not cache
#
# The default value is forever (FOREVER). For security reasons, addresses
# are cached forever when a security manager is set. When a security
# manager is not set, the default behavior in this implementation
# is to cache for 30 seconds.
#
# NOTE: setting this to anything other than the default value can have
# serious security implications. Do not set it unless
# you are sure you are not exposed to DNS spoofing attacks.
#
#networkaddress.cache.ttl=-1
# The Java-level namelookup cache policy for failed lookups:
#
# any negative value: cache forever
# any positive value: the number of seconds to cache negative lookup results
# zero: do not cache
#
# In some Microsoft Windows networking environments that employ
# the WINS name service in addition to DNS, name service lookups
# that fail may take a noticeably long time to return (approx. 5 seconds).
# For this reason the default caching policy is to maintain these
# results for 10 seconds.
#
#
networkaddress.cache.negative.ttl=10
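#
# Both TTL properties can also be set programmatically, before the
# first name lookup, via java.security.Security; an illustrative sketch
# (class name is an assumption):

```java
import java.security.Security;

public class DnsCacheTtlDemo {
    public static void main(String[] args) {
        // Security.setProperty overrides the values read from this file;
        // it must run before the first lookup to take effect.
        Security.setProperty("networkaddress.cache.ttl", "30");
        Security.setProperty("networkaddress.cache.negative.ttl", "10");
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```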
#
# Properties to configure OCSP for certificate revocation checking
#
# Enable OCSP
#
# By default, OCSP is not used for certificate revocation checking.
# This property enables the use of OCSP when set to the value "true".
#
# NOTE: SocketPermission is required to connect to an OCSP responder.
#
# Example,
# ocsp.enable=true
#
# Location of the OCSP responder
#
# By default, the location of the OCSP responder is determined implicitly
# from the certificate being validated. This property explicitly specifies
# the location of the OCSP responder. The property is used when the
# Authority Information Access extension (defined in RFC 3280) is absent
# from the certificate or when it requires overriding.
#
# Example,
# ocsp.responderURL=http://ocsp.example.net:80
#
# Subject name of the OCSP responder's certificate
#
# By default, the certificate of the OCSP responder is that of the issuer
# of the certificate being validated. This property identifies the certificate
# of the OCSP responder when the default does not apply. Its value is a string
# distinguished name (defined in RFC 2253) which identifies a certificate in
# the set of certificates supplied during cert path validation. In cases where
# the subject name alone is not sufficient to uniquely identify the certificate
# then both the "ocsp.responderCertIssuerName" and
# "ocsp.responderCertSerialNumber" properties must be used instead. When this
# property is set then those two properties are ignored.
#
# Example,
# ocsp.responderCertSubjectName="CN=OCSP Responder, O=XYZ Corp"
#
# Issuer name of the OCSP responder's certificate
#
# By default, the certificate of the OCSP responder is that of the issuer
# of the certificate being validated. This property identifies the certificate
# of the OCSP responder when the default does not apply. Its value is a string
# distinguished name (defined in RFC 2253) which identifies a certificate in
# the set of certificates supplied during cert path validation. When this
# property is set then the "ocsp.responderCertSerialNumber" property must also
# be set. When the "ocsp.responderCertSubjectName" property is set then this
# property is ignored.
#
# Example,
# ocsp.responderCertIssuerName="CN=Enterprise CA, O=XYZ Corp"
#
# Serial number of the OCSP responder's certificate
#
# By default, the certificate of the OCSP responder is that of the issuer
# of the certificate being validated. This property identifies the certificate
# of the OCSP responder when the default does not apply. Its value is a string
# of hexadecimal digits (colon or space separators may be present) which
# identifies a certificate in the set of certificates supplied during cert path
# validation. When this property is set then the "ocsp.responderCertIssuerName"
# property must also be set. When the "ocsp.responderCertSubjectName" property
# is set then this property is ignored.
#
# Example,
# ocsp.responderCertSerialNumber=2A:FF:00
#
# Policy for failed Kerberos KDC lookups:
#
# When a KDC is unavailable (network error, service failure, etc), it is
# put inside a blacklist and accessed less often for future requests. The
# value (case-insensitive) for this policy can be:
#
# tryLast
# KDCs in the blacklist are always tried after those not on the list.
#
# tryLess[:max_retries,timeout]
# KDCs in the blacklist are still tried by their order in the configuration,
# but with smaller max_retries and timeout values. max_retries and timeout
# are optional numerical parameters (default 1 and 5000, which means once
# and 5 seconds). Please notes that if any of the values defined here is
# more than what is defined in krb5.conf, it will be ignored.
#
# Whenever a KDC is detected as available, it is removed from the blacklist.
# The blacklist is reset when krb5.conf is reloaded. You can add
# refreshKrb5Config=true to a JAAS configuration file so that krb5.conf is
# reloaded whenever a JAAS authentication is attempted.
#
# Example,
# krb5.kdc.bad.policy = tryLast
# krb5.kdc.bad.policy = tryLess:2,2000
krb5.kdc.bad.policy = tryLast
# Algorithm restrictions for certification path (CertPath) processing
#
# In some environments, certain algorithms or key lengths may be undesirable
# for certification path building and validation. For example, "MD2" is
# generally no longer considered to be a secure hash algorithm. This section
# describes the mechanism for disabling algorithms based on algorithm name
# and/or key length. This includes algorithms used in certificates, as well
# as revocation information such as CRLs and signed OCSP Responses.
#
# The syntax of the disabled algorithm string is described by the
# following Java BNF-style grammar:
# DisabledAlgorithms:
# " DisabledAlgorithm { , DisabledAlgorithm } "
#
# DisabledAlgorithm:
# AlgorithmName [Constraint]
#
# AlgorithmName:
# (see below)
#
# Constraint:
# KeySizeConstraint
#
# KeySizeConstraint:
# keySize Operator DecimalInteger
#
# Operator:
# <= | < | == | != | >= | >
#
# DecimalInteger:
# DecimalDigits
#
# DecimalDigits:
# DecimalDigit {DecimalDigit}
#
# DecimalDigit: one of
# 1 2 3 4 5 6 7 8 9 0
#
# The "AlgorithmName" is the standard algorithm name of the disabled
# algorithm. See "Java Cryptography Architecture Standard Algorithm Name
# Documentation" for information about Standard Algorithm Names. Matching
# is performed using a case-insensitive sub-element matching rule. (For
# example, in "SHA1withECDSA" the sub-elements are "SHA1" for hashing and
# "ECDSA" for signatures.) If the assertion "AlgorithmName" is a
# sub-element of the certificate algorithm name, the algorithm will be
# rejected during certification path building and validation. For example,
# the assertion algorithm name "DSA" will disable all certificate algorithms
# that rely on DSA, such as NONEwithDSA, SHA1withDSA. However, the assertion
# will not disable algorithms related to "ECDSA".
#
# A "Constraint" provides further guidance for the algorithm being specified.
# The "KeySizeConstraint" requires a key of a valid size range if the
# "AlgorithmName" is of a key algorithm. The "DecimalInteger" indicates the
# key size specified in number of bits. For example, "RSA keySize <= 1024"
# indicates that any RSA key with key size less than or equal to 1024 bits
# should be disabled, and "RSA keySize < 1024, RSA keySize > 2048" indicates
# that any RSA key with key size less than 1024 or greater than 2048 should
# be disabled. Note that the "KeySizeConstraint" only applies to key
# algorithms.
#
# Note: This property is currently used by Oracle's PKIX implementation. It
# is not guaranteed to be examined and used by other implementations.
#
# Example:
# jdk.certpath.disabledAlgorithms=MD2, DSA, RSA keySize < 2048
#
#
jdk.certpath.disabledAlgorithms=MD2, RSA keySize < 1024
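#
# An illustrative sketch of reading the property back and splitting it
# into the DisabledAlgorithm entries defined by the grammar above (the
# class name is an assumption):

```java
import java.security.Security;

public class DisabledAlgorithmsDemo {
    public static void main(String[] args) {
        // Each comma-separated entry is an AlgorithmName with an
        // optional KeySizeConstraint, per the BNF-style grammar.
        String value = Security.getProperty("jdk.certpath.disabledAlgorithms");
        if (value != null) {
            for (String entry : value.split(",")) {
                System.out.println(entry.trim());
            }
        }
    }
}
```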
# Algorithm restrictions for Secure Socket Layer/Transport Layer Security
# (SSL/TLS) processing
#
# In some environments, certain algorithms or key lengths may be undesirable
# when using SSL/TLS. This section describes the mechanism for disabling
# algorithms during SSL/TLS security parameters negotiation, including
# protocol version negotiation, cipher suites selection, peer authentication
# and key exchange mechanisms.
#
# Disabled algorithms will not be negotiated for SSL/TLS connections, even
# if they are enabled explicitly in an application.
#
# For PKI-based peer authentication and key exchange mechanisms, this list
# of disabled algorithms will also be checked during certification path
# building and validation, including algorithms used in certificates, as
# well as revocation information such as CRLs and signed OCSP Responses.
# This is in addition to the jdk.certpath.disabledAlgorithms property above.
#
# See the specification of "jdk.certpath.disabledAlgorithms" for the
# syntax of the disabled algorithm string.
#
# Note: This property is currently used by Oracle's JSSE implementation.
# It is not guaranteed to be examined and used by other implementations.
#
# Example:
# jdk.tls.disabledAlgorithms=MD5, SSLv3, DSA, RSA keySize < 2048
jdk.tls.disabledAlgorithms=SSLv3, RC4, DH keySize < 768
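# A sketch showing the effect on JSSE defaults (class name illustrative):
# protocols disabled here, such as SSLv3, are filtered out of the default
# SSL parameters even though they may still appear as "supported".

```java
import java.util.Arrays;
import javax.net.ssl.SSLContext;

public class TlsDefaultsDemo {
    public static void main(String[] args) throws Exception {
        // The default SSLParameters reflect jdk.tls.disabledAlgorithms.
        SSLContext ctx = SSLContext.getDefault();
        String[] enabled = ctx.getDefaultSSLParameters().getProtocols();
        System.out.println(Arrays.toString(enabled));
    }
}
```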
# Legacy algorithms for Secure Socket Layer/Transport Layer Security (SSL/TLS)
# processing in JSSE implementation.
#
# In some environments, a certain algorithm may be undesirable but it
# cannot be disabled because of its use in legacy applications. Legacy
# algorithms may still be supported, but applications should not use them
# as the security strength of legacy algorithms is usually insufficient
# in practice.
#
# During SSL/TLS security parameters negotiation, legacy algorithms will
# not be negotiated unless there are no other candidates.
#
# The syntax of the disabled algorithm string is described by the
# following Java BNF-style grammar:
# LegacyAlgorithms:
# " LegacyAlgorithm { , LegacyAlgorithm } "
#
# LegacyAlgorithm:
# AlgorithmName (standard JSSE algorithm name)
#
# See the specification of security property "jdk.certpath.disabledAlgorithms"
# for the syntax and description of the "AlgorithmName" notation.
#
# Per SSL/TLS specifications, cipher suites have the form:
# SSL_KeyExchangeAlg_WITH_CipherAlg_MacAlg
# or
# TLS_KeyExchangeAlg_WITH_CipherAlg_MacAlg
#
# For example, the cipher suite TLS_RSA_WITH_AES_128_CBC_SHA uses RSA as the
# key exchange algorithm, AES_128_CBC (128 bits AES cipher algorithm in CBC
# mode) as the cipher (encryption) algorithm, and SHA-1 as the message digest
# algorithm for HMAC.
#
# The LegacyAlgorithm can be one of the following standard algorithm names:
# 1. JSSE cipher suite name, e.g., TLS_RSA_WITH_AES_128_CBC_SHA
# 2. JSSE key exchange algorithm name, e.g., RSA
# 3. JSSE cipher (encryption) algorithm name, e.g., AES_128_CBC
# 4. JSSE message digest algorithm name, e.g., SHA
#
# See SSL/TLS specifications and "Java Cryptography Architecture Standard
# Algorithm Name Documentation" for information about the algorithm names.
#
# Note: This property is currently used by Oracle's JSSE implementation.
# It is not guaranteed to be examined and used by other implementations.
# There is no guarantee the property will continue to exist or be of the
# same syntax in future releases.
#
# Example:
# jdk.tls.legacyAlgorithms=DH_anon, DES_CBC, SSL_RSA_WITH_RC4_128_MD5
#
jdk.tls.legacyAlgorithms= \
K_NULL, C_NULL, M_NULL, \
DHE_DSS_EXPORT, DHE_RSA_EXPORT, DH_anon_EXPORT, DH_DSS_EXPORT, \
DH_RSA_EXPORT, RSA_EXPORT, \
DH_anon, ECDH_anon, \
RC4_128, RC4_40, DES_CBC, DES40_CBC
# The pre-defined default finite field Diffie-Hellman ephemeral (DHE)
# parameters for Transport Layer Security (SSL/TLS/DTLS) processing.
#
# In traditional SSL/TLS/DTLS connections where the finite field DHE
# parameter negotiation mechanism is not used, the server offers the client
# group parameters, base generator g and prime modulus p, for DHE key exchange.
# It is recommended to use dynamic group parameters. This property defines
# a mechanism that allows you to specify custom group parameters.
#
# The syntax of this property string is described by the following Java BNF-style grammar:
# DefaultDHEParameters:
# DefinedDHEParameters { , DefinedDHEParameters }
#
# DefinedDHEParameters:
# "{" DHEPrimeModulus , DHEBaseGenerator "}"
#
# DHEPrimeModulus:
# HexadecimalDigits
#
# DHEBaseGenerator:
# HexadecimalDigits
#
# HexadecimalDigits:
# HexadecimalDigit { HexadecimalDigit }
#
# HexadecimalDigit: one of
# 0 1 2 3 4 5 6 7 8 9 A B C D E F a b c d e f
#
# Whitespace characters are ignored.
#
# The "DefinedDHEParameters" defines the custom group parameters, prime
# modulus p and base generator g, for a particular size of prime modulus p.
# The "DHEPrimeModulus" defines the hexadecimal prime modulus p, and the
# "DHEBaseGenerator" defines the hexadecimal base generator g of a group
# parameter. It is recommended to use safe primes for the custom group
# parameters.
#
# If this property is not defined or the value is empty, the underlying JSSE
# provider's default group parameter is used for each connection.
#
# If the property value does not follow the grammar, or a particular group
# parameter is not valid, the connection will fall back and use the
# underlying JSSE provider's default group parameter.
#
# Note: This property is currently used by OpenJDK's JSSE implementation. It
# is not guaranteed to be examined and used by other implementations.
#
# Example:
# jdk.tls.server.defaultDHEParameters=
# { \
# FFFFFFFF FFFFFFFF C90FDAA2 2168C234 C4C6628B 80DC1CD1 \
# 29024E08 8A67CC74 020BBEA6 3B139B22 514A0879 8E3404DD \
# EF9519B3 CD3A431B 302B0A6D F25F1437 4FE1356D 6D51C245 \
# E485B576 625E7EC6 F44C42E9 A637ED6B 0BFF5CB6 F406B7ED \
# EE386BFB 5A899FA5 AE9F2411 7C4B1FE6 49286651 ECE65381 \
# FFFFFFFF FFFFFFFF, 2}


@@ -1,24 +1,18 @@
-----BEGIN CERTIFICATE-----
MIIEBzCCAu8CCQCLCVKLLCRALDANBgkqhkiG9w0BAQsFADAzMQ4wDAYDVQQKDAVB
V0lQUzEQMA4GA1UECwwHVGVzdGluZzEPMA0GA1UEAwwGY2Fyb290MCAXDTIyMDEz
MTE3MTkzMVoYDzIxMjEwMTMxMTcxOTMxWjBWMQswCQYDVQQGEwJYWDEVMBMGA1UE
BwwMRGVmYXVsdCBDaXR5MQ4wDAYDVQQKDAVBV0lQUzEQMA4GA1UECwwHVGVzdGlu
ZzEOMAwGA1UEAwwFZ3Vlc3QwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC
AQC9GC+8Nhj8a6y4k8uwIIlVo6w7J8zEfeJtDP8++cj8srbgw77guCc0gkITrMm+
P0nIkSJxxUaIj++E75CKampAkcYH6hPFU4hOQJWL2QTRlV7VhoyFto8jXF8YGV88
6f/Z2UPAwW9dii9HdFz1oYJTuSSDzBBQkst1/2JxcA28WncJ95QZf7t1PKNrLwzy
SkPbjgaUww64FrQp2AXP6KHTR53S1x/Mve4fp7y+rufkByrJIBxVI3wGLADkVinW
5avZAhRBUZ0DCkRcR+1um6vZWwqqsRRdu9W/LTi3Ww98DJGTeS1Uc2mYiGKz1lSU
pYLm5e8ffUO6mJU70LaPQfuv37ABYm8ZdX3JuKlB9GWuHZv9rm1Dgp/MXv8DzuvN
x5bdbGKxxyl1QDNa3T9AWxLtKJviPDgGKyisLxMuNWRJcfa4a2QkF/b8x9PfaSrB
OsprEdpMQe5jdMN2OvFIAyk9lyi2nLkyocVneAVAx0OuZzbpQMRT2bl0UMVjyh+5
UoE/MnNVRKxxkfsaUEPSSz4ZjjWHVIoTm6Cmvsc58Qwv4KddG5QttuXqWnFnxnkk
+fso3bNLG1cFmIqwKzSH15iIvY3gGvgiDuj4op1RfQ2Idejkb0WjOJNgIHfxFdTr
ZkO9AD9i/b4Gw14t1dLq5Jdk1SLg4Huz3SQHSbv91Bd9AwIDAQABMA0GCSqGSIb3
DQEBCwUAA4IBAQBfBzo/6E6x2wL0+TduYzCL3iueGQaZPxg1g5aqa4JtWCu+ZIsj
8rpYlJTQYBjSAveYe/6eu1oQlZgKDHLEy0GmmCZiN4rp/xDL9dy9SuFaEorgF2Ue
sJnxMSODgYMMNti0wCXmztTSy4h/Eo6yLQvr/wvcQqU8eo19jUoMT9jloiM/qhPr
3Mm2jTY/amdqLNlwHHmd7KaD3xxKJ/khM6d4HTLhoSSTz32MEYIT+KBb3lUjaUjC
N6d2knROJKJDMxamNROc1M5z+iweeEdp//KJ/zDVRlawfG2Q1vEf5hIuwrkLVMnm
WMTdYqJ/r1FQLWAzJn++pwwxzhYyho6vlN/V
MIIC4TCCAckCCQCIm5v8zLBtjTANBgkqhkiG9w0BAQUFADAzMQ4wDAYDVQQKDAVB
V0lQUzEQMA4GA1UECwwHVGVzdGluZzEPMA0GA1UEAwwGY2Fyb290MB4XDTE3MDEz
MTIzMDE0NFoXDTIyMDEzMDIzMDE0NFowMjEOMAwGA1UECgwFQVdJUFMxEDAOBgNV
BAsMB1Rlc3RpbmcxDjAMBgNVBAMMBWd1ZXN0MIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAquAPNSusMUr4hewdfLpqGCFGVeVdLjfJr4sRdQ/JsIrR/0WT
EmD62cg5vC6SoH+evN9D+ZS477XniBBxb0PNc7mqsQU0btDudJEKw2LPfUMgU/uD
/QcNIC0ZGe3Fv9q265fufH8JkIJCZRJkUtsESL9U8io6cluVWWVkzRqYOrMo/86y
Y9Enfm6akKbcM8dued0gqU5j01Senb9jpQNCVDJ7ZZekw2uD4FMSXH40JXsD5QQt
5HYfNkGX3J2K5wsbW43DNCP+uHTNaBToiQ10syJ7gUA2bFXEXDGW8uTdofW/vWGZ
MM74XMe4EG8fyZpH6lKm+gvZ/oakcVJwJ4mVuQIDAQABMA0GCSqGSIb3DQEBBQUA
A4IBAQCDtYQfnDqXoyUqOEwR5D/Y8lyqU3EBCAGMdu3L/nrNiQ/noq1B/nkZC3Cg
BCmBWriI7/C6avIJC4bmR+GOTC2bPSA4xEOqTg8makUN1pJWA6cjw5K6wxIDn9L3
CdwT4tz1SK6rBXsWLG/yIwNm60ahg9C/qs2z0+HfZy7kNizRxS2AR049GT5KPpGt
Z+YTPq5ZyKkVoyIzo5ffWT9ZlC259bVq+L5MiDbhdK/KFP9xIyce2icIdpMy7MI2
mJr2fIN7JzQJP/j0ls+KjUU+euwI5ZGLoNWkt5OlmYel+uJsc0oSDvdGMPAD334c
M7ZuFuR9lYhdK5SkDgZ9VH8PZqhJ
-----END CERTIFICATE-----


@@ -1,52 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQC9GC+8Nhj8a6y4
k8uwIIlVo6w7J8zEfeJtDP8++cj8srbgw77guCc0gkITrMm+P0nIkSJxxUaIj++E
75CKampAkcYH6hPFU4hOQJWL2QTRlV7VhoyFto8jXF8YGV886f/Z2UPAwW9dii9H
dFz1oYJTuSSDzBBQkst1/2JxcA28WncJ95QZf7t1PKNrLwzySkPbjgaUww64FrQp
2AXP6KHTR53S1x/Mve4fp7y+rufkByrJIBxVI3wGLADkVinW5avZAhRBUZ0DCkRc
R+1um6vZWwqqsRRdu9W/LTi3Ww98DJGTeS1Uc2mYiGKz1lSUpYLm5e8ffUO6mJU7
0LaPQfuv37ABYm8ZdX3JuKlB9GWuHZv9rm1Dgp/MXv8DzuvNx5bdbGKxxyl1QDNa
3T9AWxLtKJviPDgGKyisLxMuNWRJcfa4a2QkF/b8x9PfaSrBOsprEdpMQe5jdMN2
OvFIAyk9lyi2nLkyocVneAVAx0OuZzbpQMRT2bl0UMVjyh+5UoE/MnNVRKxxkfsa
UEPSSz4ZjjWHVIoTm6Cmvsc58Qwv4KddG5QttuXqWnFnxnkk+fso3bNLG1cFmIqw
KzSH15iIvY3gGvgiDuj4op1RfQ2Idejkb0WjOJNgIHfxFdTrZkO9AD9i/b4Gw14t
1dLq5Jdk1SLg4Huz3SQHSbv91Bd9AwIDAQABAoICAH7M+D2inTCvV5xSZ3VM7CsE
XVsxvitJKwvrekIVqARkJyQjvxzcAFZCvuKIrKQptmWLhWh7XGf49SnUp71ZzLRN
zFjES8u3zyCCSIYF2ihcnMJcvmBv4h5ZM99qLCYh2BKSkc9xJye3oSquSiPg0Q8p
iOXkclBFj7ApuC7PcDaNB2QkpChRMjhUmFUosOrMiCJzY9Bf2L/zYY7psEQSAGo4
jQm0fjuCZWrOxU+s5A1SDQvfv4AMEn/lBBgZ+2aCjrEvpruCaeJ/AQZMqVfRhfR0
C3wY0MpmSdgwD+dMZd7OYtRcntwRpI7HbkCgCgm/zz7ck3QvQLqg1PnOZI0+NvI6
tAu9skvmKFWp0mZpi96JXGzvwkTfrxWOM0GJDsomPJfOKj1kZucbFhLL4XcTm54W
XrW2UfiUF2jezqmp40HlPB9XMV2bIevmu4fzmdhF/ouJBjcKJmGLSAAqlBsDG18s
nwTKItVR0cXhDyCzWkZKV9tTN1hQn8A/9P2lghgVNgDFs2BOTJCzMjPFkv/5t5FB
Gv5DnxTPQU3zgEASWklBSlLdX+1wAg6m7ZCFox9CHqo3mFJyqJ/YwtKsVEK/Kdr2
6Vc7rSSF1xmGohPeXcykovrxQIlhlMWZZ4Y8q2Dx12lVxr2fqemhWfLKFUk8fOZD
/v8ig9zMrb/5EbU90stZAoIBAQDjhBqb+gBMZwaKQQq67YVw72s+Mwf4WDYzZgEI
emEaYJwfjDxLOp3LLOhixCHUZEEbAMmEQzSZFjvKmsDi/3aVhThLf7LBTrbQecT9
57jIfEIieSbOwPE3F7lNHPzk9C2rjkAKMz88fC/UUvafqW4Oa9ExzkW76LErwJO5
2k5OcFDf8004S1580KArT6pF1CmLKZzhu+81QCiGpXUb2REMtVKR0hMtWyM3YL9a
UIqITetfsRqY87JcD563YUIBgLXIcnJcORxGGW3LS6H0cr5IfAxBrXvkhNfy/XMp
Exd+k2C2G94gFR9r8rzoVDF8v37LDWeJTaiwvNscscPfDyf/AoIBAQDUxKuoIxTz
uo5wY4TtWBK9xXMgerPCyMhD7j8myjp8jjidvl/SgV8dSnOD/fEZzKr1RPP6l7WR
iUL72MRqwVo2adfDSndnzDzl/YfulJqegIz0zGHKgCj3uvfh6ke1gj0VSBMzC2u/
C8Nki6EU0n7n5+zA5M27EPrhvf+Ev114c/6PDtqGvP2H5M7VF3KwsGBCclIWOS19
t8PU3o3tQvGmb4bVBt2KwDhhAM4O1FAzUwGDs9QjpwFTbZkIdiCfWaRo3pnja2Cd
6Qr9vpE+7fHEzoqSzewezseo3fuIT0WKroTKhpL9VwRj5NZikEePLJ8osxjmwmXh
WpGg7yMtcwr9AoIBAQCEoLHSUz5xS22okpnqtiOf3jGqJJ10zBdshv37LzwD4GWi
jmFniVgK5LbjPGpsIbVCRIc0ruiuhSN9zBC9QyahqvNSL7LItVYk2ZdYXAh/9s+m
wPE6fYcgEphWt5tE7ILjCx2R1KX8YHiRUXurP12E0p00Z4aHL/J4Ct8S7IvRde/v
XSmas3T1VbjJBru/0RoWob9uZ9veMvRs6W8HONaTjfAASXIccpBo6+EgiOr44lNf
iSJ0HzvOJtzjEbMkpR9TJkQ8Np6gzpoOdJyIn4sFPir27mbWpAovAEhtnU+I3ej2
v/AQy79xciNlXA8tJYSIYdwFUlwQC0e/xnDkSzWJAoIBAGoS9sVnYA22x0aOxvmQ
/B7yLlhV9AK7GOSNBZzwG0J3oRA7lggbiXDP6lE2rBnBqMWIr94R/mplNSjbw+i5
JqGUGQZ6vJbaAs5inH88RO2ahyuQLXzIciQ3aVeO9lsuaAeRHElJe1kOo0YgOpln
6+7v+F+ecla9u2YJ1Da5NP9VTObDb/zWgctbLiacfwhJlmPqHLSJov1XPWGF5toP
kuv4FA9mUdLXzAPIY/KOtMExs8KWR7/Shd2y+SV3xwHKriW+PJhdsxhm05z3gfAO
rocAtaNE2F/vlSjCKqGla7UdFoTlnKiC1mR69MrExXhCtcKTr2l0J1i3T30dW7tP
7H0CggEBAJo8K8YmOi4fpksyUPr0j9UdrD69Q2bHsMPS9v2kUw/u3DQCpxc0o9Tb
AzqEUBwQjz+yd5Einv2wjn/p4hT8NgHT97Jz748z1pJHWJTecz3gHnZkRmQ1NxZv
CI1TRBx3Eh8T8+CfiwGMgoWQeWEG+FdQMHJQG/sD0SCL2jhzKLeGKYFU7ITbvMD4
ahLcX1hRBM1EuZsUoLo9CDSNFG77nvMPggSAdOiQHhd/EmYuk3fJ5ByNxFySPxUU
RkGQlurco7sjPU2xWts9vB2ws1jkFRZTi7yGu5H2d7qP2ZCuKKY+CnxvXuv3oT5P
Gc1x30eRgBAJVj6koG9CJ4Tb4y7Rp9E=
MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQCq4A81K6wxSviF
7B18umoYIUZV5V0uN8mvixF1D8mwitH/RZMSYPrZyDm8LpKgf56830P5lLjvteeI
EHFvQ81zuaqxBTRu0O50kQrDYs99QyBT+4P9Bw0gLRkZ7cW/2rbrl+58fwmQgkJl
EmRS2wRIv1TyKjpyW5VZZWTNGpg6syj/zrJj0Sd+bpqQptwzx2553SCpTmPTVJ6d
v2OlA0JUMntll6TDa4PgUxJcfjQlewPlBC3kdh82QZfcnYrnCxtbjcM0I/64dM1o
FOiJDXSzInuBQDZsVcRcMZby5N2h9b+9YZkwzvhcx7gQbx/JmkfqUqb6C9n+hqRx
UnAniZW5AgMBAAECggEBAJ9n2ogNr9tkRygYNwHunZ7ZG9Zx3FWLcbkzkRuVmN/7
ASCU9TjGA/46zbGB+QOFSr6DwdQJK+Vj2xSR0mCr7fQxls0BQALJIkrYLCRN/6ap
gnUWQ/E+LL6Bk9Mef8YU8WQjHjZCBNgszGehmrm42+xJoaMwRcn9Kfx1nG3Ci5Tl
m+1PG/T5LZv4e6I++RzsqBNdGhvRic44j8vYfVrr16ciSofFo1HJ4NmVCbIalPjU
K+wxNRUD3jATeU70B582VXXSJf1r26cG3vZHaMCUWLPPNjKDD8Dsr9Q53iJ4QZYA
DpgtLON6KxZSoRJeYS3Gdru0iquBJRMYV2T2dIjMkgECgYEA07++DbqCAnLI5ZjW
Du303ZWzfwdz0iQLXksVPR2NC1u8lMJwbLgF1fbcss+YET91jfrrW3Cd0nOxFqX4
kwrQtuRCs2qW6MAusqX+tAYC6//cMIjzNikGHC+4b/1MmPfvqtVXVM8uhxlny3Dm
gzcdFa9YN7PlDr5jpnVbWj1zzFkCgYEAzpWhm9wDEX6e0PxtbKPNVgsoYJ3+n3QR
i0XgJxFNDTpcFXEujwywsgauddSEOb8iPZgsQyfTXLr8qfhobTLSEYqDqmpzcm3O
xr+uhCL7Jy26EfuNnha5Ntzqs1KosLxoQwPx5JMKSzDPRApV/VsLAgA+GVn7rfsM
ri/DFygtaGECgYEAiuc6FillxZNko/BpYyvHuF/OeqL54DzS1E0upoOvFWddQrx2
IWtqMJya1BWH7LCTPcr+/2HVtcs8vN2tPVAX8BG2i5l9WztOptRrS86xtfyGhbQg
z0OEBZNsStJ/n8ztBESk4DZ0kB0jUHpETIkn5CS9GvVAajaMihJsFbtALikCgYB+
0ltBHKMBlXMYJy9h93kyLm1eSwAqkY3Hq2D9euCLk10+iJollYvP7swhaBK4hL8X
gxkBLSzTi7NbATXSe9V8kUVdVDaFdCXx23Dei78VgTvumDiLabXQmXS4G7GVtkRn
h79zLFWwbUmAorvBaqfqVY3J8HTSjQFu2cFxsOeXYQKBgQCOTiPntXCWWEsCBJp5
QOaHW1MZXYz8diVpNiZ/tJGhT4Van8ja7DJ0r+zWNrX5lMRct9E5jbbSgywsaY4W
1sMVmAraKrbnIlJvv+nyBoU0uLTiBVNsBhMy7UkqrgH03o+aq1zSqCrIvumxuOAe
cbq1B0DdERrn7goOytsviSDs1Q==
-----END PRIVATE KEY-----


@@ -5,6 +5,46 @@
<include file="${edex.home}/conf/logback-edex-loggers.xml" />
<include file="${edex.home}/conf/logback-edex-hibernate-logger.xml" />
<!-- BandwidthManager log -->
<appender name="BandwidthManagerLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>bandwidth</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<!-- data delivery log -->
<appender name="DataDeliveryLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>datadelivery</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<!-- data delivery Notification log -->
<appender name="NotificationLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>notification</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<!-- data delivery Retrieval log -->
<appender name="RetrievalLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>retrieval</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<!-- Purge log -->
<appender name="PurgeLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>purge</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<appender name="ThreadBasedLog" class="com.raytheon.uf.common.logback.appender.ThreadBasedAppender">
<defaultAppenderName>asyncConsole</defaultAppenderName>
<appender-ref ref="asyncConsole"/>
@@ -17,6 +57,81 @@
<level value="ERROR"/>
</logger>
<logger name="org.apache.cxf.interceptor.LoggingOutInterceptor" additivity="false">
<level value="WARN"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="org.apache.cxf.interceptor.LoggingInInterceptor" additivity="false">
<level value="WARN"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="com.raytheon.uf.common.datadelivery" additivity="false">
<level value="INFO"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.service" additivity="false">
<level value="INFO"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.request" additivity="false">
<level value="INFO"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="com.raytheon.uf.common.datadelivery.event" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.event" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.edex.registry.ebxml.services.notification" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.registry.federation" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.registry.replication" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.common.datadelivery.retrieval" additivity="false">
<level value="INFO"/>
<appender-ref ref="RetrievalLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.retrieval" additivity="false">
<level value="INFO"/>
<appender-ref ref="RetrievalLog"/>
</logger>
<logger name="com.raytheon.uf.common.datadelivery.bandwidth" additivity="false">
<level value="INFO"/>
<appender-ref ref="BandwidthManagerLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.bandwidth" additivity="false">
<level value="INFO"/>
<appender-ref ref="BandwidthManagerLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.harvester.purge" additivity="false">
<level value="INFO"/>
<appender-ref ref="PurgeLog"/>
</logger>
<!-- default logging -->
<root>
<level value="INFO"/>


@@ -20,15 +20,4 @@
<appender name="PerformanceLogAsync" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="PerformanceLog" />
</appender>
<appender name="IgniteLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>ignite</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<appender name="IgniteLogAsync" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="IgniteLog" />
</appender>
</included>


@@ -21,15 +21,4 @@
<appender name="PerformanceLogAsync" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="PerformanceLog" />
</appender>
<appender name="IgniteLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>ignite</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<appender name="IgniteLogAsync" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="IgniteLog" />
</appender>
</included>


@@ -9,11 +9,6 @@
<appender-ref ref="PerformanceLogAsync" />
</logger>
<logger name="org.apache.ignite" additivity="false">
<level value="INFO"/>
<appender-ref ref="IgniteLogAsync" />
</logger>
<!-- used by c3p0 -->
<logger name="com.mchange">
<level value="ERROR"/>
@@ -33,7 +28,7 @@
<logger name="org.apache.qpid">
<level value="INFO"/>
</logger>
<logger name="org.apache.qpid.jms.JmsMessageProducer">
<logger name="org.apache.qpid.client.BasicMessageProducer_0_10">
<level value="WARN"/>
</logger>
<logger name="org.apache.xbean.spring">


@@ -88,13 +88,6 @@
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<appender name="AuditLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>audit</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<appender name="ThreadBasedLog" class="com.raytheon.uf.common.logback.appender.ThreadBasedAppender">
<defaultAppenderName>asyncConsole</defaultAppenderName>
<appender-ref ref="asyncConsole"/>
@@ -107,18 +100,22 @@
<level value="INFO"/>
<appender-ref ref="shef" />
</logger>
<logger name="com.raytheon.uf.edex.purgesrv" additivity="false">
<level value="INFO"/>
<appender-ref ref="purge"/>
</logger>
<logger name="com.raytheon.uf.edex.database.purge" additivity="false">
<level value="INFO"/>
<appender-ref ref="purge"/>
</logger>
<logger name="com.raytheon.edex.db.purge.DataPurgeRegistry" additivity="false">
<level value="INFO"/>
<appender-ref ref="purge"/>
</logger>
<logger name="RouteFailedLog" additivity="false">
<level value="WARN"/>
<appender-ref ref="RouteFailedLog"/>
@@ -143,33 +140,28 @@
<level value="ERROR"/>
<appender-ref ref="FailedTriggerLog"/>
</logger>
<logger name="com.raytheon.uf.edex.ohd" additivity="false">
<level value="INFO"/>
<appender-ref ref="OhdLog" />
<appender-ref ref="console" />
</logger>
<logger name="com.raytheon.uf.edex.plugin.mpe" additivity="false">
<level value="INFO"/>
<appender-ref ref="MpeLog" />
</logger>
<logger name="com.raytheon.uf.common.mpe.gribit2" additivity="false">
<level value="INFO"/>
<appender-ref ref="MpeLog" />
</logger>
<logger name="com.raytheon.uf.edex.plugin.mpe.test" additivity="false">
<level value="INFO"/>
<appender-ref ref="MpeValidateLog" />
</logger>
<logger name="com.raytheon.uf.edex.database.health.DataStorageAuditer" additivity="false">
<level value="INFO"/>
<appender-ref ref="AuditLog" />
</logger>
<logger name="com.raytheon.uf.edex.database.health.DefaultDataStorageAuditListener" additivity="false">
<level value="INFO"/>
<appender-ref ref="AuditLog" />
</logger>
<!-- default logging -->
<root>
<level value="INFO"/>


@@ -5,6 +5,46 @@
<include file="${edex.home}/conf/logback-edex-loggers.xml" />
<include file="${edex.home}/conf/logback-edex-hibernate-logger.xml" />
<!-- BandwidthManager log -->
<appender name="BandwidthManagerLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>bandwidth</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<!-- data delivery log -->
<appender name="DataDeliveryLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>datadelivery</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<!-- data delivery Notification log -->
<appender name="NotificationLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>notification</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<!-- data delivery Retrieval log -->
<appender name="RetrievalLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>retrieval</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<!-- Purge log -->
<appender name="PurgeLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>purge</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<appender name="ThreadBasedLog" class="com.raytheon.uf.common.logback.appender.ThreadBasedAppender">
<defaultAppenderName>asyncConsole</defaultAppenderName>
<appender-ref ref="asyncConsole"/>
@@ -17,6 +57,81 @@
<level value="ERROR"/>
</logger>
<logger name="org.apache.cxf.interceptor.LoggingOutInterceptor" additivity="false">
<level value="WARN"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="org.apache.cxf.interceptor.LoggingInInterceptor" additivity="false">
<level value="WARN"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="com.raytheon.uf.common.datadelivery" additivity="false">
<level value="INFO"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.service" additivity="false">
<level value="INFO"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.request" additivity="false">
<level value="INFO"/>
<appender-ref ref="DataDeliveryLog"/>
</logger>
<logger name="com.raytheon.uf.common.datadelivery.event" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.event" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.edex.registry.ebxml.services.notification" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.registry.federation" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.registry.replication" additivity="false">
<level value="INFO"/>
<appender-ref ref="NotificationLog"/>
</logger>
<logger name="com.raytheon.uf.common.datadelivery.retrieval" additivity="false">
<level value="INFO"/>
<appender-ref ref="RetrievalLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.retrieval" additivity="false">
<level value="INFO"/>
<appender-ref ref="RetrievalLog"/>
</logger>
<logger name="com.raytheon.uf.common.datadelivery.bandwidth" additivity="false">
<level value="INFO"/>
<appender-ref ref="BandwidthManagerLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.bandwidth" additivity="false">
<level value="INFO"/>
<appender-ref ref="BandwidthManagerLog"/>
</logger>
<logger name="com.raytheon.uf.edex.datadelivery.harvester.purge" additivity="false">
<level value="INFO"/>
<appender-ref ref="PurgeLog"/>
</logger>
<!-- default logging -->
<root>
<level value="INFO"/>


@@ -2,6 +2,18 @@
<include file="${edex.home}/conf/logback-edex-properties.xml"/>
<include file="${edex.home}/conf/${LOG_APPENDERS_CONFIG}" />
<!-- ProductSrvRequest log -->
<appender name="ProductSrvRequestLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
<name>productSrvRequest</name>
</rollingPolicy>
<encoder class="com.raytheon.uf.common.logback.encoder.UFStdEncoder"/>
</appender>
<appender name="ProductSrvRequestLogAsync" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="ProductSrvRequestLog" />
</appender>
<!-- TextDBSrvRequest log -->
<appender name="TextDBSrvRequestLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="com.raytheon.uf.common.logback.policy.StdTimeBasedRollingPolicy">
@@ -41,6 +53,11 @@
<include file="${edex.home}/conf/logback-edex-loggers.xml" />
<logger name="ProductSrvRequestLogger" additivity="false">
<level value="DEBUG"/>
<appender-ref ref="ProductSrvRequestLogAsync"/>
</logger>
<logger name="TextDBSrvRequestLogger" additivity="false">
<level value="DEBUG"/>
<appender-ref ref="TextDBSrvRequestLogAsync"/>


@@ -1,341 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
Refer to edex/modes/README.txt for documentation
-->
<edexModes>
<mode name="ingest">
<exclude>.*request.*</exclude>
<exclude>edex-security.xml</exclude>
<exclude>ebxml.*\.xml</exclude>
<exclude>grib-decode.xml</exclude>
<exclude>grid-staticdata-process.xml</exclude>
<exclude>.*(dpa|taf|nctext).*</exclude>
<exclude>webservices.xml</exclude>
<exclude>ebxml.*\.xml</exclude>
<exclude>.*datadelivery.*</exclude>
<exclude>.*bandwidth.*</exclude>
<exclude>.*sbn-simulator.*</exclude>
<exclude>hydrodualpol-ingest.xml</exclude>
<exclude>grid-metadata.xml</exclude>
<exclude>.*ogc.*</exclude>
<exclude>obs-ingest-metarshef.xml</exclude>
<exclude>ffmp-ingest.xml</exclude>
<exclude>scan-ingest.xml</exclude>
<exclude>cwat-ingest.xml</exclude>
<exclude>fog-ingest.xml</exclude>
<exclude>vil-ingest.xml</exclude>
<exclude>preciprate-ingest.xml</exclude>
<exclude>qpf-ingest.xml</exclude>
<exclude>fssobs-ingest.xml</exclude>
<exclude>cpgsrv-spring.xml</exclude>
<exclude>ohd-common-database.xml</exclude>
<exclude>satpre-spring.xml</exclude>
<exclude>ncgrib-file-endpoint.xml</exclude>
<exclude>text-subscription.*</exclude>
</mode>
<mode name="ingestGrib">
<include>time-common.xml</include>
<include>auth-common.xml</include>
<include>python-common.xml</include>
<include>grib-decode.xml</include>
<include>grid-staticdata-process.xml</include>
<include>level-common.xml</include>
<include>levelhandler-common.xml</include>
<include>grid-common.xml</include>
<include>gridcoverage-common.xml</include>
<include>parameter-common.xml</include>
<include>persist-ingest.xml</include>
<include>management-common.xml</include>
<include>database-common.xml</include>
<include>event-ingest.xml</include>
<includeMode>statsTemplate</includeMode>
</mode>
<mode name="request">
<include>.*request.*</include>
<include>.*common.*</include>
<exclude>grid-metadata.xml</exclude>
<exclude>event-datadelivery-common.xml</exclude>
<exclude>.*ogc.*</exclude>
<exclude>.*dpa.*</exclude>
<exclude>ohd-common-database.xml</exclude>
<exclude>satpre-spring.xml</exclude>
</mode>
<mode name="ingestRadar">
<includeMode>ingest</includeMode>
<includeMode>pluginExclude</includeMode>
<includeMode>goesrExclude</includeMode>
<exclude>.*(airmet|atcf|aww|convsigmet|gfe|grid|hydro|intlsigmet|modis|ncpafm|ncuair|profiler|netcdf-grid).*</exclude>
<exclude>.*(nonconvsigmet|satellite|sgwh|ssha|stats|stormtrack|textlightning_ep|useradmin|wcp).*</exclude>
<exclude>purge-spring.*</exclude>
</mode>
<mode name="ingestGoesR">
<includeMode>ingest</includeMode>
<includeMode>pluginExclude</includeMode>
<includeMode>radarExclude</includeMode>
<exclude>purge-spring.*</exclude>
</mode>
<mode name="ingestGrids">
<includeMode>ingest</includeMode>
<includeMode>pluginModelSoundingExclude</includeMode>
<includeMode>radarExclude</includeMode>
<includeMode>goesrExclude</includeMode>
<exclude>purge-spring.*</exclude>
</mode>
<mode name="pluginExclude">
<exclude>^(acars|activetable|bufr|ccfp|climate|convectprob|cwa|geodb|goessounding|lma|lsr|modelsounding|nucaps|obs|poes|redbook|sfcobs|svrwx|tc|vaa|viirs|warning).*</exclude>
</mode>
<mode name="pluginModelSoundingExclude">
<exclude>^(acars|activetable|bufr|ccfp|climate|convectprob|cwa|geodb|goessounding|lma|lsr|nucaps|obs|poes|redbook|sfcobs|svrwx|tc|vaa|viirs|warning).*</exclude>
</mode>
<mode name="goesrExclude">
<exclude>^(binlightning|dmw|goesr|glm).*</exclude>
</mode>
<mode name="radarExclude">
<exclude>^radar.*</exclude>
</mode>
<mode name="statsTemplate" template="true">
<include>event-common.xml</include>
<include>eventbus-common.xml</include>
<include>stats-common.xml</include>
</mode>
<!-- HYDRO SERVER -->
<mode name="ingestHydro">
<include>distribution-spring.xml</include>
<include>manualIngest-common.xml</include>
<include>manualIngest-spring.xml</include>
<include>shef-ingest.xml</include>
<include>persist-ingest.xml</include>
<include>obs-common.xml</include>
<include>obs-ingest.xml</include>
<include>obs-ingest-metarshef.xml</include>
<include>metartohmdb-plugin.xml</include>
<include>metartoclimate-plugin.xml</include>
<include>pointdata-common.xml</include>
<include>shef-common.xml</include>
<include>ohd-common-database.xml</include>
<include>ohd-common.xml</include>
<include>alarmWhfs-spring.xml</include>
<include>arealffgGenerator-spring.xml</include>
<include>arealQpeGen-spring.xml</include>
<include>DPADecoder-spring.xml</include>
<include>dqcPreprocessor-spring.xml</include>
<include>floodArchiver-spring.xml</include>
<include>freezingLevel-spring.xml</include>
<include>hpeDHRDecoder-spring.xml</include>
<include>ihfsDbPurge-spring.xml</include>
<include>logFilePurger-spring.xml</include>
<include>mpeFieldgen-spring.xml</include>
<include>mpeHpeFilePurge-spring.xml</include>
<include>mpeLightningSrv-ingest.xml</include>
<include>mpeProcessGrib-spring.xml</include>
<include>ohdSetupService-spring.xml</include>
<include>pointDataRetrievel-spring.xml</include>
<include>q2FileProcessor-spring.xml</include>
<include>satpre-spring.xml</include>
<include>purge-logs.xml</include>
<exclude>fssobs-ingest.xml</exclude>
<exclude>fssobs-common.xml</exclude>
<include>ndm-ingest.xml</include>
</mode>
<mode name="requestHydro">
<include>ohd-common-database.xml</include>
<include>ohd-common.xml</include>
<include>database-common.xml</include>
<include>ohd-request.xml</include>
<include>alertviz-request.xml</include>
<include>auth-common.xml</include>
<include>auth-request.xml</include>
<include>persist-request.xml</include>
<include>menus-request.xml</include>
<include>utility-request.xml</include>
<include>management-common.xml</include>
<include>management-request.xml</include>
<include>manualIngest-common.xml</include>
<include>manualIngest-request.xml</include>
<include>auth-request.xml</include>
<include>persist-request.xml</include>
<include>site-common.xml</include>
<include>site-request.xml</include>
<include>time-common.xml</include>
<include>units-common.xml</include>
<include>event-common.xml</include>
<include>eventbus-common.xml</include>
<include>edex-request.xml</include>
<include>request-service.xml</include>
<include>request-service-common.xml</include>
</mode>
    <!-- DECISION ASSISTANCE TOOLS -->
<mode name="ingestDat">
<include>utility-common.xml</include>
<include>geo-common.xml</include>
<include>time-common.xml</include>
<include>ffmp-ingest.xml</include>
<include>ffmp-common.xml</include>
<include>scan-ingest.xml</include>
<include>scan-common.xml</include>
<include>cwat-ingest.xml</include>
<include>cwat-common.xml</include>
<include>fog-ingest.xml</include>
<include>fog-common.xml</include>
<include>vil-ingest.xml</include>
<include>vil-common.xml</include>
<include>preciprate-ingest.xml</include>
<include>preciprate-common.xml</include>
<include>qpf-ingest.xml</include>
<include>qpf-common.xml</include>
<include>hydrodualpol-ingest.xml</include>
<include>cpgsrv-spring.xml</include>
<include>persist-ingest.xml</include>
<include>binlightning-common.xml</include>
<include>parameter-common.xml</include>
<include>gridcoverage-common.xml</include>
<include>grid-common.xml</include>
<include>database-common.xml</include>
<include>radar-common.xml</include>
<include>text-common.xml</include>
<include>level-common.xml</include>
<include>levelhandler-common.xml</include>
<include>pointdata-common.xml</include>
<include>bufrua-common.xml</include>
<include>shef-common.xml</include>
<include>satellite-common.xml</include>
<include>satellite-dataplugin-common.xml</include>
<include>ohd-common-database.xml</include>
<include>ohd-common.xml</include>
<include>management-common.xml</include>
<include>obs-common.xml</include>
<include>fssobs-ingest.xml</include>
<include>fssobs-common.xml</include>
<include>manualIngest-common.xml</include>
<include>dataaccess-common.xml</include>
<exclude>nctext-common.xml</exclude>
<includeMode>statsTemplate</includeMode>
</mode>
<!-- EBXML REGISTRY / DATA DELIVERY -->
<mode name="ebxmlRegistry" template="true">
<includeMode>statsTemplate</includeMode>
<include>database-common.xml</include>
<include>dataaccess-common.xml</include>
<include>time-common.xml</include>
<include>auth-common.xml</include>
<include>auth-request.xml</include>
<include>management-common.xml</include>
<include>event-common.xml</include>
<include>purge-logs.xml</include>
<include>ebxml.*\.xml</include>
<include>eventbus-common.xml</include>
<include>edex-security.xml</include>
<include>geo-common.xml</include>
<include>utility-request.xml</include>
<include>utility-common.xml</include>
<include>request-service</include>
</mode>
<mode name="registry">
<includeMode>ebxmlRegistry</includeMode>
<includeMode>dataDeliveryTemplate</includeMode>
<include>datadelivery-wfo-cron.xml</include>
<include>bandwidth-datadelivery-.*-wfo.xml</include>
<exclude>.*datadelivery.*-ncf.*</exclude>
<exclude>harvester-.*</exclude>
<exclude>crawler-.*</exclude>
</mode>
<mode name="centralRegistry">
<includeMode>ebxmlRegistry</includeMode>
<includeMode>dataDeliveryTemplate</includeMode>
<include>stats-ingest.xml</include>
<include>bandwidth-datadelivery-.*-ncf.xml</include>
<exclude>.*datadelivery.*-wfo.*</exclude>
</mode>
<mode name="dataDeliveryTemplate" template="true">
<include>.*datadelivery.*</include>
<include>.*bandwidth.*</include>
<exclude>.*bandwidth.*-inmemory.*.xml</exclude>
<exclude>dpa-datadelivery.xml</exclude>
<include>satellite-common.xml</include>
<include>satellite-dataplugin-common.xml</include>
<include>goessounding-common.xml</include>
<include>grid-common.xml</include>
<include>grid-metadata.xml</include>
<include>gridcoverage-common.xml</include>
<include>parameter-common.xml</include>
<include>level-common.xml</include>
<include>levelhandler-common.xml</include>
<include>pointdata-common.xml</include>
<include>obs-common.xml</include>
<include>madis-common.xml</include>
<include>persist-ingest.xml</include>
</mode>
<mode name="dataProviderAgentTemplate" template="true">
<include>manualIngest*</include>
<include>time-common.xml</include>
<include>distribution-spring.xml</include>
<include>persist-ingest.xml</include>
<include>auth-common.xml</include>
<include>database-common.xml</include>
<!-- Remote connect to registry services -->
<include>datadelivery-handlers.xml</include>
<include>datadelivery-handlers-impl.xml</include>
<include>request-router.xml</include>
<include>^utility-request.xml</include>
<include>dpa-datadelivery.xml</include>
<include>geo-common.xml</include>
<include>request-service.*</include>
<include>utility-common.xml</include>
<include>localization-http-request.xml</include>
<!-- Don't want this for DPA, we don't need a local registry -->
<exclude>harvester-datadelivery-standalone.xml</exclude>
<exclude>datadelivery-standalone.xml</exclude>
<!-- OGC/DPA services -->
<include>ogc-common.xml</include>
<include>wfs-ogc-request.xml</include>
<include>wfs-ogc-rest-request.xml</include>
<include>wfs-ogc-soap-request.xml</include>
<include>wfs-ogc-soap-wsdl.xml</include>
        <!-- Purge OGC/DPA registered plugins -->
<include>purge-spring.xml</include>
<include>purge-spring-impl.xml</include>
<include>purge-logs.xml</include>
</mode>
    <!-- MADIS implementation of dataprovideragent -->
<mode name="dataprovideragent">
<includeMode>dataProviderAgentTemplate</includeMode>
<include>pointdata-common.xml</include>
<include>madis-common.xml</include>
<include>madis-ogc.xml</include>
<include>madis-ogc-registry.xml</include>
</mode>
<!-- Utilized by BandwidthUtil for creating an in memory bandwidth manager -->
<mode name="inMemoryBandwidthManager">
<!-- This is not an edex runtime mode and is used in memory -->
<include>bandwidth-datadelivery-inmemory-impl.xml</include>
<include>bandwidth-datadelivery.xml</include>
<include>bandwidth-datadelivery-wfo.xml</include>
</mode>
</edexModes>
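The mode definitions in the deleted `edexModes` file above select Spring XML files with regex `<include>`/`<exclude>` patterns. A hypothetical sketch of how such patterns could be applied to a file list (assumed logic for illustration only, not the actual EDEX mode loader):

```python
import re

def resolve_mode(files, includes, excludes):
    """Return the files a mode would load: matched by some include
    pattern and by no exclude pattern (illustrative logic only)."""
    inc = [re.compile(p) for p in includes]
    exc = [re.compile(p) for p in excludes]
    selected = []
    for f in files:
        if any(p.search(f) for p in inc) and not any(p.search(f) for p in exc):
            selected.append(f)
    return selected

# Mirrors the "request" mode above: include .*request.* and .*common.*,
# exclude grid-metadata.xml.
files = ["grid-request.xml", "grid-common.xml", "grid-metadata.xml"]
selected = resolve_mode(files, [".*request.*", ".*common.*"],
                        ["grid-metadata.xml"])
# → ['grid-request.xml', 'grid-common.xml']
```

Template modes (`template="true"`) such as `statsTemplate` would then be pulled in via `<includeMode>` before this filtering is applied.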


@@ -26,7 +26,7 @@ do
fi
done
JAVA_BIN=/awips2/java/bin/java
JAVA_BIN=/awips2/java/jre/bin/java
securityDir=/awips2/edex/conf/security
securityPropertiesDir=/awips2/edex/conf/resources/site/$AW_SITE_IDENTIFIER
@@ -279,7 +279,7 @@ echo "Generating keystore..."
echo "Checking to see if a key with this alias exists in keystore.....[$keyAlias]!"
keytool -delete -alias $keyAlias -keystore $securityDir/$keystore
# create and add key
keytool -genkeypair -alias $keyAlias -keypass $keyPw -keystore $keystore -storepass $keystorePw -storetype JKS -validity 360 -dname "CN=$cn, OU=$orgUnit, O=$org, L=$loc, ST=$state, C=$country" -keyalg RSA -ext san=$ext
keytool -genkeypair -alias $keyAlias -keypass $keyPw -keystore $keystore -storepass $keystorePw -validity 360 -dname "CN=$cn, OU=$orgUnit, O=$org, L=$loc, ST=$state, C=$country" -keyalg RSA -ext san=$ext
echo -n "Exporting public key..."
exportOutput=`keytool -exportcert -alias $keyAlias -keystore $keystore -file $keyAlias$publicKeyFile -storepass $keystorePw 2>&1`
echo "Done!"
@@ -288,7 +288,7 @@ obfuscatedKeystorePassword=`$JAVA_BIN -cp $LOCAL_CLASSPATH com.raytheon.uf.commo
echo "Generating trust store..."
echo "Checking to see if a trusted CA with this alias exists in truststore.....[$keyAlias]!"
keytool -delete -alias $keyAlias -keystore $securityDir/$truststore
keytool -genkey -alias tmp -keypass tempPass -dname CN=foo -keystore $truststore -storepass $truststorePw -storetype JKS
keytool -genkey -alias tmp -keypass tempPass -dname CN=foo -keystore $truststore -storepass $truststorePw
keytool -delete -alias tmp -keystore $truststore -storepass $truststorePw
keytool -import -trustcacerts -file $keyAlias$publicKeyFile -alias $keyAlias -keystore $truststore -storepass $truststorePw


@@ -24,7 +24,7 @@ do
fi
done
JAVA_BIN=/awips2/java/bin/java
JAVA_BIN=/awips2/java/jre/bin/java
securityDir=/awips2/edex/conf/security
securityPropertiesDir=/awips2/edex/conf/resources/site/$AW_SITE_IDENTIFIER
@@ -179,7 +179,7 @@ function generateKeystores() {
echo "Checking to see if a key with this alias exists in keystore.....[$keyAlias]!"
keytool -delete -alias $keyAlias -storepass $keyPw -keystore $securityDir/$keystore
# create and add key
keytool -genkey -alias tmp -keypass $keyPw -dname CN=foo -keystore $keystore -storepass $keystorePw -storetype JKS
keytool -genkey -alias tmp -keypass $keyPw -dname CN=foo -keystore $keystore -storepass $keystorePw
keytool -delete -alias tmp -keystore $securityDir/$keystore -storepass $keyPw
# convert private DoD key file in PEM format to DER
openssl pkcs8 -topk8 -nocrypt -in $dodkey -inform PEM -out /tmp/dodkey.der -outform DER
@@ -195,7 +195,7 @@ function generateKeystores() {
echo "Generating trust store..."
echo "Checking to see if a trusted CA with this alias exists in truststore.....[$keyAlias]!"
keytool -delete -alias $keyAlias -storepass $truststorePw -keystore $securityDir/$truststore
keytool -genkey -alias tmp -keypass tempPass -dname CN=foo -keystore $truststore -storepass $truststorePw -storetype JKS
keytool -genkey -alias tmp -keypass tempPass -dname CN=foo -keystore $truststore -storepass $truststorePw
keytool -delete -alias tmp -keystore $truststore -storepass $truststorePw
keytool -importcert -trustcacerts -file ${dodcert} -alias $keyAlias -keystore $truststore -storepass $truststorePw


@@ -1,255 +0,0 @@
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://camel.apache.org/schema/spring
http://camel.apache.org/schema/spring/camel-spring.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean id="pypiesStoreProps" class="com.raytheon.uf.common.pypies.PypiesProperties" lazy-init="true">
<property name="address" value="${PYPIES_SERVER}" />
</bean>
<bean id="pypiesDataStoreFactory" class="com.raytheon.uf.common.pypies.PyPiesDataStoreFactory"
depends-on="httpClient" lazy-init="true">
<constructor-arg ref="pypiesStoreProps" />
</bean>
<bean id="sslConfig" class="com.raytheon.uf.common.datastore.ignite.IgniteSslConfiguration">
<constructor-arg value="guest"/>
</bean>
<bean id="igniteKeyStorePath" factory-bean="sslConfig" factory-method="getJavaKeyStorePath" />
<bean id="igniteTrustStorePath" factory-bean="sslConfig" factory-method="getJavaTrustStorePath" />
<bean id="igniteKeyStorePassword" class="com.raytheon.uf.common.datastore.ignite.IgnitePasswordUtils"
factory-method="getIgniteKeyStorePassword" />
<bean id="igniteTrustStorePassword" class="com.raytheon.uf.common.datastore.ignite.IgnitePasswordUtils"
factory-method="getIgniteTrustStorePassword" />
<bean id="igniteCommSpiTemplate" class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi"
abstract="true" lazy-init="true">
<property name="messageQueueLimit" value="1024"/>
        <!-- This causes clients to keep the last x messages up to this
             threshold per connection in heap memory, so they can be
             resent if a connection fails. Limiting this will cause more
             acknowledgements to be sent but also reduce client heap
             footprint. Default value is 32. -->
<property name="ackSendThreshold" value="2"/>
<property name="socketWriteTimeout" value="30000"/>
<property name="usePairedConnections" value="true"/>
<property name="connectionsPerNode" value="4"/>
<property name="localPortRange" value="0"/>
</bean>
    <!-- Must have prototype scope so a fully new instance can be created when a node fails and needs restarting -->
<bean id="igniteConfig1" class="org.apache.ignite.configuration.IgniteConfiguration"
scope="prototype" lazy-init="true">
<property name="igniteInstanceName" value="cluster1" />
<property name="localHost" value="${LOCAL_ADDRESS}"/>
<property name="clientMode" value="true" />
<property name="metricsLogFrequency" value="0" />
<property name="workDirectory" value="${AWIPS2_TEMP}/edex/ignite_work"/>
<property name="failureHandler">
<bean class="com.raytheon.uf.common.datastore.ignite.IgniteClientFailureHandler" />
</property>
<property name="gridLogger">
<bean class="org.apache.ignite.logger.slf4j.Slf4jLogger" />
</property>
<property name="sslContextFactory">
<bean class="org.apache.ignite.ssl.SslContextFactory">
<property name="keyStoreFilePath" ref="igniteKeyStorePath"/>
<property name="keyStorePassword" ref="igniteKeyStorePassword" />
<property name="trustStoreFilePath" ref="igniteTrustStorePath"/>
<property name="trustStorePassword" ref="igniteTrustStorePassword"/>
<property name="protocol" value="TLSv1.3"/>
</bean>
</property>
<property name="communicationSpi">
<bean parent="igniteCommSpiTemplate">
<property name="localPort" value="${IGNITE_CLUSTER_1_COMM_PORT}"/>
</bean>
</property>
<property name="transactionConfiguration">
<bean class="org.apache.ignite.configuration.TransactionConfiguration">
<property name="txTimeoutOnPartitionMapExchange" value="${a2.ignite.txTimeoutOnPartitionMapExchange}"/>
<property name="defaultTxTimeout" value="${a2.ignite.defaultTxTimeout}"/>
</bean>
</property>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="localPort" value="${IGNITE_CLUSTER_1_DISCO_PORT}"/>
<property name="localPortRange" value="0"/>
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses" value="#{'${IGNITE_CLUSTER_1_SERVERS}'.split(',')}" />
</bean>
</property>
</bean>
</property>
</bean>
<bean id="igniteConfig2" class="org.apache.ignite.configuration.IgniteConfiguration" scope="prototype" lazy-init="true">
<constructor-arg ref="igniteConfig1" />
<property name="igniteInstanceName" value="cluster2" />
<property name="communicationSpi">
<bean parent="igniteCommSpiTemplate">
<property name="localPort" value="${IGNITE_CLUSTER_2_COMM_PORT}"/>
</bean>
</property>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="localPort" value="${IGNITE_CLUSTER_2_DISCO_PORT}"/>
<property name="localPortRange" value="0"/>
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses" value="#{'${IGNITE_CLUSTER_2_SERVERS}'.split(',')}" />
</bean>
</property>
</bean>
</property>
</bean>
<bean id="igniteClusterManager" class="com.raytheon.uf.common.datastore.ignite.IgniteClusterManager" lazy-init="true">
<constructor-arg>
<bean class="com.raytheon.uf.common.datastore.ignite.IgniteConfigSpringGenerator">
<constructor-arg ref="igniteConfig1" />
<constructor-arg value="igniteConfig1" />
</bean>
</constructor-arg>
<constructor-arg>
<bean class="com.raytheon.uf.common.datastore.ignite.IgniteConfigSpringGenerator">
<constructor-arg ref="igniteConfig2" />
<constructor-arg value="igniteConfig2" />
</bean>
</constructor-arg>
</bean>
<!-- If any cache configuration is changed, all ignite and edex nodes need
to be shutdown to clear knowledge of the previous configuration before
any changes will take effect. Nodes can only be started up again after
all nodes are shutdown. -->
<bean id="defaultCacheConfig" class="org.apache.ignite.configuration.CacheConfiguration" scope="prototype" lazy-init="true">
<property name="name" value="defaultDataStore" />
<property name="cacheMode" value="PARTITIONED" />
<property name="backups" value="${IGNITE_CACHE_BACKUPS:0}" />
        <!-- Rebalancing is unnecessary: missing entries will be read
             from the underlying datastore instead of being copied
             preemptively. Attempting to rebalance would load the entire
             cache into the heap and result in OOM.
        -->
<property name="rebalanceMode" value="NONE" />
<property name="readThrough" value="true" />
<property name="writeThrough" value="true" />
<property name="writeBehindEnabled" value="true" />
<property name="writeBehindFlushFrequency" value="5000" />
<property name="writeBehindFlushThreadCount" value="4" />
<property name="writeBehindBatchSize" value="20" />
<property name="writeBehindFlushSize" value="100" />
<property name="sqlIndexMaxInlineSize" value="350" />
<property name="cacheStoreFactory">
<bean
class="com.raytheon.uf.common.datastore.ignite.store.DataStoreCacheStoreFactory">
<constructor-arg>
<bean
class="com.raytheon.uf.common.datastore.ignite.pypies.SerializablePyPiesDataStoreFactory" lazy-init="true">
<constructor-arg name="address" value="${PYPIES_SERVER}" />
</bean>
</constructor-arg>
</bean>
</property>
<property name="indexedTypes">
<list>
<value>com.raytheon.uf.common.datastore.ignite.DataStoreKey</value>
<value>com.raytheon.uf.common.datastore.ignite.DataStoreValue</value>
</list>
</property>
</bean>
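The write-behind properties above interact: puts accumulate in a buffer until either writeBehindFlushSize entries are pending or writeBehindFlushFrequency elapses, and each flush hands entries to the underlying datastore in chunks of at most writeBehindBatchSize. A rough Python sketch of that policy (names and logic are illustrative only, not Ignite's actual implementation):

```python
import time

class WriteBehindBuffer:
    """Toy model of write-behind flushing; illustrative, not Ignite's code."""

    def __init__(self, flush_size=100, batch_size=20, flush_frequency_ms=5000):
        self.flush_size = flush_size              # pending entries that force a flush
        self.batch_size = batch_size              # max entries per datastore write
        self.flush_frequency_ms = flush_frequency_ms
        self.buffer = {}
        self.last_flush = time.monotonic()
        self.flushed_batches = []                 # what the backing store received

    def put(self, key, value):
        self.buffer[key] = value
        elapsed_ms = (time.monotonic() - self.last_flush) * 1000
        if len(self.buffer) >= self.flush_size or elapsed_ms >= self.flush_frequency_ms:
            self.flush()

    def flush(self):
        items = list(self.buffer.items())
        # hand the pending entries to the backing store in batch-sized chunks
        for i in range(0, len(items), self.batch_size):
            self.flushed_batches.append(items[i:i + self.batch_size])
        self.buffer.clear()
        self.last_flush = time.monotonic()

buf = WriteBehindBuffer(flush_size=5, batch_size=2, flush_frequency_ms=60000)
for n in range(5):
    buf.put(n, str(n))        # the 5th put reaches flush_size and triggers a flush
```

Shrinking writeBehindBatchSize, as the grid and satellite cache overrides below do, trades fewer entries per store call for more calls, which the extra flush threads absorb.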
<bean id="gridCacheConfig" class="org.apache.ignite.configuration.CacheConfiguration" lazy-init="true">
<constructor-arg ref="defaultCacheConfig" />
<property name="name" value="gridDataStore" />
<property name="writeBehindFlushFrequency" value="1000" />
<property name="writeBehindFlushThreadCount" value="12" />
<property name="writeBehindBatchSize" value="5" />
<property name="writeBehindFlushSize" value="60" />
</bean>
<bean id="satelliteCacheConfig" class="org.apache.ignite.configuration.CacheConfiguration" lazy-init="true">
<constructor-arg ref="defaultCacheConfig" />
<property name="name" value="satelliteDataStore" />
<property name="writeBehindFlushFrequency" value="5000" />
<property name="writeBehindFlushThreadCount" value="4" />
<property name="writeBehindBatchSize" value="5" />
<property name="writeBehindFlushSize" value="20" />
</bean>
<bean id="radarCacheConfig" class="org.apache.ignite.configuration.CacheConfiguration" lazy-init="true">
<constructor-arg ref="defaultCacheConfig" />
<property name="name" value="radarDataStore" />
<property name="writeBehindFlushFrequency" value="5000" />
<property name="writeBehindFlushThreadCount" value="4" />
<property name="writeBehindBatchSize" value="10" />
<property name="writeBehindFlushSize" value="40" />
</bean>
<bean id="pointCacheConfig" class="org.apache.ignite.configuration.CacheConfiguration" lazy-init="true">
<constructor-arg ref="defaultCacheConfig" />
<property name="name" value="pointDataStore" />
<!-- Do NOT enable write behind for point data. It must currently be
disabled, or else the postgres metadata and hdf5 data can get out
of sync and cause significant issues. -->
<property name="writeBehindEnabled" value="false" />
</bean>
<bean id="defaultCacheRegistered" factory-bean="igniteClusterManager" factory-method="addCache" lazy-init="true">
<constructor-arg ref="defaultCacheConfig" />
<constructor-arg value="1" />
</bean>
<bean id="gridCacheRegistered" factory-bean="igniteClusterManager" factory-method="addCache" lazy-init="true">
<constructor-arg ref="gridCacheConfig" />
<constructor-arg value="2" />
</bean>
<bean id="satelliteCacheRegistered" factory-bean="igniteClusterManager" factory-method="addCache" lazy-init="true">
<constructor-arg ref="satelliteCacheConfig" />
<constructor-arg value="1" />
</bean>
<bean id="radarCacheRegistered" factory-bean="igniteClusterManager" factory-method="addCache" lazy-init="true">
<constructor-arg ref="radarCacheConfig" />
<constructor-arg value="1" />
</bean>
<bean id="pointCacheRegistered" factory-bean="igniteClusterManager" factory-method="addCache" lazy-init="true">
<constructor-arg ref="pointCacheConfig" />
<constructor-arg value="1" />
</bean>
<bean id="pluginMapCacheRegistered" factory-bean="igniteClusterManager" factory-method="setPluginMapCacheCluster" lazy-init="true">
<!-- This needs to match the cluster that the cache config is set on in awips2-config.xml -->
<constructor-arg value="1" />
</bean>
<bean id="ignitePluginRegistry"
class="com.raytheon.uf.common.datastore.ignite.plugin.CachePluginRegistry" lazy-init="true" />
<!-- The full topo dataset is too large to cache the entire
record efficiently, so do not cache topo. -->
<bean factory-bean="ignitePluginRegistry" factory-method="registerPluginCacheName">
<constructor-arg value="topo" />
<constructor-arg value="none" />
</bean>
<bean id="igniteDataStoreFactory" class="com.raytheon.uf.common.datastore.ignite.IgniteDataStoreFactory" lazy-init="true"
depends-on="defaultCacheRegistered,gridCacheRegistered,satelliteCacheRegistered,radarCacheRegistered,pointCacheRegistered,pluginMapCacheRegistered">
<constructor-arg ref="igniteClusterManager" />
<constructor-arg ref="ignitePluginRegistry" />
</bean>
<bean id="dataStoreFactory" class="com.raytheon.uf.common.datastorage.DataStoreFactory"
factory-method="getInstance">
<property name="underlyingFactory" ref="${DATASTORE_PROVIDER}DataStoreFactory" />
</bean>
<bean id="dataStorageAuditerContainer" class="com.raytheon.uf.common.datastorage.audit.DataStorageAuditerContainer" factory-method="getInstance">
<property name="auditer">
<bean class="com.raytheon.uf.edex.database.health.EdexDataStorageAuditerProxy">
<constructor-arg ref="messageProducer"/>
</bean>
</property>
</bean>
</beans>
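The addCache registrations above pin each cache to one of the two configured ignite clusters (the grid cache to cluster 2, the others to cluster 1), and the data store factory then routes requests for a cache to whichever cluster owns it. A toy model of that bookkeeping (hypothetical names and addresses, not the real IgniteClusterManager API):

```python
class ClusterManager:
    """Toy stand-in for cache-to-cluster registration and routing."""

    def __init__(self, clusters):
        self.clusters = clusters            # cluster number -> discovery addresses
        self.cache_to_cluster = {}

    def add_cache(self, cache_name, cluster_number):
        if cluster_number not in self.clusters:
            raise ValueError(f"unknown cluster {cluster_number}")
        self.cache_to_cluster[cache_name] = cluster_number
        return cache_name

    def servers_for(self, cache_name):
        """Route a cache lookup to the servers of the cluster that owns it."""
        return self.clusters[self.cache_to_cluster[cache_name]]

# hypothetical addresses standing in for IGNITE_CLUSTER_{1,2}_SERVERS
mgr = ClusterManager({1: ["ignite1:47500"], 2: ["ignite2:47500"]})
for name, cluster in [("defaultDataStore", 1), ("gridDataStore", 2),
                      ("satelliteDataStore", 1), ("radarDataStore", 1),
                      ("pointDataStore", 1)]:
    mgr.add_cache(name, cluster)
```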


@@ -29,7 +29,6 @@
<value>com.raytheon.edex.plugin.shef</value>
<value>com.raytheon.uf.common.bmh</value>
<value>com.raytheon.uf.common.plugin.hpe.data</value>
<value>com.raytheon.uf.common.dataplugin.geographic</value>
</list>
</property>
</bean>
@@ -45,7 +44,7 @@
</bean>
<bean id="metadataTxManager"
class="org.springframework.orm.hibernate5.HibernateTransactionManager">
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="metadataSessionFactory" />
</bean>
@@ -62,7 +61,7 @@
</bean>
<bean id="admin_metadataTxManager"
class="org.springframework.orm.hibernate5.HibernateTransactionManager">
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="admin_metadataSessionFactory" />
</bean>
@@ -70,17 +69,6 @@
<property name="transactionManager" ref="admin_metadataTxManager"/>
</bean>
<bean id="mapsDbSessionConfig"
class="com.raytheon.uf.edex.database.DatabaseSessionConfiguration">
<property name="classFinder" ref="dbClassFinder" />
<property name="includes">
<list>
<value>com.raytheon.uf.common.dataplugin.geographic</value>
<value>com.raytheon.uf.edex.database</value>
</list>
</property>
</bean>
<bean id="mapsSessionFactory"
class="com.raytheon.uf.edex.database.DatabaseSessionFactoryBean">
<!-- no annotations to load, so no databaseSessionConfig -->
@@ -88,11 +76,10 @@
<value>file:///${edex.home}/conf/db/hibernateConfig/maps/hibernate.cfg.xml
</value>
</property>
<property name="databaseSessionConfiguration" ref="mapsDbSessionConfig" />
</bean>
<bean id="mapsTxManager"
class="org.springframework.orm.hibernate5.HibernateTransactionManager">
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="mapsSessionFactory" />
</bean>
@@ -103,11 +90,10 @@
<value>file:///${edex.home}/conf/db/hibernateConfig/maps/hibernate.admin.cfg.xml
</value>
</property>
<property name="databaseSessionConfiguration" ref="mapsDbSessionConfig" />
</bean>
<bean id="admin_mapsTxManager"
class="org.springframework.orm.hibernate5.HibernateTransactionManager">
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="admin_mapsSessionFactory" />
</bean>
</beans>


@@ -8,34 +8,50 @@
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean id="jmsClientId" class="com.raytheon.uf.common.util.SystemUtil" factory-method="getClientID">
<constructor-arg value="${edex.run.mode}" />
</bean>
<!-- Separated out database specific beans to separate file so they can be loaded by themselves if necessary -->
<import resource="file:///${edex.home}/conf/spring/edex-db.xml"/>
<bean id="jmsConnectionInfo"
class="com.raytheon.uf.common.jms.JMSConnectionInfo">
<constructor-arg value="${BROKER_HOST}"/>
<constructor-arg value="${BROKER_PORT}"/>
<constructor-arg value="${JMS_VIRTUALHOST}"/>
<constructor-arg value="${BROKER_HTTP}"/>
<constructor-arg>
<!-- MaxPrefetch set at 0, due to DataPool routers getting messages backed up behind long running tasks -->
<bean id="amqConnectionURL" class="com.raytheon.uf.common.jms.AMQConnectionURLBean">
<constructor-arg value="amqp://guest:guest@/${JMS_VIRTUALHOST}" />
<property name="brokerDetails">
<bean class="org.apache.qpid.client.AMQBrokerDetails">
<constructor-arg value="${JMS_SERVER}" />
<property name="properties">
<map>
<entry key="jms.prefetchPolicy.all" value="0"/>
<entry key="provider.futureType" value="balanced"/>
<entry key="jms.clientID" value-ref="jmsClientId"/>
<entry key="retries" value="9999" />
<entry key="heartbeat" value="0" />
<entry key="connecttimeout" value="5000" />
<entry key="connectdelay" value="5000" />
</map>
</constructor-arg>
</property>
</bean>
</property>
<property name="options">
<map>
<entry key="maxprefetch" value="0" />
<entry key="sync_publish" value="all" />
<entry key="sync_ack" value="true" />
<entry key="ssl" value="${JMS_SSL_ENABLED}" />
</map>
</property>
</bean>
<bean id="qpidUfConnectionFactory" class="com.raytheon.uf.common.jms.qpid.QpidUFConnectionFactory">
<constructor-arg ref="jmsConnectionInfo"/>
<bean id="secureAmqConnectionURL" class="com.raytheon.uf.common.jms.JmsSslConfiguration" factory-method="configureURL">
<constructor-arg ref="amqConnectionURL" />
</bean>
<bean id="jmsConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<constructor-arg ref="qpidUfConnectionFactory"/>
<!-- The value of 50 is arbitrary. Can be tweaked later based on
observed frequency of session creation -->
<property name="sessionCacheSize" value="50"/>
<!-- specify the connection to the broker (qpid) -->
<bean id="amqConnectionFactory" class="org.apache.qpid.client.AMQConnectionFactory">
<constructor-arg ref="secureAmqConnectionURL"/>
</bean>
<bean id="jmsPooledConnectionFactory" class="com.raytheon.uf.common.jms.JmsPooledConnectionFactory">
<constructor-arg ref="amqConnectionFactory"/>
<property name="provider" value="QPID"/>
<property name="reconnectInterval" value="5000"/>
<!-- After resource has been closed by thread keep it allocated for another 2 minutes in case thread needs it again -->
<property name="resourceRetention" value="120000"/>
</bean>
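Per the comment above, a resource released by a thread stays allocated for resourceRetention (120000 ms) in case the thread needs it again, and the periodic jmsPooledResourceCheck route later in this file evicts anything idle past that window. A toy sketch of that retention rule (hypothetical names, not the JmsPooledConnectionFactory API):

```python
import time

class RetainingPool:
    """Toy pool that keeps released resources for a retention window."""

    def __init__(self, retention_ms=120000, clock=time.monotonic):
        self.retention_ms = retention_ms
        self.clock = clock                  # injectable for testing
        self.idle = []                      # (resource, released_at_seconds)
        self.created = 0

    def acquire(self):
        if self.idle:
            resource, _ = self.idle.pop()
            return resource                 # reuse a retained resource
        self.created += 1
        return f"conn-{self.created}"

    def release(self, resource):
        self.idle.append((resource, self.clock()))

    def check_pooled_resources(self):
        # periodic sweep: drop resources idle longer than the retention window
        now = self.clock()
        self.idle = [(r, t) for r, t in self.idle
                     if (now - t) * 1000 < self.retention_ms]

fake_now = [0.0]
pool = RetainingPool(retention_ms=120000, clock=lambda: fake_now[0])
conn = pool.acquire()
pool.release(conn)
fake_now[0] = 60.0                          # 60 s later: still retained
pool.check_pooled_resources()
reused = pool.acquire()                     # same connection comes back
pool.release(reused)
fake_now[0] = 200.0                         # 140 s after release: evicted
pool.check_pooled_resources()
fresh = pool.acquire()                      # a new connection is created
```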
<bean id="genericThreadPool"
@@ -56,16 +72,26 @@
</bean>
<bean id="jmsGenericConfig" class="org.apache.camel.component.jms.JmsConfiguration"
factory-bean="jmsConfig" factory-method="copy">
</bean>
factory-bean="jmsConfig" factory-method="copy"/>
<bean id="jmsDurableConfig" class="org.apache.camel.component.jms.JmsConfiguration"
factory-bean="jmsConfig" factory-method="copy">
<property name="destinationResolver" ref="qpidDurableResolver" />
<property name="deliveryPersistent" value="true"/>
</bean>
<bean id="qpidNoDurableResolver" class="com.raytheon.uf.edex.esb.camel.spring.QpidDestinationNameResolver">
<property name="queueNamePrefix" value="direct://amq.direct/"/>
<property name="queueNamePostfix" value="?durable='false'"/>
</bean>
<bean id="qpidDurableResolver" class="com.raytheon.uf.edex.esb.camel.spring.QpidDestinationNameResolver">
<property name="queueNamePrefix" value="direct://amq.direct/"/>
<property name="queueNamePostfix" value="?durable='true'"/>
</bean>
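Each resolver above only wraps a queue name in a Qpid BURL, with the durable flag baked into the postfix. A sketch of that string assembly (the queue name below is a made-up example, and this is not the actual QpidDestinationNameResolver code):

```python
def resolve_queue_name(queue, durable):
    """Assemble a BURL the way the prefix/postfix resolver beans do."""
    prefix = "direct://amq.direct/"
    postfix = "?durable='true'" if durable else "?durable='false'"
    return f"{prefix}{queue}{postfix}"

url = resolve_queue_name("Ingest.Grid", durable=False)
```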
<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="cacheLevelName" value="CACHE_CONSUMER"/>
<property name="cacheLevelName" value="CACHE_NONE"/>
<property name="recoveryInterval" value="10000"/>
<property name="requestTimeout" value="5000"/>
<!-- If this is false, while stopping we will reject messages that have already been pulled from qpid, essentially losing the message -->
@@ -80,8 +106,9 @@
<!-- force maxMessagesPerTask so that the threads don't keep disconnecting and reconnecting.
This will keep a data-type attached to the initial thread it starts on -->
<property name="maxMessagesPerTask" value="-1"/>
<property name="listenerConnectionFactory" ref="jmsConnectionFactory" />
<property name="templateConnectionFactory" ref="jmsConnectionFactory" />
<property name="listenerConnectionFactory" ref="jmsPooledConnectionFactory" />
<property name="templateConnectionFactory" ref="jmsPooledConnectionFactory" />
<property name="destinationResolver" ref="qpidNoDurableResolver" />
<property name="disableReplyTo" value="true" />
<property name="deliveryPersistent" value="false"/>
@@ -122,6 +149,18 @@
<constructor-arg ref="httpClientConfig"/>
</bean>
<bean id="pypiesStoreProps" class="com.raytheon.uf.common.pypies.PypiesProperties">
<property name="address" value="${PYPIES_SERVER}" />
</bean>
<bean id="pypiesDataStoreFactory" class="com.raytheon.uf.common.pypies.PyPiesDataStoreFactory" depends-on="httpClient">
<constructor-arg ref="pypiesStoreProps" />
</bean>
<bean id="dataStoreFactory" class="com.raytheon.uf.common.datastorage.DataStoreFactory" factory-method="getInstance">
<!-- Get instance of DataStoreFactory and set underlying factory to use -->
<property name="underlyingFactory" ref="pypiesDataStoreFactory"/>
</bean>
<bean id="initialcorePropertyConfigurer"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="systemPropertiesModeName">
@@ -162,12 +201,6 @@
<property name="forceCheck" value="true" />
</bean>
<bean id="mapsDbPluginProperties" class="com.raytheon.uf.edex.database.DatabasePluginProperties">
<property name="pluginFQN" value="com.raytheon.uf.edex.database" />
<property name="database" value="maps" />
<property name="forceCheck" value="true" />
</bean>
<!--
The DatabasePluginRegistry is used to create tables in
a database beyond the auto-detected table for each dataplugin.
@@ -201,11 +234,6 @@
</property>
</bean>
<bean id="mapsDbRegistered" factory-bean="dbPluginRegistry" factory-method="register">
<constructor-arg value="com.raytheon.uf.edex.database.maps" />
<constructor-arg ref="mapsDbPluginProperties" />
</bean>
<!-- The pluginDefaults are the values that a data plugin will use for
some plugin properties if they are not specified in the individual
plugin's Spring XML configuration -->
@@ -236,22 +264,12 @@
<bean id="stringToFile" class="com.raytheon.uf.edex.esb.camel.StringToFile"/>
<bean id="dataUnzipper" class="com.raytheon.uf.common.util.DataUnzipper"/>
<bean id="errorHandlerRedeliveryPolicy" class="org.apache.camel.processor.errorhandler.RedeliveryPolicy">
<!-- This policy matches that of the old LoggingErrorHandlerBuilder
class we used to use. (That class is gone now that we have moved
to Camel 3.) -->
<property name="logRetryAttempted" value="false" />
</bean>
<bean id="errorHandler" class="org.apache.camel.builder.DeadLetterChannelBuilder">
<property name="deadLetterUri" value="log:edex?level=ERROR" />
<property name="redeliveryPolicy" ref="errorHandlerRedeliveryPolicy" />
</bean>
<bean id="errorHandler" class="org.apache.camel.builder.LoggingErrorHandlerBuilder"/>
<!-- sets default settings of log component across all of edex -->
<!-- if log component beans are created and the exchangeFormatter property is set, they can't process URI parameters -->
<!-- this bean needs to be named 'logFormatter' for the log component to find it in the context -->
<bean id="logFormatter" class="org.apache.camel.support.processor.DefaultExchangeFormatter" scope="prototype">
<bean id="logFormatter" class="org.apache.camel.processor.DefaultExchangeFormatter" scope="prototype">
<property name="maxChars" value="0" />
<property name="showBody" value="false" />
<property name="showCaughtException" value="true" />
@@ -308,6 +326,17 @@
<to uri="jms-generic:topic:edex.alarms.msg" />
</route>
<!-- Route to periodically close any unused jms resources that have been pooled -->
<route id="jmsPooledResourceChecker">
<from uri="timer://jmsPooledResourceCheck?period=60s" />
<doTry>
<bean ref="jmsPooledConnectionFactory" method="checkPooledResources"/>
<doCatch>
<exception>java.lang.Throwable</exception>
<to uri="log:jmsPooledResourceCheck?level=ERROR"/>
</doCatch>
</doTry>
</route>
</camelContext>
<camelContext
id="clusteredCamel"


@@ -23,6 +23,7 @@
wrapper.debug=false
wrapper.java.debug.port=${EDEX_DEBUG_PORT}
set.default.EDEX_HOME=../..
wrapper.working.dir=${EDEX_HOME}/bin
# required due to java bug:
# http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4388188
@@ -67,8 +68,6 @@ wrapper.java.library.path.3=${EDEX_HOME}/lib/native/linux64/
# note that n is the parameter number starting from 1.
wrapper.java.additional.1=-Dedex.run.mode=${EDEX_RUN_MODE}
wrapper.java.additional.2=-Dedex.home=${EDEX_HOME}
# fixes Logjam vulnerability, see https://weakdh.org/
wrapper.java.additional.3=-Djdk.tls.ephemeralDHKeySize=2048
# Use wrapper.jvm.parameter.order.# to specify the order
# that the jvm parameters should be included in the command.
@@ -100,6 +99,9 @@ wrapper.java.additional.gc.4=-XX:SoftRefLRUPolicyMSPerMB=${SOFT_REF_LRU_POLICY_M
wrapper.java.additional.stacktraces.1=-XX:-OmitStackTraceInFastThrow
# use qpid binding URL instead of default address string format
wrapper.java.additional.qpid.1=-Dqpid.dest_syntax=BURL
# hibernate.cfg.xml cannot read from ENV variables but can read from Java system properties
wrapper.java.additional.db.1=-Ddb.addr=${DB_HOST}
wrapper.java.additional.db.2=-Ddb.port=${DB_PORT}
@@ -136,12 +138,7 @@ wrapper.java.additional.log.5=-Djava.util.logging.config.file=${EDEX_HOME}/conf/
# the max size in MB of any stream sent to thrift, this prevents the OutOfMemory
# errors reported by thrift sometimes when the stream is corrupt/incorrect
wrapper.java.additional.thrift.maxStreamSize=-Dthrift.stream.maxsize=320
# define properties for rest path
# required due to issue in camel 2.23+ reading env variables
wrapper.java.additional.http.1=-Dedex.http.port=${HTTP_PORT}
wrapper.java.additional.http.2=-Dedex.http.server.path=${HTTP_SERVER_PATH}
wrapper.java.additional.thrift.maxStreamSize=-Dthrift.stream.maxsize=200
#wrapper.java.additional.retain.failed=-Dretain.failed.data=${RETAIN_FAILED}
@@ -154,13 +151,6 @@ wrapper.java.additional.prefs.1=-Djava.util.prefs.userRoot=${HOME}/.java/${HOSTN
# Add option to override java.security settings if needed
wrapper.java.additional.security.1=${JAVA_SECURITY_OPTION}
wrapper.java.additional.ignite.1=-DIGNITE_NO_ASCII=true
wrapper.java.additional.ignite.2=-DIGNITE_QUIET=false
wrapper.java.additional.ignite.3=-Djava.net.preferIPv4Stack=true
wrapper.java.additional.ignite.4=-DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
wrapper.java.additional.ignite.5=-Da2.ignite.defaultTxTimeout=120000
wrapper.java.additional.ignite.6=-Da2.ignite.txTimeoutOnPartitionMapExchange=30000
# Initial Java Heap Size (in MB)
wrapper.java.initmemory=${INIT_MEM}


@@ -1,17 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>deploy.ignite.awips2</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.python.pydev.PyDevBuilder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.python.pydev.pythonNature</nature>
</natures>
</projectDescription>


@@ -1,5 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?eclipse-pydev version="1.0"?><pydev_project>
<pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
<pydev_property name="org.python.pydev.PYTHON_PROJECT_VERSION">python interpreter</pydev_property>
</pydev_project>


@@ -1,2 +0,0 @@
<project name="deploy.esb" >
</project>


@@ -1,93 +0,0 @@
<project name="deploy.esb" default="main">
<!-- <import file="deploy-web.xml" /> -->
<target name="main">
<!-- on a developer machine, the following directories should
already exist. -->
<property name="ignite.root.directory" location="${edex.root.directory}" />
<property name="ignite.src.directory" location="${repo.dir}/../ufcore/ignite/com.raytheon.uf.ignite.core" />
<property name="ignite.config.directory" location="${ignite.root.directory}/config" />
<property name="ignite.tls.directory" location="${ignite.root.directory}/tls" />
<property name="ignite.bin.directory" location="${ignite.root.directory}/bin" />
<property name="ignite.lib.directory" location="${ignite.root.directory}/lib" />
<property name="ignite.conf.directory" location="${ignite.root.directory}/conf" />
<mkdir dir="${ignite.config.directory}" />
<mkdir dir="${ignite.tls.directory}" />
<mkdir dir="${ignite.bin.directory}" />
<mkdir dir="${ignite.lib.directory}" />
<mkdir dir="${ignite.conf.directory}" />
<antcall target="cleanup" />
<antcall target="deploy.esb" />
</target>
<target name="cleanup">
<!-- delete all files under lib directory -->
<echo message="Cleaning target directory: ${ignite.lib.directory}/" />
<delete includeemptydirs="true">
<fileset dir="${ignite.lib.directory}/" />
</delete>
<!-- delete the shell scripts from bin directory -->
<echo message="Cleaning target directory: ${ignite.bin.directory}/" />
<delete includeemptydirs="true">
<fileset dir="${ignite.bin.directory}/">
<exclude name="**/setup.env"/>
</fileset>
</delete>
<!-- delete all files under conf directory (preserve site overrides) -->
<echo message="Cleaning target directory: ${ignite.conf.directory}" />
<delete>
<fileset dir="${ignite.conf.directory}">
<exclude name="**/site/**"/>
<exclude name="**/auth/**"/>
</fileset>
</delete>
<!-- delete all files from config directory -->
<echo message="Cleaning target directory: ${ignite.config.directory}/" />
<delete includeemptydirs="true">
<fileset dir="${ignite.config.directory}/" />
</delete>
</target>
<target name="deploy.esb">
<copy todir="${ignite.bin.directory}"
overwrite="${esb.overwrite}">
<fileset dir="${ignite.src.directory}/scripts" />
</copy>
<!-- set executable permissions - a2_ignite.sh. -->
<chmod file="${ignite.bin.directory}/a2_ignite.sh" perm="ugo+rx" />
<copy todir="${ignite.config.directory}"
overwrite="${esb.overwrite}">
<fileset dir="${ignite.src.directory}/config" />
</copy>
<copy todir="${ignite.tls.directory}"
overwrite="${esb.overwrite}">
<fileset dir="${ignite.src.directory}/tls" />
</copy>
<chmod perm="o-rwx">
<fileset dir="${ignite.tls.directory}" />
</chmod>
<!-- conf/jms/auth -->
<copy todir="${ignite.conf.directory}"
overwrite="${esb.overwrite}">
<fileset dir="${esb.directory}/conf">
<include name="**/jms/auth/*" />
</fileset>
</copy>
<!-- set executable permissions - private keys -->
<chmod file="${ignite.conf.directory}/jms/auth/*.key" perm="go-rwx" />
<chmod file="${ignite.conf.directory}/jms/auth/*.pk8" perm="go-rwx" />
</target>
<taskdef resource="net/sf/antcontrib/antlib.xml"
classpath="${basedir}/lib/ant/ant-contrib-1.0b3.jar" />
</project>


@@ -1,5 +0,0 @@
.git
.metadata
.svn
eclipse-rcp-mars-1-linux-gtk-x86_64.tar.gz
java.tar


@@ -1,15 +0,0 @@
awips2-cimss
awips2-core-foss
awips2-core
awips2-data-delivery
awips2-drawing
awips2-foss
awips2-goesr
awips2-gsd
awips2-nativelib
awips2-ncep
awips2-nws
awips2-ogc
awips2-rpm
awips2-static
python-awips


@@ -1,37 +0,0 @@
edexOsgi/* cave/* localization/*
javaUtilities/* rpms pythonPackages
*.pdf
../awips2-nativelib/*
../awips2-cimss/common/*
../awips2-cimss/edex/*
../awips2-cimss/features/*
../awips2-cimss/viz/*
../awips2-core/common/*
../awips2-core/edex/*
../awips2-core/features/*
../awips2-core/viz/*
../awips2-core-foss/lib/*
../awips2-foss/lib/*
../awips2-ncep/common/*
../awips2-ncep/viz/*
../awips2-ncep/features/*
../awips2-ncep/edex/*
../awips2-goesr/edexOsgi/*
../awips2-goesr/cave/*
../awips2-unidata/*
../python-awips
../awips2-data-delivery/common/*
../awips2-data-delivery/edex/*
../awips2-data-delivery/features/*
../awips2-data-delivery/viz/*
../awips2-drawing/viz/*
../awips2-drawing/features/*
../awips2-gsd/viz/*
../awips2-gsd/features/*
../awips2-ogc/foss/*
../awips2-ogc/edex/*
../awips2-ogc/features/*
../awips2-nws/edex/*
../awips2-nws/common/*
../awips2-nws/features/*
../awips2-nws/viz/*


@@ -1,35 +0,0 @@
edexOsgi/* cave/* localization
javaUtilities/* rpms pythonPackages
build/deploy.edex
build/deploy.edex.awips2
build/deploy.ignite.awips2
../awips2-nativelib/*
../awips2-cimss/edex/*
../awips2-cimss/features/*
../awips2-cimss/viz/*
../awips2-cimss/common/*
../awips2-core/common/*
../awips2-core/edex/*
../awips2-core/features/*
../awips2-core/ignite/*
../awips2-core/viz/*
../awips2-core-foss/lib/*
../awips2-foss/lib/*
../awips2-rpm/foss
../awips2-rpm/installers
../awips2-ncep/common/*
../awips2-ncep/viz/*
../awips2-ncep/features/*
../awips2-ncep/edex/*
../awips2-nws/edex/*
../awips2-nws/common/*
../awips2-nws/features/*
../awips2-nws/viz/*
../awips2-goesr/edexOsgi/*
../awips2-goesr/cave/*
../awips2-gsd/viz/*
../awips2-gsd/features/*
../awips2-ogc/foss/*
../awips2-ogc/edex/*
../awips2-ogc/features/*
../python-awips


@@ -1,92 +0,0 @@
#!/bin/sh -xe
#
# Unidata AWIPS Build Setup Script
# author: Michael James
# maintainer: <tiffanym@ucar.edu>
#
#
# Require an OS type (el7) to be specified
#
if [ -z "$1" ]; then
echo "supply type (el7)"
exit 1
fi
os_version=$1
rpmname=$2
dirs=" -v `pwd`:/awips2/repo/awips2-builds:rw "
. /awips2/repo/awips2-builds/build/buildEnvironment.sh
version=${AWIPSII_VERSION}-${AWIPSII_RELEASE}
java -jar /awips2/repo/awips-unidata-builds/all/awips_splashscreen_updater.jar "$version"
splashLoc=$(find /awips2/repo/awips2/cave -name "splash.bmp")
mv splash.bmp $splashLoc
echo "replacing splash.bmp"
#Set CAVE About information
echo "0=$AWIPSII_VERSION-$AWIPSII_RELEASE
1=$AWIPSII_BUILD_DATE
2=$AWIPSII_BUILD_SYS">/awips2/repo/awips2/cave/com.raytheon.viz.product.awips/about.mappings
# If local source directories exist, mount them to the container
if [ "$rpmname" = "buildCAVE" ]; then
for dn in `cat build/repos| grep -v static| grep -v nativelib |grep -v awips2-rpm`
do
echo $dn
if [ -d /awips2/repo/$dn ]; then
dirs+=" -v /awips2/repo/${dn}:/awips2/repo/${dn} "
fi
done
else
for dn in `cat build/repos`
do
echo $dn
if [ -d /awips2/repo/$dn ]; then
dirs+=" -v /awips2/repo/${dn}:/awips2/repo/${dn} "
fi
done
fi
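Both branches above build up a list of docker -v bind-mount flags, one per sibling repo that actually exists on disk, with the buildCAVE case filtering a few repos out. The same list-building logic, sketched in Python (paths and names are illustrative):

```python
import os

def build_mount_flags(repos, repo_root="/awips2/repo", exclude=()):
    """One '-v src:src' docker flag per locally present repo, minus exclusions."""
    flags = []
    for name in repos:
        if any(word in name for word in exclude):
            continue                      # mimics the grep -v filters
        path = os.path.join(repo_root, name)
        if os.path.isdir(path):           # only mount repos that exist locally
            flags.append(f"-v {path}:{path}")
    return flags
```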
#
# Run Docker AWIPS ADE Image
#
imgname=tiffanym13/awips-ade
imgvers=20.3.2
sudo docker run --entrypoint=/bin/bash --privileged -d -ti -e "container=docker" $dirs $imgname-$imgvers-2:$imgvers-$os_version
dockerID=$(sudo docker ps | grep awips-ade | awk '{print $1}' | head -1 )
sudo docker logs $dockerID
sudo docker exec -ti $dockerID /bin/bash -xec "/awips2/repo/awips2-builds/build/build_rpms.sh $os_version $rpmname";
#sudo docker stop $dockerID
#sudo docker rm -v $dockerID
#
# Update/Recreate YUM Repository
#
date=$(date +%Y%m%d)
if [[ $(whoami) == "awips" ]]; then # local build
#copy awips_install-YYYYMMDD.sh to robin
#TM#cp awips_install.sh awips_install-${date}.sh
#TM#echo "rsync -aP awips_install-${date}.sh tiffanym@fserv:/share/awips2/${AWIPSII_VERSION}/linux/"
#TM#rsync -aP awips_install-${date}.sh tiffanym@fserv:/share/awips2/${AWIPSII_VERSION}/linux/
#For testing, copy el7-test.repo to robin with updated path
#sed -i 's/el7-dev-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/el7-dev-${date}/' dist/el7-test.repo
sudo mv dist/${os_version}-dev dist/${os_version}-dev-${date}
sudo su - -c "createrepo -g /awips2/repo/awips2/dist/comps.xml /awips2/repo/awips2/dist/${os_version}-dev-${date}/"
sudo chown -R awips:fxalpha dist/${os_version}-dev-${date}
echo "rsync -aP dist/${os_version}-dev-${date} tiffanym@fserv:/share/awips2/${AWIPSII_VERSION}/linux/"
rsync -aP dist/${os_version}-dev-${date} tiffanym@fserv:/share/awips2/${AWIPSII_VERSION}/linux/
cmd="cd /share/awips2/${AWIPSII_VERSION}/linux ; find ${os_version}-dev-${date} -type f | ../../git_nexus_tool/nexus-tools/bash/nexus-upload.sh -t downloads -u tiffanym -o awips2 -v ${AWIPSII_VERSION}/linux/rpms/"
echo "Need to run ssh tiffanym@fserv '${cmd}' and provide -p [password]"
#rsync -aP dist/${os_version}-dev-${date} awips@edex3:/awips2/dev
#rsync -aP dist/${os_version}-dev-${date} awips@hardy:/awips2/dev
#repomanage -k1 --old dist/${os_version}-dev | xargs rm -f
#
# Push to web server
#
#rsync --archive --delete dist/${os_version}-dev tomcat@www:/web/content/repos/yum/
fi


@@ -203,7 +203,7 @@ javacDebugInfo=false
javacFailOnError=true
# Enable or disable verbose mode of the compiler
javacVerbose=false
javacVerbose=true
# Extra arguments for the compiler. These are specific to the java compiler being used.
compilerArg=-g:lines,source -nowarn


@@ -1,19 +1,12 @@
###############################################################################
# Copyright (c) 2003, 2016 IBM Corporation and others.
#
# This program and the accompanying materials
# are made available under the terms of the Eclipse Public License 2.0
# Copyright (c) 2003, 2006 IBM Corporation and others.
# All rights reserved. This program and the accompanying materials
# are made available under the terms of the Eclipse Public License v1.0
# which accompanies this distribution, and is available at
# https://www.eclipse.org/legal/epl-2.0/
#
# SPDX-License-Identifier: EPL-2.0
# http://www.eclipse.org/legal/epl-v10.html
#
# Contributors:
# IBM Corporation - initial API and implementation
# Compuware Corporation - Sebastien Angers <sebastien.angers@compuware.com>
# - Enabled additional mirror slicingOptions in Headless PDE Build
# - Enabled 'raw' attribute for mirror step in Headless PDE Build
# - https://bugs.eclipse.org/338878
###############################################################################
#####################
# Parameters describing how and where to execute the build.
@@ -25,21 +18,18 @@
# Of course any of the settings here can be overridden by spec'ing
# them on the command line (e.g., -DbaseLocation=d:/eclipse)
#The type of the top level element we are building, generally "feature"
topLevelElementType = feature
#The id of the top level element we are building
#topLevelElementId = org.foo.bar
############# PRODUCT/PACKAGING CONTROL #############
runPackager=true
#Needed for p2, comment out these lines if using developer.product
p2.gathering=true
generate.p2.metadata=true
generate.p2.metadata = true
p2.metadata.repo=file:${buildDirectory}/repository
p2.artifact.repo=file:${buildDirectory}/repository
p2.flavor=tooling
p2.publish.artifacts=true
############# PRODUCT/PACKAGING CONTROL #############
runPackager=true
#Set the name of the archive that will result from the product build.
#archiveNamePrefix=
@@ -51,40 +41,37 @@ collectingFolder=${archivePrefix}
# The list of {os, ws, arch} configurations to build. This
# value is a '&' separated list of ',' separate triples. For example,
# configs=win32,win32,x86 & linux,gtk,x86
# configs=win32,win32,x86 & linux,motif,x86
# By default the value is *,*,*
#configs = *, *, *
configs = linux, gtk, x86_64
#configs=win32, win32, x86 & \
# win32,win32,x86_64 & \
# linux, gtk, x86 & \
configs=linux,gtk,x86_64
# win32,win32,x86_64
# win32, win32, x86 & \
# linux, gtk, x86_64 & \
# macosx, cocoa, x86 & \
# macosx, cocoa, x86_64
# linux, motif, x86 & \
# solaris, motif, sparc & \
# solaris, gtk, sparc & \
# aix, motif, ppc & \
# hpux, motif, PA_RISC & \
# macosx, carbon, ppc
# By default PDE creates one archive (result) per entry listed in the configs property.
# Setting this value to true will cause PDE to only create one output containing all
# Setting this value to try will cause PDE to only create one output containing all
# artifacts for all the platforms listed in the configs property.
# To control the output format for the group, add a "group, group, group - <format>" entry to the
# archivesFormat.
#groupConfigurations=true
#The format of the archive. By default a zip is created using antZip.
#The list can only contain the configuration for which the desired format is different than zip.
#archivesFormat=win32, win32, x86 - antZip& \
# linux, gtk, ppc - antZip &\
# linux, gtk, x86 - antZip& \
# linux, gtk, x86_64 - antZip
#Allow cycles involving at most one bundle that needs to be compiled with the rest being binary bundles.
allowBinaryCycles = true
#Sort bundle dependencies across all features instead of just within a given feature.
#flattenDependencies = true
#Parallel compilation, requires flattenedDependencies=true
#parallelCompilation=true
#parallelThreadCount=
#parallelThreadsPerProcessor=
# linux, gtk, x86_64 - antZip& \
# linux, motif, x86 - antZip& \
# solaris, motif, sparc - antZip& \
# solaris, gtk, sparc - antZip& \
# aix, motif, ppc - antZip& \
# hpux, motif, PA_RISC - antZip& \
# macosx, carbon, ppc - antZip
#Set to true if you want the output to be ready for an update jar (no site.xml generated)
#outputUpdateJars = false
@@ -95,15 +82,12 @@ allowBinaryCycles = true
#jnlp.codebase=<codebase url>
#jnlp.j2se=<j2se version>
#jnlp.locale=<a locale>
#jnlp.generateOfflineAllowed=true or false generate <offlineAllowed/> attribute in the generated features
#jnlp.configs=${configs} #uncomment to filter the content of the generated jnlp files based on the configuration being built
#Set to true if you want to sign jars
#signJars=false
#sign.alias=<alias>
#sign.keystore=<keystore location>
#sign.storepass=<keystore password>
#sign.keypass=<key password>
#Arguments to send to the zip executable
zipargs=
@@ -114,44 +98,7 @@ tarargs=
#Control the creation of a file containing the version included in each configuration - on by default
#generateVersionsLists=false
############ REPO MIRROR OPTIONS CONTROL ############
# Default values for the slicingOptions and raw attribute of the p2.mirror Ant target used to generate the p2 repo (buildRepo)
# Note that the default values used by PDE/Build are different from the default values for p2.mirror's slicingOptions and raw attribute
# See http://help.eclipse.org/topic//org.eclipse.platform.doc.isv/guide/p2_repositorytasks.htm for the details
# of each setting.
#p2.mirror.slicing.filter=
#p2.mirror.slicing.followOnlyFilteredRequirements=false
#p2.mirror.slicing.followStrict=false
#p2.mirror.slicing.includeFeatures=true
#p2.mirror.slicing.includeNonGreedy=false
#p2.mirror.slicing.includeOptional=true
#p2.mirror.slicing.platformFilter=
#p2.mirror.slicing.latestVersionOnly=false
p2.mirror.raw=true
############## SOURCE BUNDLE CONTROL ################
# Set this property to have source bundles created and output into build repository.
# This does NOT put them in the build output (e.g., product) itself.
# Valid values are: not set, built, all.
# built = only source for bundles that are actually built/compiled in this run are output
# all = all available source is collected and output
#sourceBundleMode=all
# When outputting autogenerated source bundles a feature is created to contain all the automatic
# source bundles. Typically this feature is not needed and can be ignored. As such, it is given a default
# name and version. These properties can be used to override the defaults.
# sourceBundleTemplateFeature - can specify an existing feature which will be augmented to form the generated source feature
# sourceBundleFeatureId - will be the id of generated source feature which contains all the generated source bundles, default value
# is sourceBundleTemplateFeature + ".source" if sourceBundleTemplateFeature is specified
#sourceBundleTemplateFeature=
#sourceBundleFeatureId=
#sourceBundleFeatureVersion=
############## BUILD NAMING CONTROL ################
# The directory into which the build elements are fetched and where
# the build takes place.
buildDirectory=${user.home}/eclipse.build
# Type of build. Used in naming the build output. Typically this value is
# one of I, N, M, S, ...
@@ -181,61 +128,16 @@ timestamp=007
# in most RCP app or a plug-in, the baseLocation should be the location of a previously
# installed Eclipse against which the application or plug-in code will be compiled and the RCP delta pack.
#base=<path/to/parent/of/eclipse>
#baseLocation=${base}/eclipse
#Folder containing repositories whose content is needed to compile against
#repoBaseLocation=${base}/repos
#Folder where the content of the repositories from ${repoBaseLocation} will be made available as a form suitable to be compiled against
#transformedRepoLocation=${base}/transformedRepos
#Os/Ws/Arch/nl of the eclipse specified by baseLocation
baseos=linux
basews=gtk
basearch=x86_64
basearch=x86
#this property indicates whether you want the set of plug-ins and features to be considered during the build to be limited to the ones reachable from the features / plugins being built
filteredDependencyCheck=false
#this property indicates whether the resolution should be done in development mode (i.e. ignore multiple bundles with singletons)
resolution.devMode=false
#pluginPath is a list of locations in which to find plugins and features. This list is separated by the platform file separator (; or :)
#a location is one of:
#- the location of the jar or folder that is the plugin or feature : /path/to/foo.jar or /path/to/foo
#- a directory that contains a /plugins or /features subdirectory
#- the location of a feature.xml, or for 2.1 style plugins, the plugin.xml or fragment.xml
#pluginPath=
skipBase=true
eclipseURL=<url for eclipse download site>
eclipseBuildId=<Id of Eclipse build to get>
eclipseBaseURL=${eclipseURL}/eclipse-platform-${eclipseBuildId}-win32.zip
############# MAP FILE CONTROL ################
# This section defines CVS tags to use when fetching the map files from the repository.
# If you want to fetch the map file from repository / location, change the getMapFiles target in the customTargets.xml
skipMaps=true
mapsRepo=:pserver:anonymous@example.com/path/to/repo
mapsRoot=path/to/maps
mapsCheckoutTag=HEAD
#tagMaps=true
mapsTagTag=v${buildId}
############ REPOSITORY CONTROL ###############
# This section defines properties parameterizing the repositories where plugins, fragment
# bundles and features are obtained from.
# The tags to use when fetching elements to build.
# By default the builder will use whatever is in the maps.
# This value takes the form of a comma separated list of repository identifier (like used in the map files) and the
# overriding value
# For example fetchTag=CVS=HEAD, SVN=v20050101
# fetchTag=HEAD
skipFetch=true

@@ -41,7 +41,6 @@ com.raytheon.uf.viz.d2d.gfe.feature
com.raytheon.uf.viz.ncep.dataplugins.feature
com.raytheon.uf.viz.alertview.feature
com.raytheon.viz.satellite.feature
com.raytheon.uf.viz.satellite.goesr.feature
com.raytheon.uf.viz.ncep.displays.feature
com.raytheon.uf.viz.ncep.nsharp.feature
com.raytheon.uf.viz.d2d.nsharp.feature
@@ -52,6 +51,3 @@ com.raytheon.uf.viz.ncep.npp.feature
com.raytheon.uf.viz.ncep.perspective.feature
com.raytheon.uf.viz.d2d.skewt.feature
com.raytheon.uf.viz.server.edex.feature
com.raytheon.uf.viz.dataplugin.nswrc.feature
edu.wisc.ssec.cimss.viz.probsevere.feature
gov.noaa.nws.sti.mdl.viz.griddednucaps.feature

@@ -208,9 +208,6 @@
<antcall target="p2.build.repo">
<param name="feature" value="com.raytheon.viz.satellite.feature" />
</antcall>
<antcall target="p2.build.repo">
<param name="feature" value="com.raytheon.uf.viz.satellite.goesr.feature" />
</antcall>
<antcall target="p2.build.repo">
<param name="feature" value="com.raytheon.uf.viz.ncep.core.feature" />
</antcall>
@@ -298,15 +295,7 @@
<antcall target="p2.build.repo">
<param name="feature" value="com.raytheon.uf.viz.d2d.ui.awips.feature" />
</antcall>
<antcall target="p2.build.repo">
<param name="feature" value="com.raytheon.uf.viz.alertview.feature" />
</antcall>
<antcall target="p2.build.repo">
<param name="feature" value="edu.wisc.ssec.cimss.viz.probsevere.feature" />
</antcall>
<antcall target="p2.build.repo">
<param name="feature" value="gov.noaa.nws.sti.mdl.viz.griddednucaps.feature" />
</antcall>
<antcall target="cleanup.features" />
</target>

@@ -32,9 +32,6 @@
# Sep 17, 2015 #4869 bkowal Read dynamic AlertViz version information at startup.
# Oct 05, 2015 #4869 bkowal Fix AlertViz argument ordering
# Feb 15, 2017 6025 tgurney Force use of GTK2
# Nov 21, 2019 7597 randerso Re-enable use of GTK3
# Jan 09, 2020 7606 randerso Remove jre directory level from JAVA_HOME
# Apr 15, 2020 8144 tgurney Set the port dynamically based on user ID
#
user=`/usr/bin/whoami`
@@ -69,7 +66,7 @@ export AWIPS_INSTALL_DIR=${ALERTVIZ_INSTALL}
export LD_LIBRARY_PATH=${JAVA_INSTALL}/lib:$LD_LIBRARY_PATH
export PATH=${JAVA_INSTALL}/bin:$PATH
export JAVA_HOME="${JAVA_INSTALL}"
export JAVA_HOME="${JAVA_INSTALL}/jre"
exitVal=1
@@ -203,9 +200,8 @@ if [ -f ${dir}/awipsVersion.txt ]; then
IFS=${prevIFS}
fi
# Allows multiple users to run AlertViz simultaneously on the same workstation
# Have to multiply by 2 because AlertViz opens two ports, n and n+1
ALERTVIZ_PORT=$((61998+$(id -u)%1024*2))
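The port arithmetic above can be sketched in Python to show the resulting range; `alertviz_port` is an illustrative name, and the shell's left-to-right `%`/`*` precedence is preserved:

```python
# Each user gets an even base port in [61998, 64044]; AlertViz opens
# that port and the next one (n, n+1), hence the factor of 2.
def alertviz_port(uid):
    return 61998 + uid % 1024 * 2  # parsed as 61998 + ((uid % 1024) * 2)

print(alertviz_port(5123))  # 5123 % 1024 == 3, so 61998 + 6 == 62004
```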
# Force GTK2
export SWT_GTK3=0
#run a loop for alertviz
count=0
@@ -220,9 +216,9 @@ do
# VERSION_ARGS includes jvm arguments so it must always be at the end of the argument
# sequence passed to AlertViz.
if [ -w $FULL_LOGDIR ] ; then
${dir}/alertviz -p $ALERTVIZ_PORT "${SWITCHES[@]}" $* "${VERSION_ARGS[@]}" > /dev/null 2>&1 &
${dir}/alertviz "${SWITCHES[@]}" $* "${VERSION_ARGS[@]}" > /dev/null 2>&1 &
else
${dir}/alertviz -p $ALERTVIZ_PORT "${SWITCHES[@]}" $* "${VERSION_ARGS[@]}" &
${dir}/alertviz "${SWITCHES[@]}" $* "${VERSION_ARGS[@]}" &
fi
pid=$!
wait $pid

@@ -39,7 +39,6 @@ if [ ! -f /tmp/vizUtility.log ]; then
else
echo "" > /tmp/vizUtility.log
fi
chgrp fxalpha /tmp/vizUtility.log
date >> /tmp/vizUtility.log
@@ -148,5 +147,4 @@ done
date >> /tmp/vizUtility.log
echo >> /tmp/vizUtility.log
# Fix for appLauncher to work with IdM users
chgrp -f fxalpha /tmp/appLauncher.out /tmp/appLauncher.log

Binary file not shown.

@@ -44,16 +44,7 @@
# Feb 6, 2017 #6025 tgurney Force use of GTK2
# Nov 07, 2017 6516 randerso Use correct ini file for gfeClient
# Apr 23, 2018 6351 mapeters Fix looking up of ini file
# Jun 27, 2019 7876 dgilling Update LD_LIBRARY_PATH for python 3.
# Nov 21, 2019 7597 randerso Re-enable use of GTK3
# Jan 09, 2020 7606 randerso Remove jre directory level from JAVA_HOME
# Feb 05, 2020 7867 randerso Fix ERROR message at cave startup regarding apps_dir
# Apr 20, 2020 8137 tgurney Force use of the short hostname as the
# default Text Workstation hostname
# Sep 23, 2020 8228 randerso Disable GTK overlay scrollbars due to issues with TreeEditors.
# See Eclipse bug https://bugs.eclipse.org/bugs/show_bug.cgi?id=560071
# Apr 29, 2021 8137 randerso Remove TEXTWS environment variable
##
#
user=`/usr/bin/whoami`
@@ -90,16 +81,15 @@ deleteOldCaveDiskCaches &
# Enable core dumps
ulimit -c unlimited >> /dev/null 2>&1
export LD_LIBRARY_PATH=${JAVA_INSTALL}/lib:${PYTHON_INSTALL}/lib:${PYTHON_INSTALL}/lib/python3.6/site-packages/jep:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=${JAVA_INSTALL}/lib:${PYTHON_INSTALL}/lib:${PYTHON_INSTALL}/lib/python2.7/site-packages/jep:$LD_LIBRARY_PATH
if [[ -z "$CALLED_EXTEND_LIB_PATH" ]]; then
extendLibraryPath
fi
export PATH=${JAVA_INSTALL}/bin:${PYTHON_INSTALL}/bin:$PATH
export JAVA_HOME="${JAVA_INSTALL}"
export JAVA_HOME="${JAVA_INSTALL}/jre"
# The user can update this field if they choose to do so.
export SHARE_DIR="/awips2/edex/data/share"
export HYDRO_APPS_DIR="${SHARE_DIR}/hydroapps"
export HYDRO_APPS_DIR="/awips2/edex/data/share/hydroapps"
export EDEX_HOME=/awips2/edex
export LOCALIZATION_ROOT=~/caveData/common
@@ -108,9 +98,31 @@ if [ $? -ne 0 ]; then
echo "FATAL: Unable to locate the PostgreSQL JDBC Driver."
exit 1
fi
export apps_dir=${HYDRO_APPS_DIR}
SWITCHES=($SWITCHES)
MODE="PRACTICE"
TESTCHECK="$TMCP_HOME/bin/getTestMode"
if [ -x ${TESTCHECK} ]; then
echo "Calling getTestMode()"
${TESTCHECK}
status=${?}
if [ $status -eq 11 ]; then
MODE="TEST"
SWITCHES+=(-mode TEST)
elif [ $status -eq 12 ];then
MODE="PRACTICE"
SWITCHES+=(-mode PRACTICE)
elif [ $status -eq 15 ];then
MODE="OPERATIONAL"
SWITCHES+=(-mode OPERATIONAL)
else
MODE="OPERATIONAL (no response)"
fi
echo "getTestMode() returned ${MODE}"
else
MODE="UNKNOWN"
echo "getTestMode() not found - going to use defaults"
fi
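The exit-status handling above amounts to a small lookup table; a hedged sketch (the function name is illustrative, the codes are taken from the script):

```python
# getTestMode() exit codes as handled above:
# 11 -> TEST, 12 -> PRACTICE, 15 -> OPERATIONAL; anything else adds no switch.
def mode_switches(status):
    modes = {11: "TEST", 12: "PRACTICE", 15: "OPERATIONAL"}
    mode = modes.get(status)
    return ["-mode", mode] if mode is not None else []

print(mode_switches(11))  # ['-mode', 'TEST']
print(mode_switches(99))  # []
```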
VERSION_ARGS=()
if [ -f ${CAVE_INSTALL}/awipsVersion.txt ]; then
@@ -122,6 +134,13 @@ if [ -f ${CAVE_INSTALL}/awipsVersion.txt ]; then
IFS=${prevIFS}
fi
TEXTWS=`hostname | sed -e 's/lx/xt/g'`
if [[ $XT_WORKSTATIONS != *$TEXTWS* ]]
then
TEXTWS=`hostname`
fi
export TEXTWS
hostName=`hostname -s`
if [[ -z "$PROGRAM_NAME" ]]
@@ -226,9 +245,8 @@ export LOGFILE_STARTUP_SHUTDOWN="$FULL_LOGDIR/${PROGRAM_NAME}_${pid}_${curTime}_
createEclipseConfigurationDir
TMP_VMARGS="--launcher.appendVmargs -vmargs -Djava.io.tmpdir=${eclipseConfigurationDir}"
# Disable GTK3 Overlay Scrollbars due to issues with TreeEditors.
# See Eclipse bug https://bugs.eclipse.org/bugs/show_bug.cgi?id=560071
export GTK_OVERLAY_SCROLLING=0
# Force GTK2
export SWT_GTK3=0
# At this point fork so that log files can be set up with the process pid and
# this process can log the exit status of cave.

@@ -45,9 +45,7 @@
# May 27, 2016 ASM#18971 dfriedman Fix local variable usage in deleteOldEclipseConfigurationDirs
# Aug 09, 2016 ASM#18911 D. Friedman Add minimum purge period of 24 hours. Use a lock file to prevent
# simultaneous purges. Allow override of days to keep.
# Jan 26, 2017 #6092 randerso return exitCode so it can be propagated back to through the calling processes
# Oct 22, 2019 #7943 tjensen Remove -x flag from grep check in deleteOldEclipseConfigurationDirs()
# Jan 31, 2022 tiffanym@ucar.edu Clean up output when CAVE is started
# Jan 26,2017 #6092 randerso return exitCode so it can be propagated back to through the calling processes
########################
source /awips2/cave/iniLookup.sh
@@ -419,7 +417,9 @@ function deleteOldCaveLogs()
# Purge the old logs.
local n_days_to_keep=${CAVE_LOG_DAYS_TO_KEEP:-30}
find "$logdir" -type f -name "*.log" -mtime +"$n_days_to_keep" | xargs -r rm
echo -e "Cleaning consoleLogs: "
echo -e "find $logdir -type f -name "*.log" -mtime +$n_days_to_keep | xargs rm "
find "$logdir" -type f -name "*.log" -mtime +"$n_days_to_keep" | xargs rm
# Record the last purge time and remove the lock file.
echo $(date +%s) > "$last_purge_f"
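The `find "$logdir" -type f -name "*.log" -mtime +n | xargs rm` purge above can be approximated in Python; this is a rough equivalent for illustration only (the function name and the 30-day default are taken as assumptions from the script):

```python
import os
import time

def purge_old_logs(logdir, n_days=30):
    """Delete *.log files older than n_days, like `find -mtime +n | xargs rm`."""
    cutoff = time.time() - n_days * 86400
    for root, _dirs, files in os.walk(logdir):
        for name in files:
            if name.endswith(".log"):
                path = os.path.join(root, name)
                if os.path.getmtime(path) < cutoff:
                    os.remove(path)
```

Unlike the shell pipeline, this version never invokes `rm` on an empty argument list, which is what the `-r` flag to `xargs` guards against in the hunk above.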
@@ -457,7 +457,7 @@ function deleteOldEclipseConfigurationDirs()
IFS=$save_IFS
local p
for p in "${old_dirs[@]}"; do
if ! echo "$in_use_dirs" | grep -qF "$p"; then
if ! echo "$in_use_dirs" | grep -qxF "$p"; then
rm -rf "$p"
fi
done
@@ -473,7 +473,7 @@ function deleteEclipseConfigurationDir()
function createEclipseConfigurationDir()
{
local d dir id=$(hostname)-$(whoami)
for d in "$HOME/caveData/.cave-eclipse/"; do
for d in "/local/cave-eclipse/" "$HOME/.cave-eclipse/"; do
if [[ $d == $HOME/* ]]; then
mkdir -p "$d" || continue
fi
@@ -486,7 +486,7 @@ function createEclipseConfigurationDir()
fi
done
echo "Unable to create a unique Eclipse configuration directory. Will proceed with default." >&2
export eclipseConfigurationDir=$HOME/caveData/.cave-eclipse
export eclipseConfigurationDir=$HOME/.cave-eclipse
return 1
}

@@ -0,0 +1,63 @@
#!/usr/bin/env python
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
# Converts netcdf style colormaps to AWIPS II XML colormaps
#
# Usage: ./convCT.py colormap1 colormap2 colormap3
#
# Requires scipy and numpy
#
# Deposits files in /tmp
#
# SOFTWARE HISTORY
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# Jun 23, 2008 chammack Initial creation
#
import pupynere as netcdf
import numpy
import sys
import os
def convert(i):
return str((i & 0xFF) / 255.0)
ct = sys.argv
numct = len(ct)
for k in range(1, numct):
print 'Converting: ' + ct[k]
nc = netcdf.netcdf_file(ct[k], "r")
colors = nc.variables['tableColors'][:][0]
f = open('/tmp/' + os.path.basename(ct[k]).replace('.COLORTABLE', '.cmap'), 'w')
f.write('<colorMap>\n')
aVal = 1.0
for i in range(numpy.shape(colors)[1]):
f.write(" <color ")
f.write('r = "' + convert(colors[0,i]) + '" ')
f.write('g = "' + convert(colors[1,i]) + '" ')
f.write('b = "' + convert(colors[2,i]) + '" ')
f.write('a = "' + str(aVal) + '" ')
f.write('/>\n')
f.write('</colorMap>\n')
f.close()
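The `convert()` helper above maps one color-table byte to the 0..1 string written into the XML `r`/`g`/`b` attributes; a standalone check of just that arithmetic:

```python
# Mask to a byte, then normalize to [0.0, 1.0] as a string.
def convert(i):
    return str((i & 0xFF) / 255.0)

print(convert(255))  # "1.0"
print(convert(0))    # "0.0"
```

The `& 0xFF` mask means values outside a single byte wrap to their low 8 bits before normalization.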

@@ -1,4 +1,4 @@
#!/awips2/python/bin/python3
#!/usr/bin/env python
##
# This software was developed and / or modified by Raytheon Company,

@@ -1,4 +1,4 @@
#!/awips2/python/bin/python3
#!/usr/bin/env python
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
@@ -21,6 +21,9 @@
from optparse import OptionParser
from optparse import OptionGroup
import subprocess
import re
from os.path import isfile
import sys
from FileFilter import FileFilter
import HeaderUpdater

@@ -24,8 +24,8 @@
# ------------ ---------- ----------- --------------------------
# 3 Mar 2010 #3771 jelkins Initial Creation.
from configparser import ConfigParser
from configparser import NoOptionError
from ConfigParser import ConfigParser
from ConfigParser import NoOptionError
from os import pathsep
from os import listdir
from os.path import join

@@ -1,4 +1,4 @@
#!/awips2/python/bin/python3
#!/usr/bin/env python
##
# This software was developed and / or modified by Raytheon Company,
@@ -26,6 +26,8 @@
# ------------ ---------- ----------- --------------------------
# 3 Mar 2010 #3771 jelkins Initial Creation.
from __future__ import with_statement
# the version is derived from the date last updated y.y.m.d
version = "1.0.3.12"
@@ -188,7 +190,7 @@ def main(commandOption=None, FILE=None):
if revertSuffix != None:
try:
rename(inputFileName + revertSuffix, inputFileName)
except OSError as v:
except OSError, v:
logger.error(v)
return

@@ -2,34 +2,32 @@ Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Acarssounding Plug-in
Bundle-SymbolicName: com.raytheon.uf.viz.acarssounding;singleton:=true
Bundle-Version: 1.18.0.qualifier
Bundle-Version: 1.17.0.qualifier
Bundle-Vendor: RAYTHEON
Bundle-RequiredExecutionEnvironment: JavaSE-11
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Bundle-ActivationPolicy: lazy
Export-Package: com.raytheon.uf.viz.acarssounding
Require-Bundle: org.eclipse.core.runtime,
com.raytheon.uf.common.serialization,
com.raytheon.uf.common.dataplugin.acarssounding,
com.raytheon.uf.common.pointdata,
com.raytheon.uf.common.dataplugin,
com.raytheon.uf.common.datastorage,
com.raytheon.uf.common.dataplugin.level,
com.raytheon.uf.viz.core,
com.raytheon.viz.pointdata,
Require-Bundle: org.eclipse.core.runtime;bundle-version="3.8.0",
com.raytheon.uf.common.serialization;bundle-version="1.12.1174",
com.raytheon.uf.common.dataplugin.acarssounding;bundle-version="1.12.1174",
com.raytheon.uf.common.pointdata;bundle-version="1.12.1174",
com.raytheon.uf.common.dataplugin;bundle-version="1.12.1174",
com.raytheon.uf.common.datastorage;bundle-version="1.12.1174",
com.raytheon.uf.common.dataplugin.level;bundle-version="1.12.1174",
com.raytheon.uf.viz.core;bundle-version="1.12.1174",
com.raytheon.viz.pointdata;bundle-version="1.12.1174",
com.raytheon.uf.common.wxmath,
gov.noaa.nws.ncep.edex.common,
gov.noaa.nws.ncep.ui.nsharp,
com.raytheon.uf.viz.d2d.nsharp,
org.geotools,
javax.measure,
com.raytheon.viz.volumebrowser,
com.raytheon.uf.common.comm,
com.raytheon.uf.common.derivparam,
com.raytheon.uf.viz.volumebrowser.dataplugin,
gov.noaa.nws.ncep.edex.common;bundle-version="1.0.0",
gov.noaa.nws.ncep.ui.nsharp;bundle-version="1.0.0",
com.raytheon.uf.viz.d2d.nsharp;bundle-version="1.0.0",
org.geotools;bundle-version="2.6.4",
javax.measure;bundle-version="1.0.0",
com.raytheon.viz.volumebrowser;bundle-version="1.15.0",
com.raytheon.uf.common.comm;bundle-version="1.12.1174",
com.raytheon.uf.common.derivparam;bundle-version="1.14.0",
com.raytheon.uf.viz.volumebrowser.dataplugin;bundle-version="1.15.0",
com.raytheon.uf.common.geospatial,
com.raytheon.uf.viz.d2d.xy.adapters,
com.raytheon.uf.viz.d2d.core,
javax.xml.bind
com.raytheon.uf.viz.d2d.xy.adapters;bundle-version="1.15.0"
Import-Package: com.raytheon.uf.common.inventory.exception,
com.raytheon.uf.viz.datacube
Bundle-ClassPath: com.raytheon.uf.viz.acarssounding.jar

@@ -19,6 +19,13 @@
**/
package com.raytheon.uf.viz.acarssounding;
import gov.noaa.nws.ncep.edex.common.sounding.NcSoundingCube;
import gov.noaa.nws.ncep.edex.common.sounding.NcSoundingCube.QueryStatus;
import gov.noaa.nws.ncep.edex.common.sounding.NcSoundingLayer;
import gov.noaa.nws.ncep.edex.common.sounding.NcSoundingProfile;
import gov.noaa.nws.ncep.ui.nsharp.NsharpStationInfo;
import gov.noaa.nws.ncep.ui.nsharp.natives.NsharpDataHandling;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
@@ -26,10 +26,12 @@ import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.measure.unit.NonSI;
import javax.measure.unit.SI;
import javax.measure.unit.Unit;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import com.raytheon.uf.common.dataplugin.acarssounding.ACARSSoundingConstants;
import com.raytheon.uf.common.dataplugin.acarssounding.ACARSSoundingLayer;
import com.raytheon.uf.common.dataplugin.acarssounding.ACARSSoundingRecord;
import com.raytheon.uf.common.dataquery.requests.DbQueryRequest;
@@ -44,16 +53,6 @@ import com.raytheon.uf.viz.core.requests.ThriftClient;
import com.raytheon.uf.viz.d2d.nsharp.SoundingLayerBuilder;
import com.raytheon.uf.viz.d2d.nsharp.rsc.D2DNSharpResourceData;
import gov.noaa.nws.ncep.edex.common.sounding.NcSoundingCube;
import gov.noaa.nws.ncep.edex.common.sounding.NcSoundingCube.QueryStatus;
import gov.noaa.nws.ncep.edex.common.sounding.NcSoundingLayer;
import gov.noaa.nws.ncep.edex.common.sounding.NcSoundingProfile;
import gov.noaa.nws.ncep.ui.nsharp.NsharpStationInfo;
import gov.noaa.nws.ncep.ui.nsharp.natives.NsharpDataHandling;
import si.uom.NonSI;
import si.uom.SI;
import tec.uom.se.AbstractUnit;
/**
* Provides sounding data to nsharp from aircraft reports.
*
@@ -67,7 +66,7 @@ import tec.uom.se.AbstractUnit;
* Jul 23, 2014 3410 bclement preparePointInfo() calls unpackResultLocation()
* Dec 17, 2015 5215 dgilling Set point name to stationId.
* Mar 17, 2016 5459 tgurney Compute specific humidity from mixing ratio
* Jan 15, 2019 7697 bsteffen Add aircraft info to location dislay info.
*
* </pre>
*
* @author bsteffen
@@ -116,7 +115,7 @@ public class AcarsSndNSharpResourceData extends D2DNSharpResourceData {
DbQueryRequest request = new DbQueryRequest();
request.setEntityClass(ACARSSoundingRecord.class);
request.setLimit(1);
request.setConstraints(new HashMap<>(
request.setConstraints(new HashMap<String, RequestConstraint>(
getMetadataMap()));
request.addConstraint("dataTime", new RequestConstraint(new DataTime(
stnInfo.getReftime()).toString()));
@@ -127,17 +126,7 @@
.getEntityObjects(ACARSSoundingRecord.class);
if (records.length > 0) {
ACARSSoundingRecord record = records[0];
String phase = record.getPhase();
String loc = record.getTailNumber();
if(ACARSSoundingConstants.ASCENDING_PHASE.equals(phase)){
loc = loc + " Asc.";
}else if(ACARSSoundingConstants.DESCENDING_PHASE.equals(phase)){
loc = loc + " Desc.";
}else if(phase != null){
loc = loc + " " + phase;
}
stnInfo.setLocationDetails(loc);
List<NcSoundingLayer> layers = new ArrayList<>(
List<NcSoundingLayer> layers = new ArrayList<NcSoundingLayer>(
record.getLevels().size());
for (ACARSSoundingLayer layer : record.getLevels()) {
SoundingLayerBuilder builder = new SoundingLayerBuilder();
@@ -153,19 +142,19 @@ }
}
if (layer.getWindSpeed() != null) {
builder.addWindSpeed(layer.getWindSpeed(),
SI.METRE_PER_SECOND);
SI.METERS_PER_SECOND);
}
if (layer.getPressure() != null) {
builder.addPressure(layer.getPressure(), SI.PASCAL);
}
if (layer.getFlightLevel() != null) {
builder.addHeight(layer.getFlightLevel(), SI.METRE);
builder.addHeight(layer.getFlightLevel(), SI.METER);
}
if (layer.getMixingRatio() != null) {
double mixingRatio = layer.getMixingRatio();
if (mixingRatio != 0) {
double specHum = mixingRatio / (mixingRatio + 1.0);
builder.addSpecificHumidity(specHum, AbstractUnit.ONE);
builder.addSpecificHumidity(specHum, Unit.ONE);
}
}
layers.add(builder.toNcSoundingLayer());
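For reference, the mixing-ratio conversion in the hunk above follows the standard relation q = w / (w + 1) for specific humidity q and mixing ratio w (both kg/kg); a quick numeric sketch:

```python
# Specific humidity from (water vapor) mixing ratio, both dimensionless kg/kg.
def specific_humidity(mixing_ratio):
    return mixing_ratio / (mixing_ratio + 1.0)

print(round(specific_humidity(0.010), 6))  # ~0.009901
```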

@@ -1,10 +0,0 @@
eclipse.preferences.version=1
org.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode=enabled
org.eclipse.jdt.core.compiler.codegen.targetPlatform=11
org.eclipse.jdt.core.compiler.compliance=11
org.eclipse.jdt.core.compiler.problem.assertIdentifier=error
org.eclipse.jdt.core.compiler.problem.enablePreviewFeatures=disabled
org.eclipse.jdt.core.compiler.problem.enumIdentifier=error
org.eclipse.jdt.core.compiler.problem.reportPreviewFeatures=warning
org.eclipse.jdt.core.compiler.release=enabled
org.eclipse.jdt.core.compiler.source=11

@@ -6,7 +6,7 @@ Bundle-Version: 1.15.0.qualifier
Bundle-Vendor: RAYTHEON
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Bundle-ActivationPolicy: lazy
Require-Bundle: com.raytheon.uf.common.localization,
com.raytheon.uf.viz.alertview
Require-Bundle: com.raytheon.uf.common.localization;bundle-version="1.14.1",
com.raytheon.uf.viz.alertview;bundle-version="1.15.0"
Service-Component: OSGI-INF/*.xml

@@ -1,10 +0,0 @@
eclipse.preferences.version=1
org.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode=enabled
org.eclipse.jdt.core.compiler.codegen.targetPlatform=11
org.eclipse.jdt.core.compiler.compliance=11
org.eclipse.jdt.core.compiler.problem.assertIdentifier=error
org.eclipse.jdt.core.compiler.problem.enablePreviewFeatures=disabled
org.eclipse.jdt.core.compiler.problem.enumIdentifier=error
org.eclipse.jdt.core.compiler.problem.reportPreviewFeatures=warning
org.eclipse.jdt.core.compiler.release=enabled
org.eclipse.jdt.core.compiler.source=11

@@ -6,9 +6,9 @@ Bundle-Version: 1.15.0.qualifier
Bundle-Vendor: RAYTHEON
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Eclipse-RegisterBuddy: ch.qos.logback
Require-Bundle: ch.qos.logback,
com.raytheon.uf.viz.alertview,
org.slf4j
Require-Bundle: ch.qos.logback;bundle-version="1.1.2",
com.raytheon.uf.viz.alertview;bundle-version="1.15.0",
org.slf4j;bundle-version="1.7.5"
Import-Package: org.osgi.framework;version="1.7.0",
org.osgi.util.tracker;version="1.5.1"
Bundle-ActivationPolicy: lazy

@@ -1,10 +0,0 @@
eclipse.preferences.version=1
org.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode=enabled
org.eclipse.jdt.core.compiler.codegen.targetPlatform=11
org.eclipse.jdt.core.compiler.compliance=11
org.eclipse.jdt.core.compiler.problem.assertIdentifier=error
org.eclipse.jdt.core.compiler.problem.enablePreviewFeatures=disabled
org.eclipse.jdt.core.compiler.problem.enumIdentifier=error
org.eclipse.jdt.core.compiler.problem.reportPreviewFeatures=warning
org.eclipse.jdt.core.compiler.release=enabled
org.eclipse.jdt.core.compiler.source=11

@@ -2,16 +2,15 @@ Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: AlertView
Bundle-SymbolicName: com.raytheon.uf.viz.alertview;singleton:=true
Bundle-Version: 1.18.0.qualifier
Bundle-Version: 1.15.0.qualifier
Bundle-Vendor: RAYTHEON
Bundle-RequiredExecutionEnvironment: JavaSE-11
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Export-Package: com.raytheon.uf.viz.alertview
Require-Bundle: org.eclipse.ui,
org.eclipse.core.runtime,
org.eclipse.jface.text,
org.eclipse.ui.console,
org.slf4j,
javax.xml.bind
Require-Bundle: org.eclipse.ui;bundle-version="3.8.2",
org.eclipse.core.runtime;bundle-version="3.8.0",
org.eclipse.jface.text;bundle-version="3.8.2",
org.eclipse.ui.console;bundle-version="3.5.100",
org.slf4j;bundle-version="1.7.5"
Bundle-ActivationPolicy: lazy
Bundle-ClassPath: com.raytheon.uf.viz.alertview.jar
Service-Component: OSGI-INF/*.xml

@@ -2,22 +2,22 @@ Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: AlertViz UI Plugin
Bundle-SymbolicName: com.raytheon.uf.viz.alertviz.ui;singleton:=true
Bundle-Version: 1.18.1.qualifier
Bundle-Version: 1.15.0.qualifier
Bundle-Activator: com.raytheon.uf.viz.alertviz.ui.Activator
Bundle-Vendor: Raytheon
Service-Component: OSGI-INF/alertvizService.xml
Require-Bundle: org.eclipse.ui,
org.eclipse.core.runtime,
com.raytheon.uf.common.localization,
com.raytheon.uf.common.message,
com.raytheon.uf.viz.alertviz,
org.apache.commons.lang3,
com.raytheon.viz.ui,
com.raytheon.uf.common.alertviz,
com.raytheon.uf.common.util,
com.raytheon.uf.common.python
com.raytheon.uf.common.message;bundle-version="1.11.11",
com.raytheon.uf.viz.alertviz;bundle-version="1.11.11",
org.apache.commons.lang3;bundle-version="3.4.0",
com.raytheon.viz.ui;bundle-version="1.15.3"
Bundle-ActivationPolicy: lazy
Export-Package: com.raytheon.uf.viz.alertviz.ui
Bundle-RequiredExecutionEnvironment: JavaSE-1.8
Export-Package: com.raytheon.uf.viz.alertviz.ui,
com.raytheon.uf.viz.alertviz.ui.audio,
com.raytheon.uf.viz.alertviz.ui.dialogs,
com.raytheon.uf.viz.alertviz.ui.timer
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Import-Package: com.raytheon.uf.common.alertmonitor
Bundle-ClassPath: com.raytheon.uf.viz.alertviz.ui.jar

@@ -2,6 +2,5 @@ output.com.raytheon.uf.viz.alertviz.ui.jar = bin/
bin.includes = META-INF/,\
localization/,\
com.raytheon.uf.viz.alertviz.ui.jar,\
OSGI-INF/,\
icons/
OSGI-INF/
source.com.raytheon.uf.viz.alertviz.ui.jar = src/

Binary file not shown.

Binary file not shown.

Binary file not shown.

Some files were not shown because too many files have changed in this diff.