gfesuite, admin, hostname cleanup for EDEX

mjames-upc 2019-01-29 11:59:23 -07:00
parent e82646faac
commit f9e4756e85
16 changed files with 37 additions and 1623 deletions

View file

@ -1,62 +0,0 @@
POSTGRESQL REPLICATION
----------------------
This file contains the procedure for setting up PostgreSQL replication with
a single master and one or more standby servers.
SETUP - ALL SERVERS
-------------------
On each server you must add lines to pg_hba.conf to allow remote replication
connections:
hostssl replication replication 12.34.56.0/24 cert clientcert=1
Replace "12.34.56.0/24" with the standby server IP address (or block).
Every server should have one line for every server in the replication setup,
including itself--thus all servers should have the same lines. This enables
quickly changing which server is the master without extra configuration.
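For example, a three-server setup at 12.34.56.101 through .103 (addresses
hypothetical) would carry the same three lines on every server:
hostssl replication replication 12.34.56.101/32 cert clientcert=1
hostssl replication replication 12.34.56.102/32 cert clientcert=1
hostssl replication replication 12.34.56.103/32 cert clientcert=1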
You also need to add an SSL certificate and key for the "replication" DB role.
You have to create these files yourself, except for root.crt, which is the
root certificate for the database--you can copy that one straight from
/awips2/database/ssl. Put them here:
/awips2/database/ssl/replication/$(hostname -s)/replication.crt
/awips2/database/ssl/replication/$(hostname -s)/replication.key
/awips2/database/ssl/replication/$(hostname -s)/root.crt
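If your site has no existing procedure for creating them, here is a minimal
openssl sketch; the location of the CA key (root.key) is an assumption:
  cd /awips2/database/ssl/replication/$(hostname -s)
  cp /awips2/database/ssl/root.crt .
  # request and sign a client certificate whose CN matches the role name
  openssl req -new -nodes -subj "/CN=replication" \
      -keyout replication.key -out replication.req
  openssl x509 -req -days 3650 -in replication.req \
      -CA root.crt -CAkey /awips2/database/ssl/root.key \
      -CAcreateserial -out replication.crt
  chmod 600 replication.key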
SETTING UP STANDBYS
-------------------
You must run "setup-standby.sh" on each server to turn it into a standby.
Before you run it, open the script in a text editor, and verify
the value of each variable in the Configuration section at the top of the
script.
When you run it, specify the master server as the first argument, e.g.:
$ ./setup-standby.sh dds1-ancf
The existing PostgreSQL data directory on the new standby server will be
destroyed, and it will become a replica of the master. pg_hba.conf will
be retained (i.e., will not be replaced with the copy from master).
Note that a standby can replicate from another standby--it does not have to
replicate directly from the master. This is useful for reducing WAN traffic.
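For example, to chain a new standby off an existing standby named dds2-ancf
(hostname hypothetical) instead of off the master:
  $ ./setup-standby.sh dds2-ancf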
FAILOVER
--------
To promote a standby to master:
1. Stop PostgreSQL on the master if it is running.
2. On the standby, create the file "/awips2/database/data/promote". The
standby will stop replication and recognize itself as the new master.
3. On the old master and the other standbys, run setup-standby.sh with the
hostname of the new master, to re-initialize them and start replicating
from the new master.
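A sketch of that sequence, using the paths and commands from this document
(the new-master hostname is a placeholder):
  # 1. on the old master, if PostgreSQL is running
  sudo -u awips /awips2/postgresql/bin/pg_ctl -D /awips2/database/data -m fast stop
  # 2. on the standby being promoted
  touch /awips2/database/data/promote
  # verify: returns 'f' once promotion has completed
  /awips2/psql/bin/psql -U awipsadmin -d metadata -Aqtc "select pg_is_in_recovery();"
  # 3. on the old master and the other standbys
  ./setup-standby.sh <new-master-hostname>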

View file

@ -1,48 +0,0 @@
#!/bin/bash
# This script configures a server to allow Postgres replication:
# - Creates replication user
# - Adds lines to pg_hba.conf to allow replication
#
# This must run on all servers that will replicate or be replicated. You
# only need to run this once per server.
psql="/awips2/psql/bin/psql"
db_superuser=awipsadmin
postgres_data_dir=/awips2/database/data
cleanup_exit () {
echo INFO: Cleaning up.
rm -f ${temp_hba_conf}
exit $1
}
temp_hba_conf=$(mktemp) || cleanup_exit 1
if [[ "$(id -u)" -ne 0 ]]; then
echo ERROR: You need to be root.
cleanup_exit 1
fi
echo "INFO: Creating replication role"
sudo -u awips -i "${psql}" -v ON_ERROR_STOP=1 --user="${db_superuser}" --db=metadata << EOF || cleanup_exit 1
begin transaction;
drop role if exists replication;
create role replication with replication login password 'replication';
commit transaction;
EOF
grep -Ev "replication" "${postgres_data_dir}/pg_hba.conf" > ${temp_hba_conf}
cat << EOF >> ${temp_hba_conf} || cleanup_exit 1
# replication connections
local replication replication trust
hostssl replication replication 162.0.0.0/8 cert clientcert=1
hostssl replication replication ::1/128 cert clientcert=1
EOF
echo INFO: Updating pg_hba.conf
install -T -m 600 -o awips -g fxalpha ${temp_hba_conf} "${postgres_data_dir}/pg_hba.conf" || cleanup_exit 1
echo "INFO: Finished. No errors reported."
cleanup_exit 0
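# A hypothetical post-run check (not part of this script): confirm the role
# and the pg_hba.conf entries now exist.
#   sudo -u awips -i /awips2/psql/bin/psql --user=awipsadmin --db=metadata -c '\du replication'
#   grep replication /awips2/database/data/pg_hba.conf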

View file

@ -1,282 +0,0 @@
#!/bin/bash
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
#
#
# SOFTWARE HISTORY
#
# Date Ticket# Engineer Description
# ------------ ---------- ----------- --------------------------
# Sep 22, 2016 5885 tgurney Initial creation
# Jan 4, 2017 6056 tgurney Fix tmp dir cleanup
# Mar 16, 2017 6184 tgurney Update to fetch and use SSL certificates
# Mar 29, 2017 6184 tgurney Fix SSL certificate sync from master
# Apr 4, 2017 6184 tgurney Set correct permissions on cert directory
# Apr 4, 2017 6122 tgurney Move database to /awips2/database
# + cleanup and fixes for postgres 9.5.x
# Aug 6, 2018 7431 tgurney Bug fixes. Allow starting without a
# pg_hba.conf on standby server
#
##
# Configuration ###############################################################
# Credentials
db_superuser=awipsadmin
db_rep_user=replication # for connecting to master
# Master server info
master_hostname="$1" # from command line
master_port=5432
# Local server info
this_host=$(hostname -s)
local_port=5432
data_dir=/awips2/database/data
ssl_dir=/awips2/database/ssl
tablespace_dir=/awips2/database/tablespaces
# pg_hba.conf backup location
hba_backup="$(mktemp /tmp/pg_hba.backup.XXXXXX)"
rm -f "${hba_backup}"
# For logging the output of this script
log_dir=/awips2/database/replication/logs
# Keep this many logs, delete old ones
keep_logs=5
log_file="${log_dir}/setup-standby.$(date +%Y%m%d.%H%M%S).log"
# Location of PostgreSQL install
pg_dir=/awips2/postgresql
# Location of programs
pg_basebackup=${pg_dir}/bin/pg_basebackup
pg_ctl=${pg_dir}/bin/pg_ctl
psql=/awips2/psql/bin/psql
as_awips='sudo -u awips -i '
log() {
echo $* | ${as_awips} tee -a "${log_file}"
}
###############################################################################
do_pg_ctl() {
${as_awips} "${pg_ctl}" -o \"--port=${local_port}\" -D "${data_dir}" $* >/dev/null 2>&1
return $?
}
stop_server() {
do_pg_ctl -m fast stop && sleep 1
do_pg_ctl -s status
local rc=$?
# pg_ctl exit code 3 means server is not running;
# 4 means there is no database in the data dir; both are okay here
[[ "${rc}" -eq 3 || "${rc}" -eq 4 ]]
}
make_clean_db_dirs() {
rm -rf "${data_dir}"
${as_awips} mkdir -p "${data_dir}"
${as_awips} chmod 700 "${data_dir}"
rm -rf "${tablespace_dir}"
${as_awips} mkdir -p "${tablespace_dir}"
${as_awips} chmod 700 "${tablespace_dir}"
}
restore_pg_hba() {
if [[ -f "${hba_backup}" ]]; then
log "INFO: Restoring backed up pg_hba.conf"
${as_awips} cp -a "${hba_backup}" "${data_dir}"/pg_hba.conf
${as_awips} chmod 600 "${data_dir}"/pg_hba.conf
else
log "WARN: No backed up pg_hba.conf to restore"
fi
}
cleanup_exit() {
log "ERROR: There were one or more errors; see above."
log "INFO: Cleaning up."
if stop_server; then
make_clean_db_dirs
else
# Do not delete any database files if server is running
log -n "WARNING: Postgres is still running. "
log "See ${data_dir}/pg_log/postgresql-$(date +%A).log for possible errors."
fi
restore_pg_hba
exit 1
}
# Preliminary checks ##########################################################
# Hostname arg
if [[ -z "${master_hostname}" ]]; then
echo "Usage: $(basename $0) <master-hostname>"
exit 1
fi
if [[ "$(id -u)" -ne 0 ]]; then
echo "$(basename $0): Must run as root."
exit 1
fi
# Cannot replicate self
if [[ "${master_hostname}" == "${this_host}" ||
"${master_hostname}" == "localhost" ||
"${master_hostname}" == "$(hostname)" ]]; then
echo -n "$(basename $0): ${master_hostname} cannot replicate itself. "
echo "Choose a different server name."
exit 1
fi
# Warning prompt
echo "You are about to configure this server (${this_host}) as a PostgreSQL"
echo "standby server."
echo
echo " Master: ${master_hostname}:${master_port}"
echo " Standby: ${this_host}:${local_port}"
echo " Data dir: ${data_dir}"
echo " Tablespace dir: ${tablespace_dir}"
echo
echo "All data at ${data_dir} and ${tablespace_dir} on this server "
echo "will be destroyed and this server will become a replica of "
echo "${master_hostname}."
echo
echo -n "Is this OK? Type YES in all caps: "
read answer
if [[ "${answer}" != 'YES' ]]; then
echo Canceling.
exit 1
fi
# Actually do it ##############################################################
# Make log file for script output
${as_awips} mkdir -p "${log_dir}" || exit 1
${as_awips} touch "${log_file}" || exit 1
# Purge old logs
${as_awips} find "${log_dir}"/*.log -xdev \
| sort \
| head -n -${keep_logs} \
| tr '\n' '\0' \
| sudo xargs -0r rm
log "INFO: Starting replication setup on ${this_host}:${local_port}"
log "INFO: Will replicate ${master_hostname}:${master_port}"
stop_server || exit 1
trap 'cleanup_exit' SIGINT SIGTERM
# Get certificates from master
master_ssl_dir="${ssl_dir}/replication/${master_hostname}"
${as_awips} mkdir -p "${master_ssl_dir}"
log "INFO: Downloading SSL certs and keyfile from ${master_hostname}"
# must ssh as root to skip password prompt
rsync --delete-before -av -e ssh \
"${master_hostname}":"${master_ssl_dir}"/{replication.crt,replication.key,root.crt} \
"${master_ssl_dir}" || exit 1
chown -R awips:fxalpha "${ssl_dir}"/replication
find "${ssl_dir}"/replication -xdev -type f -exec chmod 600 {} \;
find "${ssl_dir}"/replication -xdev -type d -exec chmod 700 {} \;
# Backup pg_hba.conf
if [[ -f "${data_dir}/pg_hba.conf" ]]; then
${as_awips} cp -a "${data_dir}/pg_hba.conf" "${hba_backup}" || cleanup_exit
log "INFO: Backup of pg_hba.conf is at ${hba_backup}"
else
log "WARN: Cannot backup local ${data_dir}/pg_hba.conf because it does not exist. Continuing anyway."
fi
# Delete all database files
make_clean_db_dirs
# SSL connection string parts
# needed for basebackup and recovery.conf
sslmode_part="sslmode=verify-ca"
sslcert_part="sslcert=${master_ssl_dir}/replication.crt"
sslkey_part="sslkey=${master_ssl_dir}/replication.key"
sslrootcert_part="sslrootcert=${master_ssl_dir}/root.crt"
ssl_part="${sslmode_part} ${sslcert_part} ${sslkey_part} ${sslrootcert_part}"
log "INFO: Retrieving base backup from ${master_hostname}"
log "Enter the password for the '${db_rep_user}' role now, if prompted."
${as_awips} "${pg_basebackup}" \
--host="${master_hostname}" \
--verbose --progress \
--xlog-method=stream \
--username="${db_rep_user}" \
--port=${master_port} \
--db="${ssl_part}" \
-D "${data_dir}" 2>&1 | tee -a ${log_file}
if [[ "${PIPESTATUS[0]}" != "0" ]]; then
cleanup_exit
fi
# Write recovery.conf
host_part="host=${master_hostname}"
port_part="port=${master_port}"
user_part="user=${db_rep_user}"
primary_conninfo="${host_part} ${port_part} ${user_part} ${ssl_part}"
log "INFO: Writing ${data_dir}/recovery.conf"
rm -f "${data_dir}/recovery.conf"
${as_awips} touch "${data_dir}"/recovery.conf
cat >> "${data_dir}/recovery.conf" << EOF || cleanup_exit
standby_mode='on'
primary_conninfo='${primary_conninfo}'
recovery_target_timeline='latest'
trigger_file='${data_dir}/promote'
EOF
# Remove recovery.done if it exists
rm -f "${data_dir}/recovery.done"
# Install backed up pg_hba.conf
restore_pg_hba
# Start it up and run test query
log "INFO: Starting PostgreSQL"
do_pg_ctl start -w || cleanup_exit
log "INFO: Testing read-only connection to standby"
is_recovery=$(${as_awips} "${psql}" \
-U "${db_superuser}" \
--port=${local_port} \
--db=metadata \
-Aqtc "select pg_is_in_recovery();")
if [[ "${is_recovery}" != "t" ]]; then
log "ERROR: It looks like this server failed to start up properly, or is"
log "ERROR: not in recovery mode."
cleanup_exit
fi
rm -f "${hba_backup}"
log "INFO: Setup is complete. No errors reported."

View file

@ -1,136 +0,0 @@
output_dir=output
db_users=(awips awipsadmin pguser postgres)
unpriv_db_users=(awips pguser)
dn_attrs='/C=US/ST=Maryland/L=Silver Spring/O=Raytheon/OU=AWIPS'
validity_days=$((30 * 365))
site=${SITE_IDENTIFIER}
if test -n "$site"; then
ca_cn=dx1-$site
# Clients connect with "dx1f" and not the full host name
db_server_host=dx1f
fi
usage() {
echo "$(basename "$0") [options]"
echo " Generates self-signed certificates for testing PostgreSQL certificate-based authentication."
echo ""
echo " Files are written to {current directory}/output:"
echo " root.{crt,key,srl} - root cert, key, and serial file"
echo " server.{crt,key} - database cert and key"
echo " {db user}.{crt,key,pk8} - database user crt, key, and DER-formatted key"
echo " server.tgz - bundle for database server (Extract in /awips2/database/ssl on dx1f.)"
echo " awips.tgz - bundle for admin users with .postgresql/ prefix (Extract in /home/awips/ .)"
echo " user.tgz - bundle for app users with .postgresql/ prefix (Extract in /home/{user}/ .)"
echo " alldb.tgz - contains all db user (including admin) certs/keys with no directory prefix"
echo ""
echo " Options:"
echo " -c {name} Specify the root certificate CN [dx1-\$SITE_IDENTIFIER]"
echo " -d {name} Specify the database server host name [dx1f-\$SITE_IDENTIFIER]"
echo " -D {attrs} Distinguished name attributes"
echo " -h Display usage"
echo " -s Use current host name for CA and database server CNs"
}
while test "$#" -gt 0; do
case "$1" in
-c) ca_cn=$2 ; shift ;;
-d) db_server_host=$2 ; shift ;;
-D) dn_attrs=$2 ; shift ;;
-s)
ca_cn=$(hostname)
db_server_host=$ca_cn
;;
-h) usage ; exit 0 ;;
*) usage ; exit 1 ;;
esac
shift
done
if [[ -z $ca_cn || -z $db_server_host ]]; then
echo "error: CA and database host CNs not defined. (\$SITE_IDENTIFIER not set? Maybe use -s option?)" >&2
exit 1
fi
# # For testing
# set -x
gen_self_signed() {
local name=$1
local subj=$2
test -n "$name" && test -n "$subj" || { echo "self_sign: need base name and subject" >&2 ; return 1 ; }
: \
&& openssl req -new -subj "$subj" -out "$name".req -nodes -keyout "$name".key \
&& openssl req -x509 -days "$validity_days" -in "$name".req -key "$name".key -out "$name".crt \
&& chmod g=,o= "$name".key \
|| return 1
}
gen_cert() {
local name=$1 subj=$2 ca=$3
test -n "$name" && test -n "$subj" && test -n "$ca" || { echo "cert: need base name, subject, and CA" >&2 ; return 1 ; }
local serial_fn=${ca}.srl
local serial_opt=""
if test ! -e "$serial_fn"; then
serial_opt=-CAcreateserial
fi
: \
&& openssl req -new -subj "$subj" -newkey rsa:2048 -nodes -keyout "$name".key -out "$name".req \
&& openssl x509 -days "$validity_days" -req -in "$name".req \
-CA "$ca".crt -CAkey "$ca".key $serial_opt -out "$name".crt \
&& openssl pkcs8 -nocrypt -in "$name".key -topk8 -outform der -out "$name".pk8 \
&& chmod g=,o= "$name".key "$name".pk8 \
|| return 1
}
awips_og=(--owner=awips --group=fxalpha)
gen_dot_postgresql() {
local dest=$1
shift
rm -rf gtc0
local dotpg=gtc0/.postgresql
mkdir -p "$dotpg" \
&& chmod a=,u=rwx "$dotpg" \
&& cp -p "$@" "$dotpg" \
&& ln -s awips.crt "$dotpg"/postgresql.crt \
&& ln -s awips.key "$dotpg"/postgresql.key \
&& ln -s awips.pk8 "$dotpg"/postgresql.pk8 \
&& tar czf "$dest" -C gtc0 "${awips_og[@]}" .postgresql \
&& rm -rf gtc0 \
&& return 0
}
cred_files() {
local output=''
for name in "$@"; do
output="$output "$(echo $name.{crt,key,pk8})
done
echo "$output"
}
dn() {
echo "$dn_attrs/CN=$1"
}
if test -e "$output_dir"; then
echo "$output_dir: already exists" >&2
exit 1
fi
mkdir -p "$output_dir" \
&& chmod a=,u=rwx "$output_dir" \
&& cd "$output_dir" || exit 1
gen_self_signed root "$(dn "$ca_cn")" || exit 1
gen_cert server "$(dn "$db_server_host")" root || exit 1
for user in "${db_users[@]}"; do
gen_cert "$user" "$(dn "$user")" root || exit 1
done
tar czf server.tgz "${awips_og[@]}" root.crt server.{crt,key} \
&& tar czf alldb.tgz "${awips_og[@]}" root.crt $(cred_files "${db_users[@]}") \
&& gen_dot_postgresql awips.tgz root.crt $(cred_files "${db_users[@]}") \
&& gen_dot_postgresql user.tgz root.crt $(cred_files "${unpriv_db_users[@]}") \
|| exit 1
echo
echo "All credential files and archives created successfully."

View file

@ -1,22 +1,4 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
# ----------------------------------------------------------------------------
# This software is in the public domain, furnished "as is", without technical
# support, and with no warranty, express or implied, as to its usefulness for
@ -42,12 +24,6 @@
#
##
##
# This is a base file that is not intended to be overridden.
##
from xml.etree import ElementTree
from xml.etree.ElementTree import Element, SubElement
import socket
@ -584,16 +560,10 @@ class IrtAccess():
servers.append(info)
matchingServers.append(info)
# server search list in priority. The px3 entries are used for
# dual domain for AFC.
hp = [('dx4','98000000'),('px3', '98000000'), ('dx4','98000001'),
('px3', '98000001')]
hp = [('localhost','98000000')]
if findBestMatch:
# choose one server from this domain, find first dx4, 98000000
# try to use one with the same mhsidDest as the site, which
# would be the primary operational GFE. Note that the px3 entries
# are for AFC.
found = False
for matchServer, matchPort in hp:
if found:
@ -607,8 +577,6 @@ class IrtAccess():
found = True
break
# find first dx4, 98000000, but perhaps a different mhsid
# this is probably not the primary operational GFE
for matchServer, matchPort in hp:
if found:
break

View file

@ -124,7 +124,7 @@ def execIscDataRec(MSGID, SUBJECT, FILES):
logException("Malformed XML received")
return
#no XML destination information. Default to dx4f,px3 98000000, 98000001
#no XML destination information. Default to localhost 98000000
else:
# create a xml element tree to replace the missing one. This will
# occur when OB8.2 sites send ISC data to OB8.3 sites, and also when
@ -133,17 +133,17 @@ def execIscDataRec(MSGID, SUBJECT, FILES):
# This will cause log errors until everyone is on OB8.3.
iscE = Element('isc')
destinationsE = SubElement(iscE, 'destinations')
for x in xrange(98000000, 98000002):
for shost in ['dx4f', 'px3f']:
addressE = SubElement(destinationsE, 'address')
serverE = SubElement(addressE, 'server')
serverE.text = shost
portE = SubElement(addressE, 'port')
portE.text = str(x)
protocolE = SubElement(addressE, 'protocol')
protocolE.text = "20070723" #match this from IFPProtocol.C
mhsE = SubElement(addressE, 'mhsid')
mhsE.text = siteConfig.GFESUITE_MHSID
x = 98000000
shost = 'localhost'
addressE = SubElement(destinationsE, 'address')
serverE = SubElement(addressE, 'server')
serverE.text = shost
portE = SubElement(addressE, 'port')
portE.text = str(x)
protocolE = SubElement(addressE, 'protocol')
protocolE.text = "20070723" #match this from IFPProtocol.C
mhsE = SubElement(addressE, 'mhsid')
mhsE.text = siteConfig.GFESUITE_MHSID
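# For reference, the fallback element tree built above serializes to roughly
# (mhsid is the site-dependent GFESUITE_MHSID value):
#   <isc><destinations><address><server>localhost</server><port>98000000</port>
#   <protocol>20070723</protocol><mhsid>...</mhsid></address></destinations></isc>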
irt = IrtAccess.IrtAccess("")

View file

@ -287,8 +287,8 @@ damcrest_res_dir : $(whfs_config_dir)/damcrest
#===================== SHEFDECODE Application Tokens ================================
shefdecode_userid : oper # controlling UNIX user
shefdecode_host : dx1f # controlling UNIX system.
shefdecode_userid : awips # controlling UNIX user
shefdecode_host : localhost # controlling UNIX system.
shefdecode_dir : $(apps_dir)/shefdecode # main directory location
shefdecode_bin : $(shefdecode_dir)/bin # executable programs location
shefdecode_input : $(shefdecode_dir)/input # SHEF parameter file location
@ -403,13 +403,13 @@ sws_home_dir : $(whfs_bin_dir)/pa # SWS dir
# The Gage Precip Processor tokens
# -----------------------------------------------------------------
gage_pp_userid : oper # controlling UNIX user
gage_pp_host : dx # controlling UNIX system
gage_pp_userid : awips # controlling UNIX user
gage_pp_host : localhost # controlling UNIX system
gage_pp_data : $(pproc_local_data)/gpp_input # input data files location
gage_pp_log : $(pproc_log)/gage_pp # daily log files location
gage_pp_sleep : 10 # sleep duration in seconds in between queries
gage_pp_enable : ON # gpp enabled; shef uses to determine post
shef_post_precip : OFF # post to Precip/CurPrecip tables
shef_post_precip : ON # post to Precip/CurPrecip tables
build_hourly_enable : ON # Enable the build_hourly application
# ----------------------------------------------------------------
@ -628,7 +628,6 @@ ens_files : $(ens_dir)/files/$(ofs_level)
ens_scripts : $(ens_dir)/scripts
# ens_pre tokens
##FXA_HOME : /px1data #taken out by kwz.2/11/04
enspre_griddb : $(FXA_DATA)/Grid/SBN/netCDF/CONUS211/CPCoutlook
ens_log_dir : $(ens_output)/$(ofs_level)
ens_msglog_level : 5
@ -1589,17 +1588,6 @@ shape_data_dir : $(apps_dir)/ffmpShapeData # Directory holding shape
# files acting as data files
#================== send_rfc Apps_defaults Tokens - 3/08/2001 =================
send_rfc_dir : $(apps_dir)/rfc/send_rfc
send_rfc_input_dir : $(send_rfc_dir)/data/send
send_rfc_id : WWW
send_hardcopy_nnn : PRI-WRK-EDI-SNO-ADM-RVF
send_rfc_hardcopy : $(send_rfc_dir)/data/sbnprods
send_rfc_hpc : 0
send_rfc_host : ds-www
send_rfc_alternate : 0
# ================== end of send_rfc tokens ====================================
#================== verify Apps_defaults Tokens - 08/03/2001 ==================
# defaults for program verify
vsys_output : $(vsys_dir)/output #location of output files
@ -1819,7 +1807,7 @@ sshp_ingest_xml_dir : $(local_data_sshp_dir)/ingest_xml
sshp_incoming_dir : $(local_data_sshp_dir)/incoming
sshp_outgoing_dir : $(local_data_sshp_dir)/outgoing
sshp_log_dir : $(whfs_log_dir)/sshp
sshp_java_process_host : px1f
sshp_java_process_host : localhost
sshp_invoke_map_preprocess: ON
sshp_map_qpe_to_use : MIXED # choices are: MIXED, LOCAL_BEST_ONLY, RFC_ONLY
sshp_fcst_ts : FZ # SSHP type-source code for generated forecasts

View file

@ -6,20 +6,8 @@ edexGrepString="edex.run.mode="
xorgLogPath="/var/log"
# the remote servers to grab top on. Use to get general state of servers
if [ ! -z "${DX_SERVERS}" ]; then
REMOTE_SERVERS_TO_CHECK="${DX_SERVERS}"
else
REMOTE_SERVERS_TO_CHECK="dx1f dx2f dx3 dx4"
fi
if [ ! -z "${PX_SERVERS}" ]; then
REMOTE_SERVERS_TO_CHECK="${REMOTE_SERVERS_TO_CHECK} ${PX_SERVERS}"
else
REMOTE_SERVERS_TO_CHECK="${REMOTE_SERVERS_TO_CHECK} px1 px2"
fi
# the database host to grab current running queries for
DATABASE_HOST="dx1f"
DATABASE_HOST="localhost"
# Flags to control what data capture grabs, to enable flag must be YES, anything else will be considered off.
RUN_JSTACK="Y"
@ -51,9 +39,6 @@ usage() {
echo -e "-quick"
echo " Turns off jmap and reduces jstack iterations to 5"
echo
echo -e "-c \"{host names}\"\tdefault [$REMOTE_SERVERS_TO_CHECK]"
echo " The servers to grab top information from, make sure list is quoted and space delimited"
echo
echo -e "-d {y/n}\t\tdefault [$RUN_JMAP]"
echo " Run jmap to grab the heap dump information"
echo
@ -179,45 +164,6 @@ grabXorgLog() {
fi
}
# runs ssh command to grab top on a remote server, requires auto login to be setup
grabRemoteTop() {
if [ "$GRAB_REMOTE_TOP" == "y" ]; then
echo "Capturing top on remote servers"
for server in ${REMOTE_SERVERS_TO_CHECK};
do
t1=`date "+%Y%m%d %H:%M:%S"`
echo "${t1}: Capturing top for $server" >> $processFile
out_file="${dataPath}/top_$server.log"
ssh $server "sh -c 'export COLUMNS=160; top -b -c -n1' " >> $out_file 2>&1 &
done
fi
}
# runs ssh command to grab vmstat on a remote server, requires auto login to be setup
grabRemoteVmstat() {
if [ "$GRAB_REMOTE_VMSTAT" == "y" ]; then
echo "Capturing vmstat on remote servers"
for server in ${REMOTE_SERVERS_TO_CHECK};
do
t1=`date "+%Y%m%d %H:%M:%S"`
echo "${t1}: Capturing vmstat for $server" >> $processFile
out_file="${dataPath}/vmstat_$server.log"
ssh $server "sh -c 'vmstat -w 1 5' " >> $out_file 2>&1 &
done
fi
}
grabCurrentDatabaseQueries() {
if [ "$GRAB_CURRENT_QUERIES" == "y" ]; then
echo "Capturing current database queries"
t1=`date "+%Y%m%d %H:%M:%S"`
echo "${t1}: Capturing current database queries" >> $processFile
out_file="${dataPath}/database_queries.log"
echo "dx1f:5432:metadata:awips:awips" > ~/.pgpass; chmod 600 ~/.pgpass
psql -d metadata -U awips -h ${DATABASE_HOST} -c "select datname, pid, client_addr, query, now()-xact_start as runningTime from pg_stat_activity where state != 'idle' order by runningTime desc;" >> $out_file 2>&1 &
fi
}
checkForProcsAsOtherUsers() {
if [ ! -z "$procs" ]; then
numMyProcs=`echo "$myProcs" | wc -l`
@ -474,7 +420,6 @@ while [ ! -z "$1" ]; do
-p) cavePid="$1"; shift 1;;
-q) RUN_QPID_STAT="$1"; shift 1;;
-Q) GRAB_CURRENT_QUERIES="$1"; shift 1;;
-r) REMOTE_SERVERS_TO_CHECK="$1"; shift 1;;
-s) RUN_JSTACK="$1"; shift 1;;
-screen) GRAB_SCREENSHOT="$1"; shift 1;;
-t) GRAB_REMOTE_TOP="$1"; shift 1;;
@ -651,15 +596,6 @@ fi
# grab Xorg logs
grabXorgLog
# grab top for servers
grabRemoteTop
# grab vm stat for servers
grabRemoteVmstat
# grab current database queries
grabCurrentDatabaseQueries
# grab screen shot, spawns background process for each screen
grabScreenShot

View file

@ -1,402 +0,0 @@
#!/bin/sh
################################################################################
# #
# Program name: rsyncGridsToCWF.sh #
# Version: 4.0 #
# Language (Perl, C-shell, etc.): bash #
# #
# Authors: Virgil Middendorf (BYZ), Steve Sigler (MSO) #
# Contributors: Ahmad Garabi, Ken Sargeant, Dave Pike, Dave Rosenberg,        #
# Tim Barker, Maureen Ballard, Jay Smith, Dave Tomalak, #
# Evelyn Bersack, Juliya Dynina, Jianning Zeng, John McPherson #
# #
# Date of last revision: 08/18/17 #
# #
# Script description: This script can create a netcdf file containing IFPS #
# quality controlled grids. #
# Quality Control involves iscMosaic'ing the contents of the netcdf file #
# into the Restore database in GFE. If a failure is detected for any of the #
# grids, then the forecasters will get a red-banner alarm and the script #
# will recreate the netcdf file. #
# #
# To upload netcdf files use the following arguments: #
# ./rsyncGridsToCWF.sh wfo #
# (where wfo is the three character wfo id) #
# #
# This script is designed to work in service backup situations. This script #
# is launched from the Scripts... menu in GFE and it will work in both #
# operational and service backup situations. #
# #
# Directory program runs from: /awips2/GFESuite/bin #
# #
# Needed configuration on ls2/3: For each wfo that you run this script for, #
# you will need a #
# /awips2/GFESuite/data/grid/<wfo> #
# directory. (where wfo is the 3 character #
# wfo id) #
# #
# Revision History: #
# 02/12/07: Created Script to rsync grids to CRH from ls1. vtm #
# 03/22/07: Added rsync to gridiron and Steve's QC methodology. vtm #
# 03/26/07: Changed iscMosaic so output works in remote ssh/background. sjs #
# 04/03/07: Added code to change permissions. vtm #
# 04/03/07: Added bwlimit and timeout switches to rsync call. vtm #
# 04/03/07: Made parmlist easier to configure? vtm #
# 04/05/07: Added a check to see if netcdf file made it to the WRH farm. vtm #
# 04/05/07: Added red-banner alarm if netcdf did not make it to WRH. vtm #
# 04/05/07: Added mask setting in config section. vtm #
# 04/20/07: Fixed missing mask setting for second ifpnetCDF call. vtm #
# 04/20/07: Added -D 0.0 option to speed up iscMosaic. vtm #
# 04/20/07: Changed iscMosaic database from Restore to Test_Fcst. vtm #
# 04/23/07: Took out parmlist parameter from ifpnetCDF. vtm #
# 04/23/07: Added use of a backup rsync server. vtm #
# 04/25/07: Added red-banner notifying forecaster that ls1 is down. vtm #
# 04/25/07: Added red-banner notifying forecaster that netcdf files have been #
# rsynced. vtm #
# 05/03/07: Added functionally to allow a limited set of parms to be sent by #
# the primary site, while all parms sent for backup sites. vtm #
# 05/03/07: Added publish to official check for service backup. vtm #
# 05/03/07: Now rsync file to WR webfarm first. vtm #
# 05/07/07: Guardian issue fixed. Switched Reminder to Announcer. vtm #
# 05/20/07: Added switch to turn off sending grids to the consolidated web #
# farm. vtm #
# 06/04/07: Added code to make the number of times to attempt netcdf file #
# creation configurable. Baseline 3 times. vtm #
# 06/04/07: Script now quality controls netcdf files created in attempts #2 #
# and beyond. If the third attempt is bad, then script quits. vtm #
# 06/05/07: Added code to remove the QC logfile after each attempt, otherwise #
# the script will never send the netcdf file in the 2nd and 3rd #
# attempts. vtm #
# 06/05/07: Added code to notify forecaster that grids passed QC check and #
# included a switch to turn this notification off. vtm #
# 06/06/07: Added code to remove the netcdf file if it is too small. vtm #
# 06/06/07: Changed the name of the netcdf files so they include the process #
# id in event the script is launched again before the first launch #
# is completed. vtm #
# 06/06/07: Added a check to ensure rsync is not already running to ensure #
# multiple script launches do not conflict with each other. vtm #
# 06/07/07: Fixed the rsync already running check. vtm #
# 06/08/07: iscMosaic error files were not being deleted fixed. vtm #
# 06/11/07: Corrected name of file sent to the consolidated web farm. vtm #
# 06/14/07: Sent "sendCWFgrids" to "no" per request. vtm #
# 06/19/07: When grids are not sent to the consolidated web farm, the backed #
# up zip file was not deleted and filling up ls1. Fixed. vtm #
# 06/27/07: Eliminated a "file does not exist" error message. vtm             #
# 08/13/07: Increased iscMosaic delay to 1.0 and make configurable. vtm #
# 12/05/07: Added test compressed netcdf file send to WR. vtm #
# 02/18/08: Switched to use ls2/ls3. Added code to turn off send to regional #
# web farms. vtm #
# 02/22/08: Removed WR specific code from the script. vtm #
# 02/25/08: Adjusted list of parms based on list provided to me by Mark #
# Mitchell. vtm #
# 02/25/08: creationAttempts config variable wasn't used by the script. vtm #
# 02/25/08: Added QCnetCDF config variable to allow offices to bypass long #
# netcdf file QC process. vtm #
# 02/25/08: Added a ls1 section in the configuration section. vtm #
# 05/14/08: Added audio for Guardian "ANNOUNCER 0 LOCAL". Any bad message #
# will get sound with it. vtm #
# 05/14/08: If WR grids did not update, then script will echo out a message #
# for a log file. vtm #
# 05/15/08: Added code to change time stamps of netcdf files on ls1. vtm #
# 05/15/08: Moved AWIPS file clean up to the end. Directory wasn't being #
# cleaned out properly. Also included the cdl files to the list. vtm#
# 05/15/08: Added code to clean orphaned files created by this script on AWIPS#
# and on the local rsync server. vtm #
# 05/16/08: Added check to see if netcdf file made it to the consolidated web #
# farm. If not, then notify forecast via red banner. vtm #
# 05/16/08: Added switch to turn off sending grids to the WR web farm. vtm #
# 05/16/08: Eliminated the variables for the rsync, gunzip, and chmod #
# commands. vtm #
# 05/16/08: Added configuration for email notifications. vtm #
# 05/16/08: Added configuration for Guardian alert level number when grid #
# creation or transfer problems occur. vtm #
# 05/21/08: Added the --address= switch to the rysnc commands in an attempt #
# resolve rsync issues after switch to NOAANet. ls cluster only. vtm#
# 05/21/08: If rsync fails to move the file to the server, then the script #
# will now retry 4 more times. vtm #
# 05/22/08: Moved relivent code from the WR version into this script. vtm #
# 05/28/08: Fixed Bug removing CurrentFcst.?????.site.cdf file. vtm #
# 05/28/08: CWF netcdf file availability check can now be turned off. vtm #
# 05/28/08: Added flag so all banner messages can be turned off. vtm #
# 05/28/08: Added the CWFcheckWait configuration. vtm #
# 05/29/08: Bug in backup file time stamp touch. Touched vtm.opt.cdf instead #
# vtm.opt.cdf.gz. vtm #
# 06/11/08: Fixed bug. If rsync fails more than 5 times, the script was #
# supposed to stop. Instead it kept trying. vtm #
# 06/19/08: Changed the directory on AWIPS where the netcdf file is created. #
# Now using a drive local to machine for performance reasons. #
# New directory is now /awips2/fxa/netcdf. This directory will be #
# created by the script if it doesn't exist. vtm #
# 06/19/08: Changed script to remove AWIPS netcdf and log files sooner. vtm #
# 07/10/08: Made the /awips2/fxa/netcdf dir configurable for DX and LX machines.#
# vtm #
# 07/11/08: Pointed most all of script feedback to a log file that can be #
# found in the /awips/GFESuite/primary/data/logfiles/yyyymmdd/ #
# directory called netcdf_rsync.log. (LX workstations put the file #
# into the /awips/GFESuite/data/logfiles/yyyymmdd directory.) vtm #
# 07/07/11: Put in code to limit the time period of the netcdf file. vtm #
# 04/16/12: Added a little error checking for work directory and replaced #
# the hard-coded path to /awips2/fxa/bin with $FXA_BIN. Removed #
# awips1 code. #
# 11/02/12: Restored error checking for AWIPS2. #
# 04/22/13: Update the permission of the log directories. #
# 02/24/14: Create the file if rsync_parms.${site} is not available, #
# and mkdir the site directory on the local rsync server if it #
# does not exist. #
# *** Version 4.0 *** #
# 08/18/17: Moved script to new /awips2/GFESuite/rsyncGridsToCWF/bin loc. vtm #
# 08/18/17: Changed directory structure of app in configuration section. #
# Replaced DXwrkDir and WRKDIR with PROGRAM_ equivalents. vtm #
# 08/18/17: Added code to purge log directory after 14 days. vtm #
# 08/18/17: Reworked rsync check. DCS 17527. vtm #
# 07/23/18: Removed all rsync and ssh commands #
################################################################################
# check to see if site id was passed as argument
# if not then exit from the script
if [ $# -lt 1 ] ;then
echo Invalid number of arguments.
echo Script stopped.
echo ./rsyncGridsToCWF.sh wfo
exit
else
SITE=$(echo ${1} | tr '[a-z]' '[A-Z]')
site=$(echo ${1} | tr '[A-Z]' '[a-z]')
fi
################################################################################
# Configuration Section #
################################################################################
GFESUITE_BIN="/awips2/GFESuite/bin"
PROGRAM_DIR="/awips2/GFESuite/rsyncGridsToCWF"
PROGRAM_BIN="${PROGRAM_DIR}/bin"
PROGRAM_ETC="${PROGRAM_DIR}/etc"
PROGRAM_CONFIG="${PROGRAM_DIR}/config"
PROGRAM_LOG="${PROGRAM_DIR}/log"
PROGRAM_DATA="${PROGRAM_DIR}/data"
################################################################################
# End of Configuration Section #
################################################################################
# Source in rsync_parms file for site. Copy in old site or baseline version if missing.
OLD_IFPS_DATA="/awips2/GFESuite/ServiceBackup/data"
if [ ! -f ${PROGRAM_ETC}/rsync_parms.${site} ] ;then
# rsync_parms for site does not exist. Check if exists in old directory and if so, copy over.
if [ -f ${OLD_IFPS_DATA}/rsync_parms.${site} ] ;then
cp ${OLD_IFPS_DATA}/rsync_parms.${site} ${PROGRAM_ETC}/rsync_parms.${site}
# rsync_parms not in old directory so get from baseline config directory
else
cp ${PROGRAM_CONFIG}/rsync_parms.ccc ${PROGRAM_ETC}/rsync_parms.${site}
fi
fi
. ${PROGRAM_ETC}/rsync_parms.${site}
################################################################################
# set current data and log file names
currdate=$(date -u +%Y%m%d)
export LOG_FILE="${PROGRAM_LOG}/${currdate}/netcdf_rsync.log"
# check to see if log directory structure exists.
if [ ! -d ${PROGRAM_LOG} ] ;then
mkdir -p ${PROGRAM_LOG}
chmod 777 ${PROGRAM_LOG}
chown awips:fxalpha ${PROGRAM_LOG}
fi
if [ ! -d ${PROGRAM_LOG}/${currdate} ] ;then
mkdir -p ${PROGRAM_LOG}/${currdate}
chmod 777 ${PROGRAM_LOG}/${currdate}
chown awips:fxalpha ${PROGRAM_LOG}/${currdate}
fi
# Purge old log files
find ${PROGRAM_LOG}/. -mtime +14 -exec rm {} -Rf \;
# Log file header
echo " " >> $LOG_FILE
echo "####################################################################################" >> $LOG_FILE
echo "# Starting Grid Rsync Script.... #" >> $LOG_FILE
echo "####################################################################################" >> $LOG_FILE
chmod 666 $LOG_FILE
# Check to see of the ${PROGRAM_DATA} directory exists. If not, then create.
echo making sure that ${PROGRAM_DATA} exists at $(date) >> $LOG_FILE
if [ ! -d ${PROGRAM_DATA} ] ;then
echo " ${PROGRAM_DATA} directory not found." >> $LOG_FILE
echo " making ${PROGRAM_DATA} directory..." >> $LOG_FILE
mkdir -p ${PROGRAM_DATA}
echo " changing permissions of ${PROGRAM_DATA} directory..." >> $LOG_FILE
chmod 777 ${PROGRAM_DATA}
echo " changing ownership of ${PROGRAM_DATA} directory to awips..." >> $LOG_FILE
chown awips:fxalpha ${PROGRAM_DATA}
else
echo " ${PROGRAM_DATA} directory exists!" >> $LOG_FILE
fi
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
# Clean up files older than 60 minutes in the ${PROGRAM_DATA} directory.
echo cleaning up files older than 60 minutes in the ${PROGRAM_DATA} directory at $(date) >> $LOG_FILE
find ${PROGRAM_DATA}/. -type f -mmin +60 -exec rm {} -f \;
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
# sending to log file parmlist and mask settings
if [ "$parmlist" != "" ] && [ "$parmlist" != " " ]; then
echo "Will trim elements to $parmlist" >> $LOG_FILE
else
echo "Will send all elements" >> $LOG_FILE
fi
echo "Using grid domain $mask" >> $LOG_FILE
# Determine the ifpnetCDF start and end times.
start_time=$(date +%Y%m%d_%H00 -d "6 hours ago")
end_time=$(date +%Y%m%d_%H00 -d "192 hours")
cdfTimeRange="-s ${start_time} -e ${end_time} "
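# e.g., run at 11:00 on 2019-01-29 this yields: -s 20190129_0500 -e 20190206_1100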
# In this while loop, the netcdf file will be created and quality controlled.
# The script will attempt to create the netcdf file three times before failing.
creationAttemptCount=1
badGridFlag=1
while (( ( $creationAttemptCount <= $creationAttempts ) && ( $badGridFlag == 1 ) ))
do
# create the netcdf file
echo starting netcdf file creation...attempt number ${creationAttemptCount} at $(date) >> $LOG_FILE
echo " " >> $LOG_FILE
${GFESUITE_BIN}/ifpnetCDF -t -g -o ${PROGRAM_DATA}/CurrentFcst.$$.${site}.cdf -h $CDSHOST -d ${SITE}_GRID__Official_00000000_0000 -m $mask $cdfTimeRange $parmlist >> $LOG_FILE 2>&1
# Check to see if netcdf file is big enough. In service backup, publish to official may have been forgotten.
filesize=$(ls -l ${PROGRAM_DATA}/CurrentFcst.$$.${site}.cdf | awk '{print $5}')
if (( filesize < 1000000 )) ;then
echo $filesize >> $LOG_FILE
if [[ $turnOffAllNotifications == "no" ]] ;then
${GFESUITE_BIN}/sendGfeMessage -h ${CDSHOST} -c NDFD -m "${SITE} netcdf file determined to be incomplete and not sent to webfarms. Did you publish to official? Is EDEX down?" -s
fi
rm -f ${PROGRAM_DATA}/CurrentFcst.$$.${site}.cdf
echo netcdf file is too small. Either the Official database is empty OR EDEX is down. >> $LOG_FILE
echo Script stopped. >> $LOG_FILE
exit
fi
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
##############################################
# STOP HERE RIGHT NOW
##############################################
if [[ $QCnetCDF == "yes" ]] ;then
#Check netcdf file for errors.
echo started netcdf file checking at $(date) >> $LOG_FILE
${GFESUITE_BIN}/iscMosaic -h $CDSHOST $parmlist -f ${PROGRAM_DATA}/CurrentFcst.$$.${site}.cdf -d ${SITE}_GRID_Test_Fcst_00000000_0000 -D $iscMosaicDelay
if [[ $? > 0 ]] ;then
if [[ $creationAttemptCount == $creationAttempts ]] ;then
if [[ $turnOffAllNotifications == "no" ]] ;then
${GFESUITE_BIN}/sendGfeMessage -h ${CDSHOST} -c NDFD -m "Errors detected in ${SITE} netcdf file again and not sent to webfarms. Send Grids Manually." -s
fi
echo "Errors detected in ${SITE} netcdf file again and not sent to webfarms. Script stopped." >> $LOG_FILE
exit
else
if [[ $turnOffAllNotifications == "no" ]] ;then
${GFESUITE_BIN}/sendGfeMessage -h ${CDSHOST} -c NDFD -m "Errors detected in ${SITE} netcdf file again. Regenerating netcdf file attempt # ${creationAttemptCount}." -s
fi
echo "Errors detected in ${SITE} netcdf file. Regenerating netcdf file." >> $LOG_FILE
fi
rm -f ${PROGRAM_DATA}/CurrentFcst.$$.${site}.cdf
(( creationAttemptCount = $creationAttemptCount + 1 ))
else
echo The netcdf file appears to be good. >> $LOG_FILE
badGridFlag=0
fi
else
echo netcdf file checking bypassed at $(date) >> $LOG_FILE
badGridFlag=0
fi
done
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
# create the optimized netcdf file
echo creating optimized netcdf file at $(date) >> $LOG_FILE
cp ${PROGRAM_DATA}/CurrentFcst.$$.${site}.cdf ${PROGRAM_DATA}/CurrentFcst.$$.${site}.opt.cdf >> $LOG_FILE 2>&1
${PROGRAM_BIN}/convert_netcdf.pl ${PROGRAM_DATA}/CurrentFcst.$$.${site}.cdf ${PROGRAM_DATA}/CurrentFcst.$$.${site}.opt.cdf >> $LOG_FILE 2>&1
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
# check space used by the process of creating the opt netcdf file in the netcdf directory
echo "space used in ${PROGRAM_DATA}: $(cd ${PROGRAM_DATA}; du -m --max-depth=1) mb" >> $LOG_FILE
echo ...finished >> $LOG_FILE
echo " " >> $LOG_FILE
# cleaning up files on AWIPS created by the optimizing process.
echo cleaning up files on AWIPS created by the optimizing process at $(date) >> $LOG_FILE
rm -f ${PROGRAM_DATA}/CurrentFcst.$$.${site}.cdf >> $LOG_FILE 2>&1
rm -f ${PROGRAM_DATA}/CurrentFcst.$$.${site}.cdf.cdl >> $LOG_FILE 2>&1
rm -f ${PROGRAM_DATA}/CurrentFcst.$$.${site}.opt.cdf.opt.cdl >> $LOG_FILE 2>&1
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
# zip up the optimized netcdf file
echo starting optimized netcdf file zipping at $(date) >> $LOG_FILE
gzip -9 ${PROGRAM_DATA}/CurrentFcst.$$.${site}.opt.cdf >> $LOG_FILE 2>&1
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
# check spaced used by the zipped opt netcdf file in the netcdf directory
echo "space used in ${PROGRAM_DATA}: $(cd ${PROGRAM_DATA}; du -m --max-depth=1) mb" >> $LOG_FILE
echo ... finished >> $LOG_FILE
echo " " >> $LOG_FILE
if ! [ -d ${locDirectory} ] 2> /dev/null ;then
mkdir ${locDirectory}
fi
# Clean up orphaned files
echo cleaning up orphaned files in the ${locDirectory}/${site} directory at $(date) >> $LOG_FILE
find ${locDirectory}/${site} -mmin +720 -exec rm {} -f \; >> $LOG_FILE 2>&1
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
# cleaning up the zipped optimized file on AWIPS.
#echo cleaning up the zipped optimized file on AWIPS at $(date) >> $LOG_FILE
#rm -f ${PROGRAM_DATA}/CurrentFcst.$$.${site}.opt.cdf.gz
#echo ...finished. >> $LOG_FILE
#echo " " >> $LOG_FILE
# Notify forecaster that the quality control check passed
if [[ $SendQCgoodNotification == "yes" ]] ;then
echo sending forecaster notification that QC passed at $(date) >> $LOG_FILE
if [[ $turnOffAllNotifications == "no" ]] ;then
${GFESUITE_BIN}/sendGfeMessage -h ${CDSHOST} -c NDFD -m "${SITE} netcdf file passed quality control check." -s
fi
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
fi
# change file permissions
echo changing permissions of the compressed netcdf file at $(date) >> $LOG_FILE
/bin/chmod 2775 ${locDirectory}/${site}/CurrentFcst.$$.${site}.opt.cdf.gz
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
# make backup copies of the netcdf files
echo make backup of the compressed netcdf files at $(date) >> $LOG_FILE
cp ${locDirectory}/${site}/CurrentFcst.$$.${site}.opt.cdf.gz ${locDirectory}/${site}/vtm.opt.cdf.gz
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
# change the timestamp on the files
echo updated the time stamps of the compressed and uncompressed netcdf files at $(date) >> $LOG_FILE
touch ${locDirectory}/${site}/CurrentFcst.$$.${site}.opt.cdf.gz
touch ${locDirectory}/${site}/vtm.opt.cdf.gz
echo ...finished. >> $LOG_FILE
echo " " >> $LOG_FILE
echo script finished at $(date) >> $LOG_FILE
echo " " >> $LOG_FILE
exit

View file

@ -1,59 +0,0 @@
#!/bin/sh
################################################################################
#
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
##############################################################################
# Program name: rsyncGridsToCWF_client.sh
#
# Executes rsyncGridsToCWF.sh locally or remotely as needed
#
# Author: Juliya Dynina
#
# Revisions:
# Date Ticket# Engineer Description
# ------------ ---------- ----------- -------------------------------
# 04/25/2012 jdynina Created Script
# 07/15/2015 #4013 randerso Changed to use a thrift request that can
# be handled on any EDEX cluster server to
# run rsyncGridsToCWF.sh
#
################################################################################
# this allows you to run this script from outside of ./bin
path_to_script=`readlink -f $0`
RUN_FROM_DIR=`dirname $path_to_script`
BASE_AWIPS_DIR=`dirname $RUN_FROM_DIR`
# get the base environment
source ${RUN_FROM_DIR}/setup.env
# set up the environment needed to run the Python
export LD_LIBRARY_PATH=${BASE_AWIPS_DIR}/src/lib:${PYTHON_INSTALL}/lib
export PYTHONPATH=${RUN_FROM_DIR}/src:$PYTHONPATH
# execute the rsyncGridsToCWF Python module
_PYTHON="${PYTHON_INSTALL}/bin/python"
_MODULE="${RUN_FROM_DIR}/src/rsyncGridsToCWF/rsyncGridsToCWF.py"
# quoting of '$@' is used to prevent command line interpretation
$_PYTHON $_MODULE "$@"

View file

@ -1,85 +0,0 @@
##
# This software was developed and / or modified by Raytheon Company,
# pursuant to Contract DG133W-05-CQ-1067 with the US Government.
#
# U.S. EXPORT CONTROLLED TECHNICAL DATA
# This software product contains export-restricted data whose
# export/transfer/disclosure is restricted by U.S. law. Dissemination
# to non-U.S. persons whether in the United States or abroad requires
# an export license or other authorization.
#
# Contractor Name: Raytheon Company
# Contractor Address: 6825 Pine Street, Suite 340
# Mail Stop B8
# Omaha, NE 68106
# 402.291.0100
#
# See the AWIPS II Master Rights File ("Master Rights File.pdf") for
# further licensing information.
##
#
# Revisions:
# Date Ticket# Engineer Description
# ------------ ---------- ----------- -------------------------------
# 07/15/2015 #4013 randerso Initial creation
#
##
from dynamicserialize.dstypes.com.raytheon.uf.common.dataplugin.gfe.request import RsyncGridsToCWFRequest
from dynamicserialize.dstypes.com.raytheon.uf.common.message import WsId
from awips import ThriftClient
from awips import UsageArgumentParser
import os
def main():
args = validateArgs()
request = createRequest(args.site)
thriftClient = ThriftClient.ThriftClient(args.host, args.port, "/services")
try:
thriftClient.sendRequest(request)
except Exception, ex:
print "Caught exception submitting RsyncGridsToCWFRequest: ", str(ex)
return 1
def validateArgs():
parser = UsageArgumentParser.UsageArgumentParser(conflict_handler="resolve", prog='ifpInit')
parser.add_argument("-h", action="store", dest="host",
help="ifpServer host name",
metavar="hostname")
parser.add_argument("-p", action="store", type=int, dest="port",
help="rpc port number of ifpServer",
metavar="port")
parser.add_argument("site", action="store",
help="site to rsync grids for",
metavar="site")
args = parser.parse_args()
if args.host is None and "CDSHOST" in os.environ:
args.host = os.environ["CDSHOST"]
if args.port is None and "CDSPORT" in os.environ:
args.port = int(os.environ["CDSPORT"])
if args.host is None:
args.host = str(os.getenv("DEFAULT_HOST", "localhost"))
if args.port is None:
args.port = int(os.getenv("DEFAULT_PORT", "9581"))
return args
def createRequest(site):
request = RsyncGridsToCWFRequest(site)
wsId = WsId(progName="rsyncGridsToCWF")
request.setWorkstationID(wsId)
return request
if __name__ == '__main__':
main()
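A hypothetical invocation via the client wrapper above (site ID and host are
assumptions; 9581 is the module's default request port):
  ./rsyncGridsToCWF_client.sh -h localhost -p 9581 OAX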

View file

@ -5,10 +5,10 @@
# Shell Used: BASH shell
# Original Author(s): Joseph.Maloney@noaa.gov
# File Creation Date: 01/27/2009
# Date Last Modified: 07/23/2018 - For unidata_18.1.1
# Date Last Modified: 09/23/2016 - For 17.1.1
#
# Contributors:
# Joe Maloney (MFL), Pablo Santos (MFL), Michael James (UCAR)
# Joe Maloney (MFL), Pablo Santos (MFL)
# -----------------------------------------------------------
# ------------- Program Description and Details -------------
# -----------------------------------------------------------
@ -25,10 +25,6 @@
#
# Directory program runs from: /awips2/GFESuite/hti/bin
#
# Needed configuration on ls2/3: /awips2/GFESuite/data/hti needs to exist. The
# script will create this directory if
# it is missing.
#
# History:
# 20 OCT 2014 - jcm - created from make_ghls.sh, broke out of webapps
# package. Renamed make_hti.sh.
@ -40,38 +36,17 @@
# code instead of just looking at the timestamps
# of the KML files themselves.
# 22 SEP 2016 - jcm - clean out $PRODUCTdir every time you run
# 23 JUL 2018 - mj - Remove all LDAD ssh/rsync
# 29 JAN 2019 - mj@ucar - Clean up: no ssh, rsync
#
########################################################################
# CHECK TO SEE IF SITE ID WAS PASSED AS ARGUMENT
# IF NOT THEN EXIT FROM THE SCRIPT
########################################################################
if [ $# -lt 1 ] ;then
echo Invalid number of arguments.
echo Script stopped.
echo ./rsyncGridsToCWF.sh wfo
exit
else
SITE=$(echo ${1} | tr '[a-z]' '[A-Z]')
site=$(echo ${1} | tr '[A-Z]' '[a-z]')
fi
SITE=$(grep AW_SITE_IDENTIFIER /awips2/edex/setup.env | head -1 | cut -d = -f 2 )
site=$(echo $SITE | tr '[:upper:]' '[:lower:]' )
########################################################################
# CONFIGURATION SECTION BELOW
########################################################################
GFESUITE_HOME="/awips2/GFESuite"
HTI_HOME="${GFESUITE_HOME}/hti"
if [ ! -f ${HTI_HOME}/etc/sitevars.${site} ]; then
cp ${HTI_HOME}/etc/sitevars.ccc ${HTI_HOME}/etc/sitevars.${site}
fi
# SITES CAN CUSTOMIZE THE SITEVARS AS NEEDED
. ${HTI_HOME}/etc/sitevars.${site}
########################################################################
# BEGIN MAIN SCRIPT
########################################################################
AWIPS_USER="awips"
AWIPS_GRP="fxalpha"
. ${HTI_HOME}/etc/sitevars.ccc
# set current data and log file name
currdate=$(date -u +%Y%m%d)
@ -81,12 +56,12 @@ export LOG_FILE="${HTI_HOME}/logs/${currdate}/make_hti.log"
if [ ! -d ${HTI_HOME}/logs ] ;then
mkdir -p ${HTI_HOME}/logs
chmod 777 ${HTI_HOME}/logs
chown awips:fxalpha ${HTI_HOME}/logs
chown ${AWIPS_USER}:${AWIPS_GRP} ${HTI_HOME}/logs
fi
if [ ! -d ${HTI_HOME}/logs/${currdate} ] ;then
mkdir -p ${HTI_HOME}/logs/${currdate}
chmod 777 ${HTI_HOME}/logs/${currdate}
chown awips:fxalpha ${HTI_HOME}/logs/${currdate}
chown ${AWIPS_USER}:${AWIPS_GRP} ${HTI_HOME}/logs/${currdate}
fi
# Log file header
@ -105,10 +80,10 @@ if [ ! -d ${PRODUCTdir} ]; then
echo " **** Changing permissions of ${PRODUCTdir} directory..." >> $LOG_FILE
chmod 777 $PRODUCTdir
echo " **** Changing ownership of ${PRODUCTdir} directory..." >> $LOG_FILE
chown awips:fxalpha $PRODUCTdir
chown ${AWIPS_USER}:${AWIPS_GRP} $PRODUCTdir
else
echo " ${PRODUCTdir} exists." >> $LOG_FILE
# clean old png and kml.txt files from $PRODUCTdir and on LDAD
# clean old png and kml.txt files from $PRODUCTdir
echo "Removing old png and kml.txt files from ${PRODUCTdir}." >> $LOG_FILE
rm -f ${PRODUCTdir}/*png ${PRODUCTdir}/*kml.txt
fi
@ -121,9 +96,8 @@ PARMS="WindThreat FloodingRainThreat StormSurgeThreat TornadoThreat"
echo "Starting ifpIMAGE loop." >> $LOG_FILE
for PARM in $PARMS
do
# run ifpIMAGE
echo "Creating ${PARM} image..." >> $LOG_FILE
unset DISPLAY; ${GFEBINdir}/ifpIMAGE -site ${SITE} -c ${PARM} -o ${PRODUCTdir}
${GFEBINdir}/ifpIMAGE -site ${SITE} -c ${PARM} -o ${PRODUCTdir}
convert ${PRODUCTdir}/${SITE}${PARM}.png -resize 104x148 ${PRODUCTdir}/${SITE}${PARM}_sm.png
done
@ -131,7 +105,7 @@ rm -f ${PRODUCTdir}/*.info
# Generate KML automatically via runProcedure
echo "Running KML procedure." >> $LOG_FILE
unset DISPLAY; ${GFEBINdir}/runProcedure -site ${SITE} -n TCImpactGraphics_KML -c gfeConfig
${GFEBINdir}/runProcedure -site ${SITE} -n TCImpactGraphics_KML -c gfeConfig
# Create legends for KML
${HTI_HOME}/bin/kml_legend.sh
@ -181,7 +155,7 @@ then
echo " Changing permissions on ${PRODUCTdir}/archive..." >> $LOG_FILE
chmod 777 ${PRODUCTdir}/archive
echo " Changing ownership on ${PRODUCTdir}/archive..." >> $LOG_FILE
chown awips:fxalpha ${PRODUCTdir}/archive
chown ${AWIPS_USER}:${AWIPS_GRP} ${PRODUCTdir}/archive
else
echo " ${PRODUCTdir}/archive directory exists!" >> $LOG_FILE
fi

View file

@ -1,30 +1,7 @@
####################################################
# History:
# 09/15/2017 DR-20302 lshi make_hti.sh issue in svcbu mode
#####################################################
# Configuration file for make_hti.sh & kml_legend.sh
GFESUITE_HOME="/awips2/GFESuite" # Where is GFESuite?
GFEBINdir="${GFESUITE_HOME}/bin" # Where is GFESuite/bin?
HTI_HOME="${GFESUITE_HOME}/hti" # Where the hti stuff lives
PRODUCTdir="${HTI_HOME}/data" # Where the hti output/data will go
ARCHIVE="YES" # YES, graphics/kml will be archived with
# each run, NO otherwise.
# OPTIONAL: Local office processing
# The setup below is for site that do processing on local office Linux
# workstations using Doug Gaer's webapps package. (NOTE: the webapps
# package is not supported by NCF or AWIPS.)
# If you wish to use this, set LOCALlw_PROCESSING to TRUE,
# then configure the following variables as appropriate
LOCALlw_PROCESSING="FALSE" # Set to TRUE to enable local processing
LOCALlwserver="machine.xxx.noaa.gov" # Name of local workstation where webapps
# is installed
LOCALlwuser="username" # User on local workstation where webapps
# is installed
LOCAL_LWDATA="/data/webapps" # Directory where output is stored on
# local workstation above

View file

@ -1,204 +0,0 @@
#!/bin/bash
# Last Modified: 10/15/2015
# By: Pablo Santos and Joseph Maloney
# Version: AWIPS 2 Version 16.2.1
NWPSLOCAL="/awips2/GFESuite/nwps"
umask 002
#
PATH="/awips2/GFESuite/bin:/bin:/usr/bin:/usr/local/bin"
siteid=$(hostname -s|cut -c 5-)
source ${NWPSLOCAL}/etc/sitevars.${siteid} ${siteid}
#
SSHARGS="-x -o stricthostkeychecking=no"
SCPARGS="-o stricthostkeychecking=no"
#
Program="/awips2/GFESuite/bin/ifpnetCDF"
GFESERVER="ec"
RUNSERVER="px"
# FUNCTIONS
function logit () {
echo "$@" | tee -a $logfile
}
GFEDomainname=${1}
logit "Processing $GFEDomainname"
gfedomainname=$(echo ${GFEDomainname} | tr [:upper:] [:lower:])
cd ${NWPSLOCAL}
if [ ! -e ${NWPSLOCAL}/${GFEDomainname} ]
then
mkdir ${NWPSLOCAL}/${GFEDomainname}
chmod 777 ${NWPSLOCAL}/${GFEDomainname}
fi
if [ ! -e ${NWPSLOCAL}/input/${GFEDomainname} ]
then
mkdir -p ${NWPSLOCAL}/input/${GFEDomainname}
chmod -R 777 ${NWPSLOCAL}/input
fi
if [ ! -e ${NWPSLOCAL}/wcoss/${GFEDomainname} ]
then
mkdir -p ${NWPSLOCAL}/wcoss/${GFEDomainname}
chmod -R 777 ${NWPSLOCAL}/wcoss
fi
if [ ! -e ${NWPSLOCAL}/logs ]
then
mkdir ${NWPSLOCAL}/logs
chmod 777 ${NWPSLOCAL}/logs
fi
logfile=${NWPSLOCAL}/logs/${GFEDomainname}_nwps_runManual_Outside_AWIPS.log
##################################################################
### START A CLEAN LOG FILE:
#
rm -f $logfile
echo " " > $logfile
STARTED=$(date)
logit "STARTED FOR ${GFEDomainname}: $STARTED"
DB="${GFEDomainname}_GRID__Fcst_00000000_0000"
if [ ${MULTISITE} == "Yes" ]
then
Output_Dir="${NWPSLOCAL}/input/${GFEDomainname}"
else
Output_Dir="${NWPSLOCAL}/input"
fi
WRKSWN="${NWPSLOCAL}/${GFEDomainname}/SUAWRKNWP.dat"
date=$(date "+%D %H:%M:%S")
Output_File="${Output_Dir}/Wind_File"
textfile="${Output_Dir}/$(date +%Y%m%d%H%M)_WIND.txt"
wcoss_textfile="${gfedomainname}_$(date +%Y%m%d%H%M)_WIND.txt"
flagfile="${Output_Dir}/SWANflag"
### LOCK FILE STUFF:
source ${NWPSLOCAL}/bin/process_lock.sh
PROGRAMname="$0"
LOCKfile="${NWPSLOCAL}/logs/runManual_Outside_AWIPS_${GFEDomainname}.lck"
MINold="300"
LockFileCheck $MINold
CreateLockFile
### CHECK THAT THIS IS THE px2 (or px1 if failed over) HOST MACHINE:
HOST=$(hostname|cut -c1-2)
if [[ $HOST != $RUNSERVER ]]
then
logit "YOU ARE RUNNING FROM $HOST. THIS SCRIPT SHOULD ONLY BE RAN FROM $RUNSERVER."
logit "Exiting ... "
RemoveLockFile
exit 1
fi
### RUN OPTIONS:
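# For reference: inp_args is a single colon-delimited line written by the GFE
# side; its thirteen fields map onto the variables read below. Example values
# are hypothetical:
#   144:WNAWave:No:No:ForecastWindGrids:Yes:Yes:600:False:ESTOFS:SWAN:10:NCEP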
if [ -e ${NWPSLOCAL}/${GFEDomainname}_var/inp_args ]
then
inp_args=$(cat ${NWPSLOCAL}/${GFEDomainname}_var/inp_args)
IFS=':' read -a inp <<< "${inp_args}"
RUNLEN=${inp[0]}
WNA=${inp[1]}
NEST=${inp[2]}
GS=${inp[3]}
WINDS=${inp[4]}
WEB=${inp[5]}
PLOT=${inp[6]}
DELTAC=${inp[7]}
HOTSTART=${inp[8]}
WATERLEVELS=${inp[9]}
CORE=${inp[10]}
EXCD=${inp[11]}
WHERETORUN=${inp[12]}
logit " "
logit "Arguments are: $RUNLEN $WNA $NEST $GS $WINDS $WEB $PLOT $DELTAC $HOTSTART $WATERLEVELS $CORE $EXCD $WHERETORUN"
logit " "
rm -f ${NWPSLOCAL}/${GFEDomainname}_var/inp_args
rm -f ${NWPSLOCAL}/wcoss/${GFEDomainname}/* | tee -a $logfile
rm -f ${Output_Dir}/* | tee -a $logfile
cp ${NWPSLOCAL}/domains/${GFEDomainname} ${NWPSLOCAL}/wcoss/${GFEDomainname}/${gfedomainname}_domain_setup.cfg
chmod 666 ${NWPSLOCAL}/wcoss/${GFEDomainname}/${gfedomainname}_domain_setup.cfg
else
logit "No arguments or arguments file provided. No run to process. Exiting ${GFEDomainname}."
RemoveLockFile
exit 1
fi
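# The inp_args file read above is a single colon-delimited line whose
# thirteen fields unpack into RUNLEN..WHERETORUN; a hypothetical example
# of what the requesting GUI might write (values illustrative only):
#
#   echo "102:WNAWave:Yes:No:ForecastWindGrids:Yes:Yes:600:True:ESTOFS:SWAN:10:NCEP" \
#       > ${NWPSLOCAL}/${GFEDomainname}_var/inp_args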
logit " "
##################################################################
logit "### Setting Up SWAN Input Model Forcing Time Range"
logit " "
##################################################################
echo "" > $WRKSWN
echo "____________________NWPS RUN REQUEST DETAILS__________" >> $WRKSWN
echo "" >> $WRKSWN
echo "Run for ${GFEDomainname} initiated at: ${date}" >> $WRKSWN
echo "" >> $WRKSWN
echo "Runlength: ${RUNLEN}" >> $WRKSWN
echo "Boundary Conditions: ${WNA}" >> $WRKSWN
echo "Nest: ${NEST}" >> $WRKSWN
echo "Current: ${GS}" >> $WRKSWN
echo "Winds: ${WINDS}" >> $WRKSWN
echo "Timestep: ${DELTAC}" >> $WRKSWN
echo "Hotstart: ${HOTSTART}" >> $WRKSWN
echo "WATERLEVELS: ${WATERLEVELS}" >> $WRKSWN
echo "Model Core: ${CORE}" >> $WRKSWN
echo "Psurge % Exceedance: ${EXCD}" >> $WRKSWN
echo "Running model in: ${WHERETORUN}" >> $WRKSWN
echo "" >> $WRKSWN
echo "______________________________________________________" >> $WRKSWN
##################################################################
logit "### CREATE THE WIND NETCDF FILE AND SEND OVER TO SWAN BOX FOR PROCESSING:"
logit "$Program -o $Output_File -d $DB -h $GFESERVER -g -p NWPSwind"
$Program -o $Output_File -d $DB -h $GFESERVER -g -p NWPSwind | tee -a $logfile
/usr/local/netcdf/bin/ncdump $Output_File > $textfile
sed -i "s/NWPSwind/Wind/g" $textfile
cp $textfile ${NWPSLOCAL}/wcoss/${GFEDomainname}/${wcoss_textfile}
chmod 666 ${NWPSLOCAL}/wcoss/${GFEDomainname}/${wcoss_textfile}
gzip $textfile
touch $flagfile
chmod 666 $textfile.gz
chmod 666 $flagfile
echo "$RUNLEN:$WNA:$NEST:$GS:$WINDS:$WEB:$PLOT:$DELTAC:$HOTSTART:$WATERLEVELS:$CORE:$EXCD" > ${NWPSLOCAL}/wcoss/${GFEDomainname}/${gfedomainname}_inp_args.ctl
chmod 666 ${NWPSLOCAL}/wcoss/${GFEDomainname}/${gfedomainname}_inp_args.ctl
cd ${NWPSLOCAL}/wcoss/${GFEDomainname}
NWPSWINDGRID="NWPSWINDGRID_${gfedomainname}_$(date +%Y%m%d%H%M)_$$.tar.gz"
tar cvfz ${NWPSWINDGRID} ${gfedomainname}_inp_args.ctl ${gfedomainname}_domain_setup.cfg ${wcoss_textfile}
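# The tarball is then handed off toward NCO/WCOSS; in the baseline setup the
# transfer runs from ldad via LDM (see the nwps configuration file later in
# this commit). A hypothetical invocation using that file's variables:
#
#   ${LDMSEND} -h ${LDMSERVER1} -f EXP ${NWPSWINDGRID}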
logit " "
##################################################################
logit " "
RemoveLockFile
##################################################################
logit " "
date
logit "FINISHED ${GFEDomainname}: $(date)"
logit " "
exit 0

View file

@ -1,101 +1,9 @@
#!/usr/bin/env bash
#
# Configuration file for nwps
#
########################################################################
# Regional LDM setup
########################################################################
#
# Uncomment the appropriate lines for your region to configure REGION, LDMSEND,
# LDMSERVER1 and LDMSERVER2 variables. These are the same ldm
# servers you can reach from your ldad. If you do not know this info
# contact your regional folks. This is how the input files for NWPS run
# requests will be routed to NCO/WCOSS when that option is chosen from
# the GUI.
#
# Also, if you are in SR, you have two options for LDMSEND. If SR has
# installed the ldmsend_nws version in your ldad, use that; it can test
# whether the wind file was successfully sent to WCOSS through SR and
# includes a message about that in the WRKNWP message sent back to the
# forecaster via the text workstations. Otherwise use the baseline
# ldmsend. You can find out whether ldmsend_nws is installed by checking
# your ldad or following up with Doug Gaer or Paul Kirkwood at SR.
#
########################################################################
# SOUTHERN REGION
########################################################################
#REGION="SR"
#LDMSEND="/usr/local/ldm/util/ldmsend_nws"
#LDMSEND="/usr/local/ldm/bin/ldmsend"
#LDMSERVER1="216.38.81.28"
#LDMSERVER2="216.38.81.29"
########################################################################
# EASTERN REGION
########################################################################
#REGION="ER"
#LDMSEND="/usr/local/ldm/bin/ldmsend"
#LDMSERVER1="198.206.32.98"
#LDMSERVER2="198.206.32.99"
########################################################################
# CENTRAL REGION
########################################################################
#REGION="CR"
#LDMSEND="/usr/local/ldm/bin/ldmsend"
#LDMSERVER1=""
#LDMSERVER2=""
########################################################################
# WESTERN REGION
########################################################################
#REGION="WR"
#LDMSEND="/usr/local/ldm/bin/ldmsend"
#LDMSERVER1=""
#LDMSERVER2=""
########################################################################
# ALASKA REGION
########################################################################
#REGION="AR"
#LDMSEND="/usr/local/ldm/bin/ldmsend"
#LDMSERVER1=""
#LDMSERVER2=""
########################################################################
# PACIFIC REGION
########################################################################
#REGION="PR"
#LDMSEND="/usr/local/ldm/bin/ldmsend"
#LDMSERVER1=""
#LDMSERVER2=""
########################################################################
# NATIONAL CENTERS
########################################################################
#REGION="NCEP"
#LDMSEND="/usr/local/ldm/bin/ldmsend"
#LDMSERVER1=""
#LDMSERVER2=""
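########################################################################
# EXAMPLE (not part of the original file): a Southern Region site using
# the baseline ldmsend would uncomment its block so the active settings
# read as follows (values taken from the SOUTHERN REGION block above):
#
#   REGION="SR"
#   LDMSEND="/usr/local/ldm/bin/ldmsend"
#   LDMSERVER1="216.38.81.28"
#   LDMSERVER2="216.38.81.29"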
########################################################################
# MULTISITE
#
# If you are also running the model on a local workstation and your model
# version is 2.0.X or earlier, set MULTISITE="No". Otherwise set it to "Yes".
#
REGION="NCEP"
MULTISITE="No"
#
# DIR - If MULTISITE = No, this means you are still using ldad scripts to
# pull runs in through ldad from a model box running locally.
# Depending on how you set up those scripts you will need to define this
# directory. If you followed the standard configuration, the default
# below should do. This is where the script will push out the wind file
# to run the model locally if you are using that option.
# In the multisite workstation version, the file is pushed to the
# workstation through ldad using ldm; the old ldad scripts are no longer used.
#
DIR="/awips2/GFESuite/data/nwps/input"
#
# If also running the model on a local workstation outside AWIPS, make sure
# the WORKSTATION variable below matches the hostname of your local modeling
# box. The argument passed in comes from the runManualNWPS_OutsideAWIPS.sh script.
#
WORKSTATION="${1}-lw-swan"
#
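# NOTE: the assignment below overrides the hostname derived above; comment
# it out if WORKSTATION should come from the script argument instead.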
WORKSTATION="localhost"
########################################################################
########################################################################
# DO NOT EDIT BELOW HERE

View file

@ -1,59 +0,0 @@
# Configuration file for rsyncGridsToCWF.sh
### Turn On/Off certain script functionality. ###
QCnetCDF="no" # Do you want the netCDF file checked before
# it is rsynced? This takes 5-15 minutes to
# complete depending on size of domain.
# *** Leave "no" for AWIPS 2. QC doesn't work ***
checkCWFavailability="no" # Do you want the script to check to see if
# the netcdf file made it to the Consolidated
# web farm?
### Banner notification configuration ###
SendQCgoodNotification="no" # Tell forecaster that netcdf file passed
# QC check.
sendCWFnotification="no" # Tell forecaster when netcdf rsync complete
# to the consolidated web farm.
turnOffAllNotifications="yes" # This will turn off all banner messages.
### new ldad configuration ###
locDirectory="/awips2/GFESuite/data/grid" # Directory where grids are stored
# Edit area to limit the portion of the grid domain to send to the webfarms.
mask=ISC_Send_Area
# Parameter list for the netcdf file
parmlist1="-p MaxT -p MinT -p MaxRH -p MinRH -p T -p Td -p RH -p WindChill -p HeatIndex -p ApparentT"
parmlist2="-p PoP -p PoP12 -p Sky -p Wx -p Hazards -p SnowLevel -p QPF -p SnowAmt -p IceAccum -p Wind -p WindGust"
parmlist3="-p ClearIndex -p FreeWind -p LAL -p Haines -p MixHgt -p VentRate -p TransWind -p Wind20ft -p CLRIndx"
parmlist4="-p StormSurgeThreat -p WindThreat -p FloodingRainThreat -p TornadoThreat"
parmlist5=""
parmlist6=""
parmlist="$parmlist1 $parmlist2 $parmlist3 $parmlist4 $parmlist5 $parmlist6"
parmlist="" #uncomment to send all parameters
creationAttempts=3 # How many times do you want script to create and
# quality control netcdf files if bad netcdf files
# are detected?
rsyncWait=30 # Minutes to wait for free rsync connection.
CWFcheckWait=360 # Delay in seconds to wait to check to see if file made it
# to the consolidated web farm.
iscMosaicDelay=0.0 # iscMosaic delay in seconds; a delay of 0.0 can cause GFE pauses.
probAlertNum=1 # Guardian alert level when problems occur.
# Email notification configuration
sendEmailNotification="no" # Do you want to send email notification if grids are not sent?
emailAddress1=""
emailAddress2=""
emailAddress3=""
# Set some paths
FXA_BIN="/awips2/fxa/bin"
CDSHOST="ec"
CDSPORT="9581"
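# CDSHOST/CDSPORT identify the EDEX server the export talks to; a quick
# reachability sketch (an addition for illustration, using bash's /dev/tcp):
#
#   if ! timeout 5 bash -c "exec 3<>/dev/tcp/${CDSHOST}/${CDSPORT}"; then
#       echo "WARNING: cannot reach ${CDSHOST}:${CDSPORT}"
#   fi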