Configure History Fetcher Retries (Cloud)
To ensure that application history is successfully fetched from the applicable component (MapReduce Job History Server for MapReduce apps, Spark History Server for Spark apps, or YARN Timeline Server for Tez apps), the Pepperdata Supervisor uses a two-phase approach. Phase 1 makes the initial attempt to fetch the history and, if it fails, makes up to three retries. Phase 2 makes an additional attempt and, by default, up to five more retries, with the interval between retries increasing by a factor of five each time. You can customize the number of retries for each phase, which might be required for environments with extreme network latency or frequent connectivity issues.
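As a concrete illustration of the Phase 2 schedule, here is a minimal sketch; the 10-second base interval is an assumption for illustration (the actual base interval is not specified here), but the factor-of-five growth follows the description above:

```shell
# Sketch of the Phase 2 retry schedule: the interval grows by a factor
# of 5 on every retry. The 10-second base interval is an assumption.
base=10
factor=5
interval=$base
for retry in 1 2 3 4 5; do
  echo "retry $retry: wait ${interval}s"
  interval=$(( interval * factor ))
done
```

With these assumed values, the waits between retries grow as 10, 50, 250, 1250, and 6250 seconds.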
In your cloud environment (such as GCP or AWS), configure the history fetcher retries.
If there are no already-running hosts with Pepperdata, you are done with this procedure. Do not perform the remaining steps.
From the environment’s cluster configuration folder (in the cloud), download the Pepperdata configuration file, /etc/pepperdata/pepperdata-config.sh, to a location where you can edit it.
Open the file for editing, and add the environment variables for the number of history fetcher retries, in the following format. Be sure to replace the default number of retries for the first and second phases (3 and 5, respectively) with your custom values.
export PD_JOBHISTORY_MONITOR_FIRST_RETRY_COUNT=3
export PD_JOBHISTORY_MONITOR_SECOND_RETRY_COUNT=5
Save your changes and close the file.
Upload the revised file to overwrite the original file in the cluster configuration folder.
Open a command shell (terminal session) and log in to any already-running host as a user with sudo privileges.

Important: You can begin with any host on which Pepperdata is running, but be sure to repeat the login (this step), copying the bootstrap file (next step), and loading the revised Pepperdata configuration (the following step) on every already-running host.
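The per-host repetition called out in the note above can be scripted. The following is a minimal sketch; the running-hosts.txt inventory file is a hypothetical name, and the remote command is commented out because the exact bootstrap arguments depend on your cluster:

```shell
# Hypothetical sketch: iterate over every already-running host.
# running-hosts.txt is an assumed inventory file, one hostname per line.
printf 'host-a\nhost-b\n' > running-hosts.txt   # example inventory (assumption)

while read -r host; do
  echo "bootstrapping $host"
  # ssh "$host" 'sudo bash /tmp/bootstrap ...'  # per-host steps go here
done < running-hosts.txt
```

Any inventory source works here (a cloud CLI listing, a static file); the loop simply makes the "repeat on every host" requirement mechanical.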
From the command line, copy the Pepperdata bootstrap script that you extracted from the Pepperdata package from its local location to any location; in this procedure’s steps, we’ve copied it to /tmp/bootstrap.
For Amazon EMR clusters:
aws s3 cp s3://<pd-bootstrap-script-from-install-packages> /tmp/bootstrap
For Google Dataproc clusters:
sudo gsutil cp gs://<pd-bootstrap-script-from-install-packages> /tmp/bootstrap
Load the revised configuration by running the Pepperdata bootstrap script.
For EMR clusters:
You can use the --long-options form of the arguments as shown, or their -short-option equivalents. The --is-running (-r) option is required for bootstrapping an already-running host prior to Supervisor version 7.0.13.
Optionally, you can specify a proxy server for the AWS Command Line Interface (CLI) and Pepperdata-enabled cluster hosts. To do so, include the --proxy-address argument when running the Pepperdata bootstrap script, specifying its value as a fully-qualified host address.
If you’re using a non-default EMR API endpoint (by using the --endpoint-url argument), include the --emr-api-endpoint argument when running the Pepperdata bootstrap script. Its value must be a fully-qualified host address.
If you are using a script from an earlier Supervisor version that has the -c arguments instead of the -u arguments (which were introduced in Supervisor v6.5), you can continue using the script and its old arguments; they are backward compatible.
Optionally, you can override the default exponential backoff and jitter retry logic for the describe-cluster command that the Pepperdata bootstrapping uses to retrieve the cluster’s metadata.
Specify either or both of the following options in the bootstrap’s Optional arguments. Be sure to substitute your values for the <my-retries> and <my-timeout> placeholders that are shown in the command.
--max-retry-attempts—(default=10) Maximum number of retry attempts to make after the initial call to describe-cluster.
--max-timeout—(default=60) Maximum number of seconds to wait before the next retry call to describe-cluster. The actual wait time for a given retry is assigned as a random number, 1–calculated timeout (inclusive), which introduces the desired jitter.
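The backoff-and-jitter behavior described above can be sketched as follows. This is an illustration, not the bootstrap's actual implementation: the doubling growth rate and the use of $RANDOM are assumptions; only the cap at the maximum timeout and the 1-through-timeout jitter range come from the description above.

```shell
# Illustrative sketch of exponential backoff with jitter (not Pepperdata's
# actual code): the calculated timeout grows exponentially up to the
# --max-timeout cap, and the real wait is a random value from 1 through
# that timeout, inclusive.
max_timeout=60
attempt=4
cap=$(( 2 ** attempt ))                       # assumed exponential growth: 16s at attempt 4
(( cap > max_timeout )) && cap=$max_timeout   # never exceed --max-timeout
wait_s=$(( (RANDOM % cap) + 1 ))              # jitter: 1..cap inclusive
echo "attempt $attempt: waiting ${wait_s}s (cap ${cap}s)"
```

The randomized wait spreads retries out over time, so many hosts bootstrapping at once do not hammer the describe-cluster endpoint in lockstep.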
# For Supervisor versions before 7.0.13:
sudo bash /tmp/bootstrap --bucket <bucket-name> --upload-realm <realm-name> --is-running [--proxy-address <proxy-url:proxy-port>] [--emr-api-endpoint <endpoint-url:endpoint-port>] [--max-retry-attempts <my-retries>] [--max-timeout <my-timeout>]

# For Supervisor versions 7.0.13 and later:
sudo bash /tmp/bootstrap --bucket <bucket-name> --upload-realm <realm-name> [--proxy-address <proxy-url:proxy-port>] [--emr-api-endpoint <endpoint-url:endpoint-port>] [--max-retry-attempts <my-retries>] [--max-timeout <my-timeout>]
For Dataproc clusters:
sudo bash /tmp/bootstrap <bucket-name> <realm-name>
The script finishes with a Pepperdata installation succeeded message.
Repeat steps 2–4 on every already-running host in your cluster.