Configure Spark History Servers (RPM/DEB)

If you’re using Application Profiler to fetch history data for Spark apps, you can customize the connection timeout value and/or add a second Spark History Server for monitoring.

Configure Connection Timeout for Spark History Server

In environments with extreme network latency or frequent connectivity issues, it can be helpful to increase the connection timeout setting for REST requests that fetch Spark app data from the Spark History Server. By default, Pepperdata waits five (5) seconds before timing out, but you can change this value to suit your environment.

Related Topics

  • Configure History Fetcher Retries. To ensure that application history is successfully fetched from the applicable component (MapReduce Job History Server for MapReduce apps, Spark History Server for Spark apps, or YARN Timeline Server for Tez apps), the Pepperdata Supervisor uses a two-phase approach. Phase 1 makes the initial attempt to fetch the history and, if it fails, makes up to three retries. Phase 2 makes an additional attempt and, by default, up to five retries, with the interval between retries increasing by a factor of five each time. You can customize the number of retries for each phase, which might be required for environments with extreme network latency or frequent connectivity issues.
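The Phase 2 backoff schedule can be pictured with a quick shell sketch. The 1-second base interval below is an arbitrary example for illustration, not a Pepperdata default:

```shell
# Illustration only: each Phase 2 retry waits five times longer than
# the previous one (base interval of 1s is a made-up example).
interval=1
for retry in 1 2 3 4 5; do
  echo "retry $retry: wait ${interval}s"
  interval=$((interval * 5))
done
```

With a 1-second base, the waits grow to 1s, 5s, 25s, 125s, and 625s across the five retries.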

Procedure

  1. Add the environment variable for the connection timeout, PD_JOBHISTORY_SPARK_CONNECTION_TIMEOUT_SEC.

    1. On any host in the cluster, open the Pepperdata configuration file, /etc/pepperdata/pepperdata-config.sh, for editing.

    2. Add the environment variable for the connection timeout, in the following format. Be sure to replace the default connection timeout (5 seconds) with your custom value.

      export PD_JOBHISTORY_SPARK_CONNECTION_TIMEOUT_SEC=5
      
    3. Save your changes and close the file.
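If you manage the configuration file with automation, the edit can be scripted. The sketch below writes to a temporary file so it is safe to run anywhere; in practice you would target /etc/pepperdata/pepperdata-config.sh, and the 30-second value is only an example:

```shell
# Sketch only: CONF stands in for /etc/pepperdata/pepperdata-config.sh,
# and 30 is an example timeout value, not a recommendation.
CONF=$(mktemp)
echo 'export PD_JOBHISTORY_SPARK_CONNECTION_TIMEOUT_SEC=30' >> "$CONF"
grep -c PD_JOBHISTORY_SPARK_CONNECTION_TIMEOUT_SEC "$CONF"
```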

  2. On every host in the cluster, restart the PepCollector and PepAgent services.

    1. Restart the Pepperdata Collector.

      You can use either the service command (if provided by your OS) or the systemctl command:

      • sudo service pepcollectd restart
      • sudo systemctl restart pepcollectd

    2. Restart the PepAgent.

      You can use either the service command (if provided by your OS) or the systemctl command:

      • sudo service pepagentd restart
      • sudo systemctl restart pepagentd
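Which of the two commands applies depends on the host's init system. As a small sketch (assuming systemd is detectable via `command -v systemctl`), this helper reports, without running, the restart command that fits the current host:

```shell
# Print, but do not execute, the restart command for this host.
pd_restart_cmd() {
  if command -v systemctl >/dev/null 2>&1; then
    echo "sudo systemctl restart $1"
  else
    echo "sudo service $1 restart"
  fi
}
pd_restart_cmd pepcollectd
pd_restart_cmd pepagentd
```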

Read from Two Spark History Servers

If you have two Spark History Servers (typically because you’re in the midst of upgrading to a newer version, and you’re temporarily running both versions during the migration), you can configure Pepperdata to read from both of them. Perform the configuration for the second Spark History Server on any host other than the MapReduce Job History Server host (which is the primary history fetcher host for Pepperdata). On the chosen host, add environment variables for the second Spark History Server to the Pepperdata configuration.

Prerequisites

  1. Complete the regular Pepperdata installation steps; see Installing Pepperdata.

  2. Complete the regular Pepperdata configuration steps, including the configuration of the first Spark History Server; see Configuring Pepperdata.

  3. Start up Pepperdata; see Starting Up Pepperdata.

Procedure

  1. Choose any host other than the MapReduce Job History Server host, which is the primary history fetcher host for Pepperdata.

  2. On the chosen host, add the configuration for the second Spark History Server.

    1. Open the Pepperdata configuration file, /etc/pepperdata/pepperdata-config.sh, for editing.

    2. Edit the /etc/pepperdata/pepperdata-config.sh file as follows.

      • If the spark-defaults.conf file contains the correct assignment for spark.yarn.historyServer.address for the second Spark History Server, configure the SPARK_CONF_DIR environment variable to match:

        export SPARK_CONF_DIR=your-path-to-second-spark-conf-directory

        Where your-path-to-second-spark-conf-directory is the directory that contains the spark-defaults.conf file.

      • If the spark-defaults.conf file does not include spark.yarn.historyServer.address or its value is incorrect, and you can edit the spark-defaults.conf file:

        1. Edit the spark-defaults.conf file so that it includes the correct assignment for spark.yarn.historyServer.address for the second Spark History Server.

        2. In the pepperdata-config.sh file, configure the SPARK_CONF_DIR environment variable to match:

          export SPARK_CONF_DIR=your-path-to-second-spark-conf-directory

          Where your-path-to-second-spark-conf-directory is the directory that contains the spark-defaults.conf file.

      • In all other cases, edit the pepperdata-config.sh file to include the PD_SPARK_HISTORY_SERVER_ADDRESS environment variable, and set its value to the second Spark History Server’s fully qualified URL.
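As a concrete illustration of the SPARK_CONF_DIR cases, the two files might pair up as follows. The /etc/spark2/conf path, host name, and port are placeholders for your environment, not defaults:

```shell
# In spark-defaults.conf (inside the directory below), a line such as:
#   spark.yarn.historyServer.address http://shs2.example.com:18081
# identifies the second Spark History Server. pepperdata-config.sh
# then points SPARK_CONF_DIR at that directory (placeholder path):
export SPARK_CONF_DIR=/etc/spark2/conf
```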

    3. Disambiguate the two Spark History Servers by adding the following configuration options.

      export PD_BYPASS_JOBHISTORY_IS_LOCAL_CHECK=1
      export PD_DO_JOBHISTORY_STARTUP_CHECK=0
      export PD_JOBHISTORY_FETCHERS="spark"
      
    4. Save your changes and close the file.

  3. On the chosen host, restart the PepAgent.

    You can use either the service command (if provided by your OS) or the systemctl command:

    • sudo service pepagentd restart
    • sudo systemctl restart pepagentd
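Taken together, the additions to /etc/pepperdata/pepperdata-config.sh on the host that fetches from the second Spark History Server might look like the following sketch. The URL is a placeholder; if spark-defaults.conf already carries the correct address, use SPARK_CONF_DIR instead of PD_SPARK_HISTORY_SERVER_ADDRESS as described above:

```shell
# Second Spark History Server (placeholder URL):
export PD_SPARK_HISTORY_SERVER_ADDRESS=http://shs2.example.com:18081

# Disambiguate from the primary history fetcher host, and restrict
# this host to Spark history fetching only:
export PD_BYPASS_JOBHISTORY_IS_LOCAL_CHECK=1
export PD_DO_JOBHISTORY_STARTUP_CHECK=0
export PD_JOBHISTORY_FETCHERS="spark"
```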