MapReduce Recommendations

Pepperdata recommendations for MapReduce applications are generated by the Application Profiler, which must be configured, enabled, and running before an app begins. The MapReduce tile in the Recommendations section of the Pepperdata dashboard shows how many recommendations were made during the last 24 hours, along with their severity levels. (If there are no recommendations for MapReduce apps, the MapReduce tile does not appear on the dashboard.)

Recommendations information is shown in several places in the Pepperdata dashboard.

  • To see a table of all the MapReduce apps that received recommendations at a given severity level, click the linked severity text in the MapReduce tile.

  • To see the recommendations’ severity levels for all recently run MapReduce applications, show the Applications Overview page by using the left-nav menu to select App Spotlight > Applications (and filter by MapReduce apps).

  • To view the Application Profiler report, click the title of the MapReduce tile, or use the left-nav menu to select App Spotlight > Application Profiler (and filter by MapReduce apps).

Although heuristics and recommendations are closely related, the terms are not interchangeable.

Heuristics are the rules and triggering/firing thresholds against which Pepperdata compares the actual metric values for your applications. When a threshold is crossed, Pepperdata analyzes the data and provides the relevant recommendations.

For example, a single heuristic might have a low and a high threshold, from which Pepperdata can provide distinct recommendations such as "Too long average task runtime" and "Too short average task runtime". That is, there is not a 1:1 correspondence of heuristics to recommendations.

Each entry below describes a Pepperdata recommendation for MapReduce applications: the recommendation’s name, the phase for which it’s generated (mapper and/or reducer), what triggered the recommendation (the cause), the text of the recommendation itself, labeled by type (general guidance or specific tuning values to change), and notes that provide additional information.

MapReduce Recommendations

Name: Avg Physical Memory (MB) of {mapper/reducer} tasks
Phase: Mapper or Reducer
Cause: Excessive wasted physical memory. <N> {mappers/reducers} each asked for <N> GB of memory, but used an average of only <N> GB each. (The firing threshold, the ratio of a {mapper’s/reducer’s} average memory used to its requested memory, is <= <N>.)
Recommendation (Tuning):
  • Change {mapreduce.map.memory.mb/mapreduce.reduce.memory.mb} from CURRENT VALUE to PROPOSED VALUE.
  • Change {mapreduce.map.java.opts/mapreduce.reduce.java.opts} from CURRENT VALUE to PROPOSED VALUE.
Recommendation (Guidance): To decrease wasted memory: Decrease the {mapper/reducer}s’ maximum memory by setting {mapreduce.map.memory.mb/mapreduce.reduce.memory.mb}=PROPOSED VALUE. Also, revise the {mapper/reducer}s’ heap size by setting {mapreduce.map.java.opts/mapreduce.reduce.java.opts}=PROPOSED VALUE.
Notes:
  • The application requested much more memory than was actually used.
  • This recommendation might not be given for an app even when the app wasted a lot of memory on average. This is because the recommendation is based on the average of the app containers’ (or tasks’) peak memory use, but the wasted memory value (as shown in the Resource Usage tab of the App Details page) is calculated from the average memory use over the app’s entire runtime.
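
For example, a minimal sketch of applying the mapper-side pair of settings in a job driver; the values 2048 MB and -Xmx1638m are hypothetical stand-ins for the PROPOSED VALUEs, with the JVM heap kept below the container size:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class MapperMemoryTuning {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical proposed values for the mapper phase; use the
        // mapreduce.reduce.* properties to tune the reducer phase instead.
        conf.set("mapreduce.map.memory.mb", "2048");       // container size, in MB
        conf.set("mapreduce.map.java.opts", "-Xmx1638m");  // JVM heap, kept below the container size
        Job job = Job.getInstance(conf, "mapper-memory-tuning");
        // ... set mapper/reducer classes and input/output paths, then submit as usual.
      }
    }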

Name: Too short average reducer task runtime
Phase: Reducer
Cause: Runtimes too short for reducers. <N> reducers took on average <= the threshold of <N> min.
Recommendation (Tuning): Change mapreduce.job.reduces from CURRENT VALUE to PROPOSED VALUE.
Recommendation (Guidance): To speed up your app, decrease the number of reducers by increasing the minimum size for the app’s configured split block, mapreduce.input.fileinputformat.split.minsize (default: <N> MB).
Notes: Short runtimes indicate that the reducers are not being given an appropriate computational load. Fewer reducers means more work for each remaining reducer.

Name: Too short average mapper task runtime
Phase: Mapper
Cause: Runtimes too short for mappers. <N> mappers took on average <= the threshold of <N> min.
Recommendation (Tuning): Change mapreduce.input.fileinputformat.split.minsize from CURRENT VALUE to PROPOSED VALUE.
Recommendation (Guidance): To speed up your app, decrease the number of mappers by increasing the minimum size for the app’s configured split block, mapreduce.input.fileinputformat.split.minsize, to PROPOSED VALUE.
Notes: Analyzes mapper task runtimes.
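
A minimal sketch of the guidance above, using the FileInputFormat helper to raise the minimum split size (the 512 MB value is a hypothetical stand-in for the PROPOSED VALUE):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class MapperSplitSizeTuning {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "mapper-split-size-tuning");
        // Larger minimum splits mean fewer, longer-running mappers.
        // Equivalent to setting mapreduce.input.fileinputformat.split.minsize directly.
        FileInputFormat.setMinInputSplitSize(job, 512L * 1024 * 1024); // hypothetical 512 MB
      }
    }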

Name: Too long average reducer task runtime
Phase: Reducer
Cause: Runtimes too long for reducers. <N> reducers took on average >= the threshold of <N> min.
Recommendation (Tuning): Change mapreduce.job.reduces from CURRENT VALUE to PROPOSED VALUE.
Recommendation (Guidance): To speed up your app, increase the number of reducers by decreasing the minimum size for the app’s configured split block, mapreduce.input.fileinputformat.split.minsize (default: <N> MB).
Notes: Long runtimes indicate that the reducers are being given too much computational load. More reducers means less work for each reducer.

Name: Too long average mapper task runtime
Phase: Mapper
Cause: Runtimes too long for mappers. <N> mappers took on average >= the threshold of <N> min.
Recommendation (Tuning): Change {mapreduce.input.fileinputformat.split.minsize/mapreduce.input.fileinputformat.split.maxsize} from CURRENT VALUE to PROPOSED VALUE.
Recommendation (Guidance): To speed up your app, increase the number of mappers by decreasing the minimum size for the app’s configured split block, mapreduce.input.fileinputformat.split.minsize, to PROPOSED VALUE.
Notes: Analyzes mapper task runtimes.

Name: Task GC/CPU ratio of {mapper/reducer} tasks
Phase: Mapper or Reducer
Cause: Excessive time spent on garbage collection (GC). <N> {mappers/reducers} each spent an average of <N>% of their execution time on GC. (Firing threshold >= <N>%.)
Recommendation (Tuning):
  • Change {mapreduce.map.memory.mb/mapreduce.reduce.memory.mb} from CURRENT VALUE to PROPOSED VALUE.
  • Change {mapreduce.map.java.opts/mapreduce.reduce.java.opts} from CURRENT VALUE to PROPOSED VALUE.
Recommendation (Guidance): To speed up your app: Increase the {mapper/reducer}s’ maximum memory by setting {mapreduce.map.memory.mb/mapreduce.reduce.memory.mb}=PROPOSED VALUE. Also, revise the {mapper/reducer}s’ heap size by setting {mapreduce.map.java.opts/mapreduce.reduce.java.opts}=PROPOSED VALUE.
Notes:
  • Increasing the size of the containers and heap can increase GC efficiency.
  • Analyzes garbage collection efficiency.

Name: Median mapper tasks speed
Phase: Mapper
Cause: Mappers spent excessive time ingesting data. <N> mappers each ingested a median of <N> GB, at a median rate of <N> MB/sec. (Firing threshold <= <N> MB/sec.)
Recommendation (Tuning): Change {mapreduce.input.fileinputformat.split.maxsize/mapreduce.input.fileinputformat.split.minsize} from CURRENT VALUE to PROPOSED VALUE.
Recommendation (Guidance): To speed up your app: Increase the number of mappers for the app by decreasing the app’s configured maximum size for a split block, mapreduce.input.fileinputformat.split.maxsize, from CURRENT VALUE to PROPOSED VALUE.
Notes: Decreasing the block size leads to more mappers, which generally provides better throughput.

Name: Ratio of spilled records to output records
Phase: Mapper
Cause: Excessive mapper spill. <N> mappers averaged <N> spills/record. (Firing threshold >= <N> spills/record.)
Recommendation (Tuning): Change {mapreduce.map.output.compress/mapreduce.map.output.compress.codec/mapreduce.task.io.sort.mb/mapreduce.map.sort.spill.percent} from CURRENT VALUE to PROPOSED VALUE.
Recommendation (Guidance):
  • To speed up your app: Compress mapper output by setting the app’s configured mapreduce.map.output.compress=true and mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec.
  • To speed up your app: Increase the size of the app’s configured in-memory sort buffer, mapreduce.task.io.sort.mb, to PROPOSED VALUE.
  • To speed up your app: Increase the app’s configured buffer spill percentage, mapreduce.map.sort.spill.percent, to PROPOSED VALUE.
  • To speed up your app: Use a CombineFileInputFormat subclass (such as a custom CombinedInputFormat) to reduce the map output size by adding the following code to your MapReduce program: job.setInputFormatClass(CombinedInputFormat.class);
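
A minimal sketch that applies two of the guidance items above, compressing map output with Snappy and using Hadoop’s built-in CombineTextInputFormat (a concrete CombineFileInputFormat subclass) in place of a custom CombinedInputFormat; the 128 MB split cap is a hypothetical value:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

    public class MapperSpillTuning {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Compress intermediate map output to reduce spill and shuffle volume.
        conf.setBoolean("mapreduce.map.output.compress", true);
        conf.set("mapreduce.map.output.compress.codec",
                 "org.apache.hadoop.io.compress.SnappyCodec");
        // Cap the size of each combined split (hypothetical 128 MB).
        conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 128L * 1024 * 1024);
        Job job = Job.getInstance(conf, "mapper-spill-tuning");
        // Pack many small files into each split so fewer mappers are created.
        job.setInputFormatClass(CombineTextInputFormat.class);
      }
    }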

Name: Imbalanced work across {mapper/reducer} tasks
Phase: Mapper or Reducer
Cause: Imbalanced work across {mappers/reducers}. One group (<N> tasks that worked on an average of <N SIZE> of data each) worked on >= the firing threshold of <N> times more data than the other group (<N> tasks that worked on an average of <N SIZE> of data each).
Recommendation (Guidance):
  • To speed up your app: Use a CombineFileInputFormat subclass (such as a custom CombinedInputFormat) to decrease the {mapper/reducer} output size by adding the following code to your MapReduce program: job.setInputFormatClass(CombinedInputFormat.class);
  • To speed up your app: Ensure that all input files are smaller than the dfs.blocksize value, to prevent the creation of new {mappers/reducers} for small file pieces.
Notes: Unbalanced computational load is referred to as skew.

Name: Imbalanced time spent across {mapper/reducer} tasks
Phase: Mapper or Reducer
Cause: Imbalanced time spent across {mapper/reducer} tasks. One group (<N> tasks that spent an average of <N TIME>) spent >= the firing threshold of <N> times more time than the other group (<N> tasks that spent an average of <N TIME>).
Recommendation (Guidance):
  • To speed up your app: Use a CombineFileInputFormat subclass (such as a custom CombinedInputFormat) to decrease the {mapper/reducer} output size by adding the following code to your MapReduce program: job.setInputFormatClass(CombinedInputFormat.class);
  • To speed up your app: Ensure that all input files are smaller than the dfs.blocksize value, to prevent the creation of new {mappers/reducers} for small file pieces.

Name: Average shuffle time
Phase: Reducer
Cause: Excessive time spent shuffling. Reducers each spent an average of <N> ms shuffling data and <N> ms doing actual task execution. The shuffle/execution ratio >= the firing threshold of <N>.
Recommendation (Guidance): To speed up your app: Tune the app’s slowstart configuration by changing mapreduce.job.reduce.slowstart.completedmaps (currently set to <N>) to a value between <N> and 1.0, inclusive. For multitenant clusters, use a value of 1.0.
Recommendation (Tuning): Change mapreduce.job.reduce.slowstart.completedmaps from CURRENT VALUE to PROPOSED VALUE.

Name: Average sort time
Phase: Reducer
Cause: Excessive time spent sorting. Reducers each spent an average of <N> ms sorting data and <N> ms doing actual task execution. The sort/execution ratio >= the firing threshold of <N>.
Recommendation (Guidance): To speed up your app: Tune the app’s slowstart configuration by changing mapreduce.job.reduce.slowstart.completedmaps (currently set to <N>) to a value between <N> and 1.0, inclusive. For multitenant clusters, use a value of 1.0.
Recommendation (Tuning): Change mapreduce.job.reduce.slowstart.completedmaps from CURRENT VALUE to PROPOSED VALUE.
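
A minimal sketch of applying the slowstart guidance above; 1.0 is the value the recommendation suggests for multitenant clusters:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class SlowstartTuning {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // With 1.0, reducers are not scheduled until every mapper has finished,
        // so reducer containers do not sit idle waiting for map output to shuffle.
        conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 1.0f);
        Job job = Job.getInstance(conf, "slowstart-tuning");
      }
    }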

Name: Distributed Cache Limit
Cause: This heuristic is triggered when a job puts more than 500 MB of files in the distributed cache.
Recommendation (Tuning):
  • Change mapreduce.job.cache.files from CURRENT VALUE to PROPOSED VALUE.
  • Change mapreduce.job.cache.archives from CURRENT VALUE to PROPOSED VALUE.
  • Change mapreduce.job.cache.files.filesizes from CURRENT VALUE to PROPOSED VALUE.
  • Change mapreduce.job.cache.archives.filesizes from CURRENT VALUE to PROPOSED VALUE.
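
As a rough sketch of staying under this limit, a job driver can check a cache file’s size before adding it; the path and the warning handling are hypothetical, and 500 MB is the trigger described above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;

    public class DistributedCacheSizeCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "distributed-cache-size-check");

        Path lookup = new Path("hdfs:///data/lookup-table.dat"); // hypothetical cache file
        FileSystem fs = lookup.getFileSystem(conf);
        long bytes = fs.getFileStatus(lookup).getLen();
        if (bytes > 500L * 1024 * 1024) {
          System.err.println(lookup + " exceeds 500 MB; consider shrinking it or reading it"
              + " directly from HDFS instead of the distributed cache.");
        }
        job.addCacheFile(lookup.toUri());
      }
    }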