Suggested Memory settings

All heap and xmn values below are in MB.

Column key:

  • Collector Heapsize - ams-env : metrics_collector_heapsize
  • HBase Master Heapsize - ams-hbase-env : hbase_master_heapsize
  • HBase RS Heapsize - ams-hbase-env : hbase_regionserver_heapsize
  • HBase Master xmn size - ams-hbase-env : hbase_master_xmn_size
  • HBase RS xmn size - ams-hbase-env : regionserver_xmn_size

| Cluster Size | Recommended Mode | Collector Heapsize | HBase Master Heapsize | HBase RS Heapsize | HBase Master xmn size | HBase RS xmn size |
| 1 - 10       | Embedded         | 512   | 1408  | 512   | 192  | -    |
| 11 - 20      | Embedded         | 1024  | 1920  | 512   | 256  | -    |
| 21 - 100     | Embedded         | 1664  | 5120  | 512   | 768  | -    |
| 100 - 300    | Embedded         | 4352  | 13056 | 512   | 2048 | -    |
| 300 - 500    | Distributed      | 4352  | 512   | 13056 | 102  | 2048 |
| 500 - 800    | Distributed      | 7040  | 512   | 21120 | 102  | 3072 |
| 800 - 1000   | Distributed      | 11008 | 512   | 32768 | 102  | 5120 |
| 1000+        | Distributed with 2 Metric Collectors (From Ambari 2.5.2) | 13696 | 512 | 32768 | 102 | 5120 |
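
As an illustration of applying one of these rows (the 300 - 500 node, Distributed row), the values can be pushed from the Ambari server host with the bundled configs helper script. This is only a sketch: the script lives at /var/lib/ambari-server/resources/scripts/configs.sh on Ambari 2.x (newer releases ship configs.py instead), authentication options are omitted, and <ambari-server-host> / <cluster-name> are placeholders for your environment.

# Apply the recommended heap and xmn sizes for a 300 - 500 node cluster (values in MB)
/var/lib/ambari-server/resources/scripts/configs.sh set <ambari-server-host> <cluster-name> ams-env metrics_collector_heapsize 4352
/var/lib/ambari-server/resources/scripts/configs.sh set <ambari-server-host> <cluster-name> ams-hbase-env hbase_master_heapsize 512
/var/lib/ambari-server/resources/scripts/configs.sh set <ambari-server-host> <cluster-name> ams-hbase-env hbase_regionserver_heapsize 13056
/var/lib/ambari-server/resources/scripts/configs.sh set <ambari-server-host> <cluster-name> ams-hbase-env hbase_master_xmn_size 102
/var/lib/ambari-server/resources/scripts/configs.sh set <ambari-server-host> <cluster-name> ams-hbase-env regionserver_xmn_size 2048

A restart of the Ambari Metrics service is needed for the new values to take effect.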

Identifying and tackling scale problems in AMS

Understanding scale issues in AMS (Why)

The Metrics Collector component is the central daemon that receives metrics from ALL the service sinks and monitors that send metrics. The collector uses HBase as its store and Phoenix as the data access layer.

At a high level, the Metrics Collector performs 2 scale-sensitive operations on a continuous basis.

  • Handle raw writes - A raw write is a batch of metric data points received from services, written to HBase through Phoenix. There is no read or aggregation involved.
  • Periodically aggregate data - AMS aggregates data across the cluster and across time.
    • Cluster Aggregator - Computing the min, max, avg and sum of a metric (for example, memory) across all hosts is done by the cluster aggregator, called 'TimelineClusterAggregatorSecond', which runs every 2 mins. In every run it reads the entire last 2 mins of data, calculates the aggregates and writes them back. The read is expensive since it has to read non-aggregated data, while the write volume is smaller since it is aggregated data. For example, in a 100 node cluster, mem_free from 100 hosts becomes 1 aggregate metric value in this aggregator.
    • Time Aggregator - Also called 'downsampling', this aggregator rolls up the data in the time dimension. This helps AMS TTL out the fine-grained seconds data and hold aggregate data for a longer time. For example, if we have a data point every 10 seconds, the 5 min time aggregator takes the 30 data points from each 5 min window and creates 1 rolled-up value (a small worked example follows this list). There are higher-level downsamplers (1 hour, 1 day) as well, and they use their immediate predecessor's data (1 hour => 5 mins, 1 day => 1 hour). However, it is the 5 min aggregator that is compute-heavy, since it reads the entire last 5 mins of data and downsamples it. Again, the read is very expensive since it has to read non-aggregated data, while the write volume is smaller. This downsampler is called 'TimelineMetricHostAggregatorMinute'.
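
To make the 5 min roll-up concrete, here is a minimal sketch (not AMS code) that takes 30 hypothetical samples collected at 10 second intervals and reduces them to the single min/max/avg/sum record a downsampler would persist for that window:

# 30 hypothetical data points (one every 10 seconds for 5 minutes) rolled up into one record
seq 1 30 | awk '
  NR == 1 { min = $1; max = $1 }
  { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
  END { printf "min=%d max=%d avg=%.2f sum=%d count=%d\n", min, max, sum/NR, sum, NR }'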

Scale problems occur in AMS when one or both of the above operations cannot happen smoothly. The 'load' on AMS is determined by the following factors:

  • How many hosts in the cluster?
  • How many metrics each component is sending to AMS?

Either of the above can cause performance issues in AMS. 

How do we find out if AMS is experiencing scale problems?

One or more of the following consequences can be seen on the cluster.

  • Metrics Collector shuts down intermittently. Since Auto Restart is enabled for the Metrics Collector by default, this will show up as an alert stating 'Metrics collector has been auto restarted # times the last 1 hour'.
  • Partial metrics data is seen.
    • All non-aggregated host metrics are seen (HDFS Namenode metrics  / Host summary page on Ambari / System - Servers Grafana dashboard).
    • Aggregated data is not seen. (AMS Summary page / System - Home Grafana dashboard / HBase - Home Grafana dashboard).
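
When the auto-restart alert is firing, a quick way to confirm whether the collector is currently up and serving requests is to probe its web service port (6188, as used elsewhere on this page); a connection failure or non-200 response while metrics are missing points to the collector being down or mid-restart:

# Print the HTTP status code returned by the collector's metadata endpoint
curl -s -o /dev/null -w "%{http_code}\n" http://<ams-host>:6188/ws/v1/timeline/metrics/metadata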

Get the current state of the system

For each check below: what information to gather, how to get that information, and how to identify if there is a red flag.

  • #1 : Is AMS able to handle raw writes?
    • How : Look for log lines like 'AsyncProcess:1597 - #1, waiting for 13948 actions to finish' in the collector log.
    • Red flag : If the number of actions to finish keeps increasing and AMS eventually shuts down, it could mean AMS is not able to handle raw writes.
  • #2 : How long does the 2 min cluster aggregator take to finish?
    • How : grep "TimelineClusterAggregatorSecond" /var/log/ambari-metrics-collector/ambari-metrics-collector.log | less. Look for the time taken between 'Start aggregation cycle....' and 'Saving ## metric aggregates' (a one-liner for pulling these lines out follows this list).
    • Red flag : > 2 mins aggregation time.
  • #3 : How long does the 5 min host aggregator take to finish?
    • How : grep "TimelineMetricHostAggregatorMinute" /var/log/ambari-metrics-collector/ambari-metrics-collector.log | less. Look for the time taken between 'Start aggregation cycle....' and 'Saving ## metric aggregates'.
    • Red flag : > 5 mins aggregation time.
  • #4 : How many metrics are being collected?
    • How : curl http://<ams-host>:6188/ws/v1/timeline/metrics/metadata -o /tmp/metrics_metadata.txt. The number of metrics is the output of 'grep -o "metricname" /tmp/metrics_metadata.txt | wc -l'.
    • Red flag : > 15000 metrics. Find out which component is sending a lot of metrics.
  • #5 : What is the number of regions and store files in AMS HBase?
    • How : From the AMS HBase Master UI: http://<METRICS_COLLECTOR_HOST>:61310
    • Red flag : > 150 regions or > 2000 store files.
  • #6 : How fast is AMS HBase flushing, and how much data is being flushed?
    • How : Check the Master log in embedded mode and the RS log in distributed mode: grep "memstore flush" /var/log/metric_collector/hbase-ams-<>.log | less. Check how often METRIC_RECORD flushes happen and how much data is flushed.
    • Red flag : > 10 flushes in a minute could be a problem. The flush size should be approximately equal to the flush size configured in ams-hbase-site.
  • #7 : If AMS is in distributed mode, is there a local Datanode?
    • How : From the cluster.
    • Red flag : No local Datanode on the Metrics Collector host. In distributed mode, a local Datanode helps with the HBase read short-circuit feature (http://hbase.apache.org/0.94/book/perf.hdfs.html).
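
For checks #2 and #3 above, a pipeline along these lines (swap in TimelineMetricHostAggregatorMinute for check #3) prints just the cycle start and save lines, so the time taken can be read off the adjacent timestamps; it assumes those phrases appear verbatim in your collector log, as quoted in the table:

# Recent TimelineClusterAggregatorSecond cycles: 'Start aggregation cycle' and matching 'Saving ...' lines
grep "TimelineClusterAggregatorSecond" /var/log/ambari-metrics-collector/ambari-metrics-collector.log \
  | grep -E "Start aggregation cycle|Saving .* metric aggregates" \
  | tail -n 20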


Fixing / Recovering from the problem.

The above problems could occur because of 2-3 underlying reasons. Each is listed below with what it could cause and the fix / workaround.

Underlying problem : Too many metrics (#4 from above)

What it could cause : It could cause ALL of the problems mentioned above.

Fix / Workaround :

#1 : Trying out config changes (example commands follow this list)

  • First, we can try increasing the memory of the Metrics Collector and HBase Master / RS based on the mode. (Refer to the memory configurations table at the top of the page.)
  • Configure AMS to read more data in a single Phoenix fetch.
    • Set ams-site : timeline.metrics.service.resultset.fetchSize = 5000 (for < 100 nodes) or 10000 (> 100 nodes)
  • Increase the HBase RegionServer handler count.
    • Set ams-hbase-site : hbase.regionserver.handler.count = 30
  • If Hive is sending a lot of metrics, do not aggregate Hive table metrics.
    • Set ams-site : timeline.metrics.cluster.aggregation.sql.filters = sdisk_%,boottime,default.General% (Only from Ambari-2.5.0)
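
For reference, the fetch size and handler count changes above could be applied with the same configs helper script shown earlier (same caveats about script name, credentials and placeholders):

# Larger Phoenix fetch size (example: > 100 node value) and more RegionServer handlers
/var/lib/ambari-server/resources/scripts/configs.sh set <ambari-server-host> <cluster-name> ams-site timeline.metrics.service.resultset.fetchSize 10000
/var/lib/ambari-server/resources/scripts/configs.sh set <ambari-server-host> <cluster-name> ams-hbase-site hbase.regionserver.handler.count 30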

#2 : Reducing number of metrics

If the above config changes do not improve AMS stability, you can whitelist selected metrics or blacklist the metrics of the components that are causing the load issue (a sketch follows).
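
A minimal sketch of what that could look like. The property names timeline.metrics.whitelist.file and timeline.metrics.apps.blacklist are assumptions based on the AMS whitelisting support added around Ambari 2.5, and the file path and app list are purely illustrative, so verify them against your Ambari version before applying:

# Assumed property names - verify against your Ambari version before applying
# Collect only the metrics listed in a whitelist file on the collector host:
/var/lib/ambari-server/resources/scripts/configs.sh set <ambari-server-host> <cluster-name> ams-site timeline.metrics.whitelist.file /etc/ambari-metrics-collector/conf/metrics_whitelist
# Or skip metrics from specific high-volume components (illustrative list):
/var/lib/ambari-server/resources/scripts/configs.sh set <ambari-server-host> <cluster-name> ams-site timeline.metrics.apps.blacklist datanode,nodemanager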

 

Underlying problem : AMS node has slow disk speed. The disk is not able to keep up with the high data volume.

What it could cause : It can cause raw write and aggregation problems.

Fix / Workaround : (a quick way to check the current disk layout and utilization follows this list)
  • On larger clusters (> 800 nodes) with distributed mode, we suggest 3-5 SSDs on the Metrics Collector node and a config group for the DataNode on that host so that it uses those 3-5 disks as directories.
  • ams-hbase-site :: hbase.rootdir - Change this path to a disk mount that is not heavily contended.
  • ams-hbase-site :: hbase.tmp.dir - Change this path to a location different from hbase.rootdir.
  • ams-hbase-site :: hbase.wal.dir - Change this path to a location different from hbase.rootdir (From Ambari-2.5.1).
  • Metric whitelisting will help in decreasing the metric load.
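
A quick way to see which mounts currently back the AMS HBase directories and whether they are saturated. This assumes the default embedded-mode locations under /var/lib/ambari-metrics-collector; substitute the paths from your configured hbase.rootdir and hbase.tmp.dir:

# On the Metrics Collector host: which disks back the AMS HBase dirs?
df -h /var/lib/ambari-metrics-collector/hbase /var/lib/ambari-metrics-collector/hbase-tmp
# Watch per-device utilization for ~15 seconds (iostat is part of the sysstat package)
iostat -x 5 3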
 

Underlying problem : Known issues around the HBase normalizer and FIFO compaction, documented in Known Issues (#11 and #13).

What it could cause : This can be identified via #5 in the above table (region and store file counts).

Fix / Workaround : Follow the workaround steps in the Known Issues doc.

 

Other Advanced Configurations

| Configuration  | Property                               | Description | Recommended values |
| ams-site       | phoenix.query.maxGlobalMemoryPercentage | Percentage of total heap memory used by Phoenix threads in the Metrics Collector API / Aggregator daemon. | 20 - 30, based on available memory. Default = 25. |
| ams-site       | phoenix.spool.directory                | Directory for Phoenix spill files (client side). | Set this to a different disk from hbase.rootdir if possible. |
| ams-hbase-site | phoenix.spool.directory                | Directory for Phoenix spill files (server side). | Set this to a different disk from hbase.rootdir if possible. |
| ams-hbase-site | phoenix.query.spoolThresholdBytes      | Threshold size in bytes after which results from parallelly executed queries are spooled to disk. | Set this to a higher value based on available memory. Default is 12 MB. |
