
ResourceManager Restart

https://hadoop.apache.org/docs/r2.5.2/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html


  • ResourceManager Restart
    • Overview
    • Feature
    • Configurations


Overview


ResourceManager is the central authority that manages resources and schedules applications running atop YARN. Hence, it is potentially a single point of failure in an Apache YARN cluster.


This document gives an overview of ResourceManager Restart, a feature that enhances ResourceManager to keep functioning across restarts and also makes ResourceManager down-time invisible to end-users.


ResourceManager Restart feature is divided into two phases:


ResourceManager Restart Phase 1: Enhance RM to persist application/attempt state and other credentials information in a pluggable state-store. RM will reload this information from state-store upon restart and re-kick the previously running applications. Users are not required to re-submit the applications.


ResourceManager Restart Phase 2: Focus on re-constructing the running state of the ResourceManager by reading back the container statuses from NodeManagers and the container requests from ApplicationMasters upon restart. The key difference from Phase 1 is that previously running applications will not be killed after RM restarts, so applications won't lose their work because of an RM outage.


As of the Hadoop 2.4.0 release, only ResourceManager Restart Phase 1 is implemented, which is described below.


Feature

The overall concept is that RM will persist the application metadata (i.e. ApplicationSubmissionContext) in a pluggable state-store when the client submits an application, and will also save the final status of the application, such as the completion state (failed, killed, finished) and diagnostics, when the application completes. Besides, RM also saves credentials like security keys and tokens in order to work in a secure environment. Any time RM shuts down, as long as the required information (i.e. the application metadata and the accompanying credentials if running in a secure environment) is available in the state-store, then when RM restarts, it can pick up the application metadata from the state-store and re-submit the application. RM won't re-submit applications that had already completed (i.e. failed, killed, finished) before RM went down.


NodeManagers and clients will keep polling RM during its down-time until RM comes up. When RM becomes alive, it will send a re-sync command to all the NodeManagers and ApplicationMasters it was talking to via heartbeats. Today, the behaviors for NodeManagers and ApplicationMasters on handling this command are: NMs will kill all their managed containers and re-register with RM. From the RM's perspective, these re-registered NodeManagers are similar to newly joining NMs. AMs (e.g. the MapReduce AM) today are expected to shut down when they receive the re-sync command. After RM restarts, loads all the application metadata and credentials from the state-store, and populates them into memory, it will create a new attempt (i.e. ApplicationMaster) for each application that was not yet completed and re-kick that application as usual. As described before, the previously running applications' work is lost in this manner, since they are essentially killed by RM via the re-sync command on restart.

Configurations

This section describes the configurations involved in enabling the RM Restart feature.

  • Enable ResourceManager Restart functionality.

    To enable RM Restart functionality, set the following property in conf/yarn-site.xml to true:

    Property: yarn.resourcemanager.recovery.enabled
    Value: true
  • Configure the state-store that is used to persist the RM state.
    Property: yarn.resourcemanager.store.class
    Description: The class name of the state-store to be used for saving application/attempt state and the credentials. The available state-store implementations are org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore, a ZooKeeper based state-store implementation, and org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore, a state-store implementation based on a Hadoop FileSystem such as HDFS. The default value is org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.
    • Configurations when using Hadoop FileSystem based state-store implementation.

      Configure the URI where the RM state will be saved in the Hadoop FileSystem state-store.

      Property: yarn.resourcemanager.fs.state-store.uri
      Description: URI pointing to the location of the FileSystem path where the RM state will be stored (e.g. hdfs://localhost:9000/rmstore). Default value is ${hadoop.tmp.dir}/yarn/system/rmstore. If the FileSystem name is not provided, fs.default.name specified in conf/core-site.xml will be used.

      Configure the retry policy the state-store client uses to connect with the Hadoop FileSystem.

      Property: yarn.resourcemanager.fs.state-store.retry-policy-spec
      Description: Hadoop FileSystem client retry policy specification. Hadoop FileSystem client retry is always enabled. Specified in pairs of sleep-time and number-of-retries, i.e. (t0, n0), (t1, n1), ...: the first n0 retries sleep t0 milliseconds on average, the following n1 retries sleep t1 milliseconds on average, and so on. Default value is (2000, 500).
    • Configurations when using ZooKeeper based state-store implementation.

      Configure the ZooKeeper server address and the root path where the RM state is stored.

      Property: yarn.resourcemanager.zk-address
      Description: Comma-separated list of Host:Port pairs. Each corresponds to a ZooKeeper server (e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002") to be used by the RM for storing RM state.

      Property: yarn.resourcemanager.zk-state-store.parent-path
      Description: The full path of the root znode where the RM state will be stored. Default value is /rmstore.

      Configure the retry policy the state-store client uses to connect with the ZooKeeper server.

      Property: yarn.resourcemanager.zk-num-retries
      Description: Number of times RM tries to connect to the ZooKeeper server if the connection is lost. Default value is 500.

      Property: yarn.resourcemanager.zk-retry-interval-ms
      Description: The interval in milliseconds between retries when connecting to the ZooKeeper server. Default value is 2 seconds.

      Property: yarn.resourcemanager.zk-timeout-ms
      Description: ZooKeeper session timeout in milliseconds. This configuration is used by the ZooKeeper server to determine when the session expires. Session expiration happens when the server does not hear from the client (i.e. no heartbeat) within the session timeout period specified by this configuration. Default value is 10 seconds.

      Configure the ACLs to be used for setting permissions on ZooKeeper znodes.

      Property: yarn.resourcemanager.zk-acl
      Description: ACLs to be used for setting permissions on ZooKeeper znodes. Default value is world:anyone:rwcda.
  • Configure the max number of application attempt retries.
    Property: yarn.resourcemanager.am.max-attempts
    Description: The maximum number of application attempts. It's a global setting for all ApplicationMasters. Each ApplicationMaster can specify its individual maximum number of application attempts via the API, but the individual number cannot exceed the global upper bound. If it does, the RM will override it. The default value is 2, to allow at least one retry for the AM.

    This configuration's impact is in fact beyond the scope of RM Restart. It controls the maximum number of attempts an application can have. In RM Restart Phase 1, this configuration is needed since, as described earlier, each time RM restarts it kills the previously running attempt (i.e. ApplicationMaster) and creates a new attempt. Therefore, each occurrence of RM restart causes the attempt count to increase by 1. In RM Restart Phase 2, this configuration is not needed, since the previously running ApplicationMaster will not be killed and the AM will simply re-sync with RM after RM restarts.
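Putting the pieces above together, a minimal conf/yarn-site.xml fragment for RM Restart with the Hadoop FileSystem based state-store might look like the following sketch. The NameNode address and path in the URI are assumed example values, not prescribed ones:

```xml
<!-- Sketch: enable RM Restart backed by the FileSystemRMStateStore.
     hdfs://namenode:9000/rmstore is a placeholder; substitute your
     own NameNode address and state-store path. -->
<configuration>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.fs.state-store.uri</name>
    <value>hdfs://namenode:9000/rmstore</value>
  </property>
</configuration>
```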
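For the ZooKeeper based state-store, the same recovery switch is combined with ZKRMStateStore and a ZooKeeper quorum address. A hedged sketch follows; the zk1/zk2/zk3 host names are assumed example values, and the remaining zk-* properties keep their defaults unless overridden:

```xml
<!-- Sketch: enable RM Restart backed by the ZKRMStateStore.
     The ZooKeeper host names below are placeholders for your quorum. -->
<configuration>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>zk1:2181,zk2:2181,zk3:2181</value>
  </property>
  <!-- Optional: raise the attempt cap so restarts in Phase 1, each of
       which consumes one attempt, do not exhaust an application's retries. -->
  <property>
    <name>yarn.resourcemanager.am.max-attempts</name>
    <value>4</value>
  </property>
</configuration>
```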
