Hive Replication builds on the metastore event and ExIm features to provide a framework for replicating Hive metadata and data changes between clusters. There is no requirement for the source cluster and replica to run the same Hadoop distribution, Hive version, or metastore RDBMS. The replication system has a fairly 'light touch', exhibiting a low degree of coupling and using the Hive-metastore Thrift service as an integration point. However, the current implementation is not an 'out of the box' solution. In particular it is necessary to provide some kind of orchestration service that is responsible for requesting replication tasks and executing them.
See HiveReplicationDevelopment for information on the design of replication in Hive.
A more advanced replication mechanism is being implemented in Hive to address some of the limitations of this mode. See HiveReplicationv2Development for details.
Prerequisites

- A source cluster running a Hive version that provides DbNotificationListener support.
- A replica cluster running a Hive version that provides IMPORT support.
- An orchestration service capable of executing ReplicationTasks. This is not a cluster requirement; it is needed only for the service orchestrating the replication.

Configuration

Although notifications can be captured with any MetaStoreEventListener, the implementation of the replication feature can only source events from the metastore database and hence the DbNotificationListener must be used. To configure the persistence of metastore notification events it is necessary to set the following hive-site.xml properties on the source cluster. A restart of the metastore service will be required for the settings to take effect.
<property>
  <name>hive.metastore.event.listeners</name>
  <value>org.apache.hive.hcatalog.listener.DbNotificationListener</value>
</property>
<property>
  <name>hive.metastore.event.db.listener.timetolive</name>
  <value>86400s</value>
</property>
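Once the listener is enabled, successful metadata operations should begin appearing in the metastore's notification table. As a sanity check, a query such as the following can be run directly against the metastore RDBMS (the column names shown are from the standard metastore schema and should be verified against your schema version):

```sql
-- Run against the metastore RDBMS, not Hive itself.
SELECT EVENT_ID, EVENT_TIME, EVENT_TYPE, DB_NAME, TBL_NAME
FROM NOTIFICATION_LOG
ORDER BY EVENT_ID;
```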
The system uses the org.apache.hive.hcatalog.api.repl.exim.EximReplicationTaskFactory by default. This uses EXPORT and IMPORT commands to capture, move, and ingest the metadata and data that need to be replicated. However, it is possible to provide custom implementations by setting the hive.repl.task.factory Hive configuration property.
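For illustration, the command pair generated for a change to a table might look something like the following (the table name and staging path here are purely hypothetical):

```sql
-- Executed on the source cluster by the orchestration service:
EXPORT TABLE sales.orders TO '/apps/hive/staging/event-42';

-- Then executed on the replica to ingest the exported metadata and data:
IMPORT TABLE sales.orders FROM '/apps/hive/staging/event-42';
```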
Replication process

Broadly speaking, the process works as follows:

- The NOTIFICATION_LOG table in the metastore will be populated with events on the successful execution of metadata operations such as CREATE, ALTER, and DROP.
- The orchestration service periodically reads these events and converts them into ReplicationTasks using org.apache.hive.hcatalog.api.HCatClient.getReplicationTasks(long, int, String, String).
- ReplicationTasks encapsulate a set of commands to execute on the source Hive instance (typically to export data) and another set to execute on the replica instance (typically to import data). The commands are provided as Hive SQL strings.
- A ReplicationTask also serves as a place where database and table name mappings can be declared and StagingDirectoryProvider implementations configured for the resolution of paths at both the source and destination:
  - org.apache.hive.hcatalog.api.repl.ReplicationTask.withDbNameMapping(Function<String, String>)
  - org.apache.hive.hcatalog.api.repl.ReplicationTask.withTableNameMapping(Function<String, String>)
  - org.apache.hive.hcatalog.api.repl.ReplicationTask.withSrcStagingDirProvider(StagingDirectoryProvider)
  - org.apache.hive.hcatalog.api.repl.ReplicationTask.withDstStagingDirProvider(StagingDirectoryProvider)
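The overall flow can be sketched as a minimal orchestration loop. This is a self-contained sketch, not the real API: the Task class and fetchTasks method below are stand-ins for org.apache.hive.hcatalog.api.repl.ReplicationTask and HCatClient.getReplicationTasks(long, int, String, String), and the table names and staging paths are invented for illustration. Its purpose is to show the batching and offset-persistence pattern, not Hive integration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReplicationLoopSketch {

    // Stand-in for ReplicationTask: pairs an event id with the Hive SQL
    // to run on each cluster. The real class also exposes the
    // with*Mapping(...) and with*StagingDirProvider(...) hooks.
    static final class Task {
        final long eventId;
        final List<String> srcCommands;
        final List<String> dstCommands;

        Task(long eventId, List<String> srcCommands, List<String> dstCommands) {
            this.eventId = eventId;
            this.srcCommands = srcCommands;
            this.dstCommands = dstCommands;
        }
    }

    // Stand-in for HCatClient.getReplicationTasks(lastEventId, maxEvents, db, table):
    // returns up to maxEvents tasks whose event ids are greater than lastEventId.
    static List<Task> fetchTasks(List<Task> log, long lastEventId, int maxEvents) {
        List<Task> batch = new ArrayList<>();
        for (Task t : log) {
            if (t.eventId > lastEventId && batch.size() < maxEvents) {
                batch.add(t);
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        // Hypothetical NOTIFICATION_LOG contents, already converted into tasks.
        List<Task> notificationLog = Arrays.asList(
            new Task(1,
                Arrays.asList("EXPORT TABLE sales.orders TO '/staging/1'"),
                Arrays.asList("IMPORT TABLE sales.orders FROM '/staging/1'")),
            new Task(2,
                Arrays.asList("EXPORT TABLE sales.customers TO '/staging/2'"),
                Arrays.asList("IMPORT TABLE sales.customers FROM '/staging/2'")));

        long offset = 0L; // would be loaded from durable storage between runs
        int commandsRun = 0;
        List<Task> batch;
        while (!(batch = fetchTasks(notificationLog, offset, 10)).isEmpty()) {
            for (Task t : batch) {
                commandsRun += t.srcCommands.size(); // run on the source cluster
                commandsRun += t.dstCommands.size(); // run on the replica
                offset = t.eventId; // persist, cf. task.getEvent().getEventId()
            }
        }
        System.out.println(offset + " " + commandsRun);
    }
}
```

The key design point the sketch illustrates is that the offset is only advanced after a task's commands have been executed, so a crash causes at-least-once re-delivery of events rather than silent loss.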
- The orchestration service should maintain its position in the event stream, for example by persisting task.getEvent().getEventId() and providing this as an offset when sourcing the next batch of events.

Limitations

- Notification events are retained only for the duration specified by the hive.metastore.event.db.listener.timetolive property. If notifications are not consumed in a timely manner they may be purged from the table before they can be actioned by the replication service.
- At this time it is not possible to replicate to tables on EMR that have a path location in S3. This is due to a bug in a dependency of the IMPORT command in the EMR distribution (checked in AMI-4.2.0). Also, if using the EximReplicationTaskFactory you may need to add the relevant S3 protocols to your Hive configuration:
<property>
  <name>hive.exim.uri.scheme.whitelist</name>
  <value>hdfs,s3a</value>
</property>