
This effort is still a "work in progress". Please feel free to add comments. But please make the content less visible by using smaller fonts. – Edward J. Yoon

Overview

This page is intended to explain and illustrate the concepts behind Hama. There are three main parts: the matrix building blocks, the major components (BSPMaster, GroomServer, and Zookeeper), and the communication and synchronization process.

Building Block

http://wiki.apache.org/hama-data/attachments/Architecture/attachments/block.png

To store the matrices, Hama uses HBase (http://hadoop.apache.org/hbase/) -- matrices are basically tables. They are a way of storing numbers and other things. A typical matrix has rows and columns and is called a 2-way matrix because it has two dimensions. For example, you might have respondents-by-attitudes. Of course, you might collect the same data on the same people at 5 points in time. In that case, you either have 5 different 2-way matrices, or you could think of it as a 3-way matrix, that is, respondent-by-attitude-by-time.
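
As a conceptual illustration of "matrices are tables", the sketch below maps a 2-way matrix onto a table-like structure in plain Java, with the row key as the matrix row index and the column key as the matrix column index. The class and method names are placeholders for illustration only, not Hama's actual matrix schema or the HBase API.

No Format

  import java.util.HashMap;
  import java.util.Map;

  // Conceptual sketch only: a 2-way matrix viewed as a table, where the row
  // key is the matrix row index and the column key is the matrix column index.
  // This mimics the table layout in plain Java; it is not Hama's actual
  // HBase schema.
  public class MatrixAsTableSketch {

    // row index -> (column index -> cell value)
    private final Map<Integer, Map<Integer, Double>> table = new HashMap<>();

    public void set(int row, int column, double value) {
      table.computeIfAbsent(row, r -> new HashMap<>()).put(column, value);
    }

    public double get(int row, int column) {
      return table.getOrDefault(row, new HashMap<>()).getOrDefault(column, 0.0);
    }
  }

A 3-way matrix (respondent-by-attitude-by-time) would simply add one more key dimension to the same layout.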

Dense Matrix

For dense matrix computations, block-partitioned algorithms are used to minimize data movement and network cost. A dense matrix and its blocked version are both stored in one table along with other metadata. Note, however, that the blocked dense matrix is not kept synchronized with updates to the dense matrix.

No Format

  // Generate matrix with random elements
  DenseMatrix a = DenseMatrix.random(conf, 1000, 1000);
  DenseMatrix b = DenseMatrix.random(conf, 1000, 1000);
  
  // Block both matrices; the matrix to be blocked must be dense.
  a.blocking(2);
  b.blocking(2);

  DenseMatrix c = a.mult(b);

For example, the multiplication of the original matrices can be transformed into a multiplication of their blocks, as described below.

No Format

C_block(1,1)=A_block(1,1)*B_block(1,1) + A_block(1,2)*B_block(2,1)

C                 A               B
+-----+-----+     +-----+-----+   +-----+-----+
| x x |     |     | --> | --> |   | | | |     |
| x x |     |     | --> | --> |   | ↓ ↓ |     |
+-----+-----+  =  +-----+-----+ * +-----+-----+
|     |     |     |     |     |   | | | |     |
|     |     |     |     |     |   | ↓ ↓ |     |
+-----+-----+     +-----+-----+   +-----+-----+
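
To make the block formula concrete, here is a minimal plain-Java sketch of blocked multiplication over in-memory arrays. It illustrates the general technique, not Hama's HBase-backed implementation; the class name, the method name blockedMultiply, and the blockSize parameter are assumptions for illustration.

No Format

  // Plain-Java illustration of the block formula above, using in-memory
  // double[][] arrays rather than Hama's HBase-backed DenseMatrix.
  // C_block(i,j) = sum over k of A_block(i,k) * B_block(k,j).
  public class BlockMultiplySketch {

    /** Multiply n x n matrices a and b using square blocks of size blockSize. */
    public static double[][] blockedMultiply(double[][] a, double[][] b,
                                             int blockSize) {
      int n = a.length;
      double[][] c = new double[n][n];
      for (int bi = 0; bi < n; bi += blockSize) {      // block row of C
        for (int bj = 0; bj < n; bj += blockSize) {    // block column of C
          // C_block(bi,bj) += A_block(bi,bk) * B_block(bk,bj)
          for (int bk = 0; bk < n; bk += blockSize) {
            for (int i = bi; i < Math.min(bi + blockSize, n); i++) {
              for (int k = bk; k < Math.min(bk + blockSize, n); k++) {
                double aik = a[i][k];
                for (int j = bj; j < Math.min(bj + blockSize, n); j++) {
                  c[i][j] += aik * b[k][j];
                }
              }
            }
          }
        }
      }
      return c;
    }
  }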

Apache Hama, which is based on the Bulk Synchronous Parallel model [1], comprises three major components: BSPMaster, GroomServer, and Zookeeper.

The architecture is very similar to Hadoop's, except for the communication and synchronization mechanisms.

In a typical use case, the user submits a so-called "Job", which is a definition of how to run a computation. Once submitted, a job is split into multiple tasks that are launched across the cluster.

BSPMaster

BSPMaster is responsible for the following:

  • Maintaining its own state.
  • Maintaining groom server status.
  • Maintaining supersteps and other counters in a cluster.
  • Maintaining jobs and tasks.
  • Scheduling jobs and assigning tasks to groom servers.
  • Distributing execution classes and configuration across groom servers.
  • Providing users with the cluster control interface (web and console based).

A BSPMaster and multiple grooms are started by the start-up script. The BSP master then starts an RPC server to which groom servers can dynamically register themselves. Each groom server starts up with a BSPPeer instance - later, the BSPPeer needs to be integrated with the GroomServer - and an RPC proxy to contact the BSP master. Once started, each groom periodically sends a heartbeat message that encloses its groom server status, including maximum task capacity, unused memory, and so on.

Each time the BSP master receives a heartbeat message, it brings the groom server status up to date - the BSP master makes use of the groom servers' status in order to effectively assign tasks to idle groom servers - and returns a heartbeat response that contains assigned tasks and other actions the groom server has to perform. For now, we have a FIFO job scheduler and very simple task assignment algorithms.
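
The following sketch illustrates the groom-side heartbeat loop described above. GroomServerStatus, HeartbeatResponse, and the MasterProtocol RPC interface are hypothetical placeholder names, not Hama's actual classes; the sketch only shows the report-status / receive-assigned-actions cycle.

No Format

  // Hypothetical sketch of the groom-side heartbeat loop described above.
  // GroomServerStatus, HeartbeatResponse, and MasterProtocol are placeholder
  // names for illustration, not Hama's actual classes.
  public class HeartbeatLoopSketch {

    interface MasterProtocol {                  // RPC proxy to the BSP master
      HeartbeatResponse heartbeat(GroomServerStatus status);
    }

    static class GroomServerStatus {            // enclosed in each heartbeat
      String groomName;
      int maxTaskCapacity;
      long unusedMemory;
    }

    static class HeartbeatResponse {            // returned by the BSP master
      java.util.List<Runnable> assignedActions = new java.util.ArrayList<>();
    }

    private final MasterProtocol masterProxy;
    private final long intervalMillis;

    HeartbeatLoopSketch(MasterProtocol masterProxy, long intervalMillis) {
      this.masterProxy = masterProxy;
      this.intervalMillis = intervalMillis;
    }

    void run() throws InterruptedException {
      while (true) {
        // Report current status; the master updates its view of this groom
        // and may hand back newly assigned tasks or other actions.
        HeartbeatResponse response = masterProxy.heartbeat(collectStatus());
        for (Runnable action : response.assignedActions) {
          action.run();                         // e.g. launch or kill a task
        }
        Thread.sleep(intervalMillis);
      }
    }

    private GroomServerStatus collectStatus() {
      GroomServerStatus status = new GroomServerStatus();
      status.groomName = "groom_localhost";     // placeholder value
      status.maxTaskCapacity = 3;               // placeholder value
      status.unusedMemory = Runtime.getRuntime().freeMemory();
      return status;
    }
  }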

GroomServer

A Groom Server (shortly referred to as a groom) is a process that manages the life cycle of BSP tasks assigned by the BSPMaster. Each groom contacts the BSPMaster and reports task statuses by means of periodic piggybacks on the heartbeat. Each groom is designed to run with HDFS or other distributed storage. Basically, a groom server and a data node should run on one physical node to get the best performance from data locality. Note that in a massively parallel environment, the benefit of data locality is lost when a large number of virtual processes must be multiplexed onto physical processes [2].

Zookeeper

Zookeeper is used to manage the efficient barrier synchronization of the BSPPeers. Later, it will also be used for the fault tolerance system.
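
As an illustration of how such a barrier can be built on ZooKeeper, the sketch below has each BSPPeer register an ephemeral znode under a per-superstep barrier node and wait until all peers of the job have arrived. The znode layout (/bsp/<jobId>/<superstep>) and peer naming are assumptions for illustration, not Hama's actual ZooKeeper schema, and the barrier parent node is assumed to have been created in advance.

No Format

  import java.util.List;
  import org.apache.zookeeper.CreateMode;
  import org.apache.zookeeper.KeeperException;
  import org.apache.zookeeper.WatchedEvent;
  import org.apache.zookeeper.Watcher;
  import org.apache.zookeeper.ZooDefs.Ids;
  import org.apache.zookeeper.ZooKeeper;

  // Each BSPPeer announces its arrival with an ephemeral znode and waits
  // until all peers of the job have registered under the barrier node.
  // The znode layout and peer naming are assumptions for illustration.
  public class BarrierSketch implements Watcher {

    private final ZooKeeper zk;
    private final String barrierPath;   // e.g. /bsp/job_001/superstep_3
    private final int numPeers;
    private final Object mutex = new Object();

    public BarrierSketch(String quorum, String barrierPath, int numPeers)
        throws Exception {
      this.zk = new ZooKeeper(quorum, 30000, this);
      this.barrierPath = barrierPath;   // parent node assumed to exist
      this.numPeers = numPeers;
    }

    @Override
    public void process(WatchedEvent event) {
      synchronized (mutex) {
        mutex.notifyAll();              // wake the waiting peer on any change
      }
    }

    /** Called by a BSPPeer when it reaches the barrier. */
    public void enter(String peerName)
        throws KeeperException, InterruptedException {
      // Ephemeral: the registration vanishes automatically if the peer dies.
      zk.create(barrierPath + "/" + peerName, new byte[0],
          Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

      while (true) {
        synchronized (mutex) {
          List<String> arrived = zk.getChildren(barrierPath, true);
          if (arrived.size() >= numPeers) {
            return;                     // all peers arrived; superstep proceeds
          }
          mutex.wait();                 // re-check when the watcher fires
        }
      }
    }
  }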

Communication and Synchronization Process

Each BSP task has an Outgoing Message Manager and an Incoming Queue.

The Outgoing Message Manager collects the messages to be sent, serializes them, compresses them, and puts them into bundles. At the barrier synchronization phase, each BSP task exchanges the bundles, deserializes and decompresses them, and puts the messages into its Incoming Queue.
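
The sketch below illustrates the outgoing side of this process: messages are grouped into one bundle per destination peer, then serialized and compressed at the barrier. Class and method names (OutgoingMessageManagerSketch, send, serializeBundle) are placeholders for illustration, not Hama's actual API, and plain strings stand in for real BSP messages.

No Format

  import java.io.ByteArrayOutputStream;
  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import java.util.zip.GZIPOutputStream;

  // Placeholder names for illustration, not Hama's actual API; plain strings
  // stand in for real BSP messages.
  public class OutgoingMessageManagerSketch {

    // One pending message list ("bundle") per destination peer address.
    private final Map<String, List<String>> outgoing = new HashMap<>();

    /** Collect a message addressed to another BSP task. */
    public void send(String peerAddress, String message) {
      outgoing.computeIfAbsent(peerAddress, k -> new ArrayList<>()).add(message);
    }

    /** Serialize and compress one peer's bundle; called at the sync barrier. */
    public byte[] serializeBundle(String peerAddress) throws IOException {
      List<String> bundle =
          outgoing.getOrDefault(peerAddress, new ArrayList<>());
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      try (DataOutputStream out =
               new DataOutputStream(new GZIPOutputStream(bytes))) {
        out.writeInt(bundle.size());
        for (String message : bundle) {
          out.writeUTF(message);        // simple serialization for the sketch
        }
      }
      return bytes.toByteArray();       // shipped to the peer's Incoming Queue
    }
  }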

System Diagram


  1. BSPMaster starts up.
  2. GroomServer starts up.
  3. ZooKeeper cluster starts up.
  4. GroomServer dynamically registers itself to BSPMaster.
  5. GroomServer forks and manages BSPPeer(s).
  6. BSPPeers communicate and perform barrier synchronization through the ZooKeeper cluster.

Reference

[1] Valiant, Leslie G. A bridging model for parallel computation. Communications of the ACM, 33(8):103-111, August 1990.

[2] David B. Skillicorn, Jonathan M. D. Hill, and W. F. McColl. Questions and Answers about BSP. Scientific Programming, 6(3):249-274, Fall 1997.
– What should we do to make the blocks statically sized? – Edward