...

In each step of feed-forward, the calculated results are propagated one layer closer to the output layer. When the calculated results reach the output layer, the feed-forward procedure finishes and the neurons of the output layer contain the final results. More details about the feed-forward calculation can be found in the UFLDL tutorial.
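To make the computation concrete, below is a minimal, self-contained Java sketch of the feed-forward step (an illustration, not Hama's implementation): each layer's output is the squashing function applied to the weighted sum of the previous layer's outputs plus a bias, and the result is propagated toward the output layer.

// Feed-forward sketch: output = sigmoid(W * input + bias), layer by layer.
public class FeedForwardSketch {

  // Sigmoid squashing function, a common choice for MLP layers.
  static double sigmoid(double x) {
    return 1.0 / (1.0 + Math.exp(-x));
  }

  // Propagate an input vector through one layer.
  // weights[i][j] is the weight from input j to neuron i; bias[i] is neuron i's bias.
  static double[] forwardOneLayer(double[][] weights, double[] bias, double[] input) {
    double[] output = new double[weights.length];
    for (int i = 0; i < weights.length; i++) {
      double sum = bias[i];
      for (int j = 0; j < input.length; j++) {
        sum += weights[i][j] * input[j];
      }
      output[i] = sigmoid(sum);
    }
    return output;
  }

  public static void main(String[] args) {
    // Tiny 2-3-1 network with arbitrary weights, for illustration only.
    double[][] w1 = {{0.1, 0.2}, {0.3, 0.4}, {0.5, 0.6}};
    double[] b1 = {0.0, 0.0, 0.0};
    double[][] w2 = {{0.7, 0.8, 0.9}};
    double[] b2 = {0.0};

    double[] hidden = forwardOneLayer(w1, b1, new double[]{1.0, 0.5});
    double[] out = forwardOneLayer(w2, b2, hidden);
    System.out.println("output = " + out[0]);
  }
}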

How is the Multilayer Perceptron trained in Hama?

In general, the training data is stored in HDFS and distributed across multiple machines. In Hama, the current implementation (0.6.2 and later) allows the MLP to be trained in parallel. Two kinds of components are involved in the training procedure: the master task and the groom tasks. The master task is in charge of merging the model updating information and sending it to all the groom tasks. The groom tasks are in charge of calculating the weight updates according to the training data.

The training procedure is iterative, and each iteration consists of two phases: update weights and merge update. In the update-weights phase, each groom task first updates its local model according to the message received from the master task, then computes the weight updates locally on its assigned data partition, and finally sends the updated weights to the master task. In the merge-update phase, the master task updates the model according to the messages received from the groom tasks and then distributes the updated model to all groom tasks. The two phases alternate until the termination condition is met (e.g., a specified number of iterations is reached).
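The following plain-Java sketch simulates the two phases of a single training iteration as described above. It is a conceptual illustration only: in the real implementation the exchange between the master task and the groom tasks happens through BSP messages, and the local gradient computation (stubbed out here) is the back-propagation step over each groom's data partition.

// Conceptual simulation of one master task and several groom tasks.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MasterGroomSketch {

  // Groom: use the model received from the master and compute a local
  // weight update from its own data partition (the real gradient is stubbed).
  static double[] groomComputeUpdate(double[] globalWeights, double[][] localPartition) {
    double[] update = new double[globalWeights.length];
    for (double[] example : localPartition) {
      for (int i = 0; i < update.length; i++) {
        // placeholder for the real back-propagated gradient
        update[i] += 0.01 * example[i % example.length];
      }
    }
    return update;
  }

  // Master: merge the updates from all grooms (here: average them) and
  // produce the new global model that is redistributed to the grooms.
  static double[] masterMergeUpdates(double[] globalWeights, List<double[]> updates) {
    double[] merged = globalWeights.clone();
    for (double[] update : updates) {
      for (int i = 0; i < merged.length; i++) {
        merged[i] -= update[i] / updates.size();
      }
    }
    return merged;
  }

  public static void main(String[] args) {
    double[] weights = {0.1, 0.2, 0.3};
    double[][][] partitions = {
        {{1.0, 0.5, 0.2}},   // groom 1's data partition
        {{0.3, 0.9, 0.7}}    // groom 2's data partition
    };

    for (int iteration = 0; iteration < 5; iteration++) {
      List<double[]> updates = new ArrayList<>();
      for (double[][] partition : partitions) {        // update-weights phase
        updates.add(groomComputeUpdate(weights, partition));
      }
      weights = masterMergeUpdates(weights, updates);   // merge-update phase
    }
    System.out.println(Arrays.toString(weights));
  }
}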

How to use the Multilayer Perceptron in Hama?

The MLP can be used for both regression and classification. For both tasks, we first need to initialize the MLP model by specifying its parameters.
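The sketch below shows what such an initialization might look like. The class name SmallMultiLayerPerceptron, the constructor parameter order, the function name strings, and the train method are assumptions made for illustration; please check the Hama ML javadoc for the exact API of your version.

// Hypothetical initialization sketch; names and signatures are assumed.
import org.apache.hama.ml.perception.SmallMultiLayerPerceptron;  // assumed package/class

public class MlpUsageSketch {
  public static void main(String[] args) throws Exception {
    double learningRate = 0.1;     // step size for weight updates
    double regularization = 0.01;  // weight decay factor
    double momentum = 0.9;         // momentum for the updates
    int[] layerSizes = {4, 8, 1};  // input, hidden, and output layer sizes

    // Assumed constructor: learning rate, regularization, momentum,
    // squashing function name, cost function name, and layer sizes.
    SmallMultiLayerPerceptron mlp = new SmallMultiLayerPerceptron(
        learningRate, regularization, momentum,
        "Sigmoid", "SquaredError", layerSizes);

    // Training would then read the data from HDFS, e.g. (assumed method):
    // mlp.train(new Path("hdfs://..."), trainingParams);
  }
}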

Two-class learning problem

...