
Note: This page is always under construction.

What is a Multilayer Perceptron?

A multilayer perceptron (MLP) is a kind of feed-forward artificial neural network, a mathematical model inspired by biological neural networks. The multilayer perceptron can be used for various machine learning tasks such as classification and regression.

The basic component of a multilayer perceptron is the neuron. In a multilayer perceptron, the neurons are arranged in layers, and the neurons in any two adjacent layers are connected pairwise by weighted edges. A practical multilayer perceptron consists of at least three layers of neurons: one input layer, one or more hidden layers, and one output layer.

The sizes of the input and output layers determine what kind of data an MLP can accept. Specifically, the number of neurons in the input layer determines the dimension of the input features, and the number of neurons in the output layer determines the dimension of the output labels. Typically, two-class classification and regression problems require an output layer of size one, while a multi-class problem requires the output layer size to equal the number of classes. As for the hidden layers, the number of neurons is a design decision. If there are too few neurons, the model will not be able to learn complex decision boundaries. Conversely, too many neurons will reduce the generalization ability of the model.
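The sizing convention above can be sketched with layer size arrays in the same style Hama uses. The problem dimensions here (4 features, 3 classes, hidden sizes 6 and 8) are hypothetical, chosen only for illustration:

```java
// Illustrative layer-size arrays; the dimensions are hypothetical examples.
public class LayerSizing {
    // Two-class classification of 4-dimensional features:
    // 4 input neurons, one hidden layer of 6 neurons, a single output neuron.
    static int[] binaryLayers() {
        return new int[] { 4, 6, 1 };
    }

    // Multi-class classification: one output neuron per class.
    static int[] multiClassLayers(int numClasses) {
        return new int[] { 4, 8, numClasses };
    }

    public static void main(String[] args) {
        System.out.println(binaryLayers()[2]);      // prints 1
        System.out.println(multiClassLayers(3)[2]); // prints 3
    }
}
```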

Here is an example MLP with one input layer, one hidden layer, and one output layer:

https://docs.google.com/drawings/d/1DCsL5UiT6eqglZDaVS1Ur0uqQyNiXbZDAbDWtiSPWX8/pub?w=813&h=368

How does a Multilayer Perceptron work?

In general, a trained MLP is used by feeding the input features to the input layer and reading the result from the output layer. The results are calculated in a feed-forward manner, from the input layer to the output layer.

One step of feed-forward is illustrated in the figure below.

https://docs.google.com/drawings/d/1hJ2glrKKIWokQOy6RI8iw1T8TmuZFcbaCwnzGoKc8gk/pub?w=586&h=302

For each layer except the input layer, the value of each neuron is calculated as a linear combination of the values output by the neurons of the previous layer, where a weight determines the contribution of a neuron in the previous layer to the current neuron (as shown in equation (1)). After obtaining the linear combination result z, a non-linear squashing function is applied to constrain the output to a restricted range (as shown in equation (2)). Typically, the sigmoid or tanh function is used.

http://people.apache.org/~yxjiang/downloads/equ1.png

http://people.apache.org/~yxjiang/downloads/equ2.png

In each step of feed-forward, the calculated results are propagated one layer closer to the output layer. When the calculated results reach the output layer, the feed-forward procedure finishes and the neurons of the output layer contain the final results. More details about the feed-forward calculation can be found in the UFLDL tutorial [2].
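The per-neuron computation described above (equation (1) followed by equation (2)) can be sketched in a few lines of plain Java. This is a minimal illustration, not Hama's internal implementation; the weights and bias below are arbitrary example values:

```java
public class FeedForwardStep {
    // Sigmoid squashing function: constrains z into the range (0, 1).
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // Value of one neuron: linear combination of the previous layer's
    // outputs (equation (1)), then squashing (equation (2)).
    static double neuronOutput(double[] prevLayer, double[] weights, double bias) {
        double z = bias;
        for (int i = 0; i < prevLayer.length; i++) {
            z += weights[i] * prevLayer[i];
        }
        return sigmoid(z);
    }

    public static void main(String[] args) {
        double[] prevLayer = { 1.0, 0.0 };   // outputs of the previous layer
        double[] weights = { 0.5, -0.3 };    // edge weights into this neuron
        double bias = 0.1;
        // z = 0.1 + 0.5*1.0 + (-0.3)*0.0 = 0.6, then sigmoid(0.6) ≈ 0.6457
        System.out.printf("%.4f%n", neuronOutput(prevLayer, weights, bias));
    }
}
```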

How is the Multilayer Perceptron trained in Hama?

In general, the training data is stored in HDFS and distributed across multiple machines. In Hama, the current implementation (0.6.2 and later) allows the MLP to be trained in parallel. Two kinds of components are involved in the training procedure: the master task and the groom tasks. The master task is in charge of merging the model update information and sending it to all the groom tasks. The groom tasks are in charge of calculating the weight updates from the training data.

The training procedure is iterative, and each iteration consists of two phases: update weights and merge updates. In the update-weights phase, each groom task first updates its local model according to the message received from the master task. It then computes the weight updates locally on its assigned data partition and sends the updated weights to the master task. In the merge-updates phase, the master task updates the model according to the messages received from the groom tasks, then distributes the updated model to all groom tasks. The two phases alternate until the termination condition is met (e.g., a specified number of iterations is reached).
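The two-phase loop can be sketched as a single-process simulation. This is NOT the actual Hama implementation (which runs the master and groom tasks on separate machines and exchanges BSP messages, with grooms recomputing updates from their data partitions every iteration); the merge rule shown here (averaging fixed groom updates) is an illustrative assumption:

```java
import java.util.Arrays;

// Single-process sketch of the master's merge-updates phase; the groom
// updates are fixed example values rather than being recomputed per
// iteration as in real training.
public class TwoPhaseSketch {
    // Master side: average the grooms' updates, then apply the merged
    // update to the model once per iteration (and "redistribute" it).
    static double[] train(double[] model, double[][] groomUpdates, int iterations) {
        double[] merged = new double[model.length];
        for (double[] update : groomUpdates) {
            for (int i = 0; i < merged.length; i++) {
                merged[i] += update[i] / groomUpdates.length;
            }
        }
        double[] result = Arrays.copyOf(model, model.length);
        for (int it = 0; it < iterations; it++) {
            for (int i = 0; i < result.length; i++) {
                result[i] += merged[i];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        double[] model = { 0.2, -0.1 };           // global weights on the master
        double[][] updates = { { 0.05, 0.01 },    // update from groom task 1
                               { 0.03, -0.02 } }; // update from groom task 2
        System.out.println(Arrays.toString(train(model, updates, 3)));
    }
}
```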

How to use the Multilayer Perceptron in Hama?

The MLP can be used for both regression and classification. For both tasks, we first need to initialize the MLP model by specifying its parameters. The parameters are listed as follows:

model path: The path specifying the location where the model is stored.

learningRate: Controls how aggressively the model learns. A large learning rate can accelerate training, but may also cause oscillation. Typically in range (0, 1).

regularization: Controls the complexity of the model. A large regularization value keeps the weights between neurons small, which increases the generalization of the MLP but may reduce model precision. Typically in range (0, 0.1).

momentum: Controls the speed of training. A large momentum can accelerate training, but may also mislead the model update. Typically in range [0.5, 1).

squashing function: The activation function used by the MLP. Candidate squashing functions: sigmoid, tanh.

cost function: Evaluates the error made during training. Candidate cost functions: squared error, cross entropy (logistic).

layer size array: An array specifying the number of neurons (excluding bias neurons) in each layer (including the input and output layers).

The following is sample code for model initialization.

    String modelPath = "/tmp/xorModel-training-by-xor.data";
    double learningRate = 0.6;
    double regularization = 0.02; // small weight decay to improve generalization
    double momentum = 0.3; // mild momentum to speed up training
    String squashingFunctionName = "Tanh";
    String costFunctionName = "SquaredError";
    int[] layerSizeArray = new int[] { 2, 5, 1 }; // 2 inputs, 5 hidden neurons, 1 output
    SmallMultiLayerPerceptron mlp = new SmallMultiLayerPerceptron(learningRate,
        regularization, momentum, squashingFunctionName, costFunctionName,
        layerSizeArray);

Two class learning problem

To be added...

Example: XOR problem

To be added...

Multi class learning problem

To be added...

Example:

To be added...

Regression problem

To be added...

Example: Predict the sunspot activity

To be added...

Advanced Topics

To be added...

Parameter setting

To be added...

Reference

[1] Tom Mitchell. Machine Learning. McGraw-Hill, 1997.

[2] Stanford Unsupervised Feature Learning and Deep Learning tutorial. http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial.

[3] Jiawei Han and Micheline Kamber. Data Mining: Concepts and Techniques. The Morgan Kaufmann Series in Data Management Systems, 2011.

[4] Christopher M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
