Horn \[hɔ:n\] (in Korean, "Horn" means "Spirit") is a neuron-centric programming API and execution framework for large-scale deep learning, built on top of Apache Hama.
The goal of Horn is to provide a neuron-centric programming API that allows users to easily define the characteristics and structure of an artificial neural network model, together with an execution framework that leverages the heterogeneous resources of Hama and Hadoop YARN clusters.
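As a rough illustration of what a neuron-centric API might look like, consider the sketch below: the framework delivers weighted messages from connected neurons, and the user defines only per-neuron behavior. The `Neuron`, `Message`, and `forward` names are hypothetical assumptions for illustration, not the final Horn API.

```java
import java.util.List;

// Hypothetical sketch of a neuron-centric API. The framework would deliver
// messages from connected neurons; the user implements only the per-neuron
// forward computation. All names here are illustrative assumptions.
public class NeuronSketch {

    /** A weighted message arriving from a connected neuron. */
    static final class Message {
        final double input;
        final double weight;
        Message(double input, double weight) { this.input = input; this.weight = weight; }
    }

    /** User-defined neuron: computes its activation from incoming messages. */
    static abstract class Neuron {
        double output;
        abstract void forward(List<Message> messages);
    }

    /** Example: a sigmoid neuron summing its weighted inputs. */
    static final class SigmoidNeuron extends Neuron {
        @Override
        void forward(List<Message> messages) {
            double sum = 0.0;
            for (Message m : messages) {
                sum += m.input * m.weight;
            }
            output = 1.0 / (1.0 + Math.exp(-sum));
        }
    }

    public static void main(String[] args) {
        SigmoidNeuron n = new SigmoidNeuron();
        // Weighted sum = 1.0*0.5 + (-1.0)*0.5 = 0.0, so sigmoid(0) = 0.5
        n.forward(List.of(new Message(1.0, 0.5), new Message(-1.0, 0.5)));
        System.out.println(n.output); // prints 0.5
    }
}
```

The point of this style is that the user never writes layer- or matrix-level code; the framework decides how neurons are partitioned across machines.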
The initial ANN code was developed within the Apache Hama project in 2013 by a committer, Yexi Jiang (Facebook). The motivation behind this work is to build a framework that provides intuitive programming APIs, similar to Google's MapReduce or Pregel, and supports applications that need large models with huge memory consumption in a distributed way.
While many open source deep learning frameworks such as Caffe, DeepDist, DL4j, and NeuralGiraph still support only data or model parallelism, we aim to support both, along with a fault-tolerant system design. The basic idea of combined data and model parallelism is to use a remote parameter server to parallelize model creation and distribute training across machines, and the BSP framework of Apache Hama to perform asynchronous mini-batches. Within a single BSP job, each task group works asynchronously using region barrier synchronization instead of global barrier synchronization, and trains a large-scale neural network model on its assigned data set in the BSP paradigm. In this way we achieve both data and model parallelism. This architecture is inspired by Google's DistBelief (Jeff Dean et al., 2012).
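The parameter-server idea described above can be sketched in a single process: workers pull the current weights, compute a gradient on their own mini-batch, and push updates back without waiting on a global barrier. The `ParameterServer`, `pull`, and `push` names are illustrative assumptions, not Horn's actual API.

```java
import java.util.Arrays;

// Minimal single-process sketch of the remote parameter-server idea:
// task groups pull weights, compute local gradients, and push updates
// asynchronously. Names are illustrative, not Horn's actual API.
public class ParameterServerSketch {

    /** Central store of model weights, shared by all task groups. */
    static final class ParameterServer {
        private final double[] weights;
        ParameterServer(int dim) { weights = new double[dim]; }

        /** Return a snapshot of the current weights. */
        synchronized double[] pull() {
            return weights.clone();
        }

        /** Apply a worker's gradient with a fixed learning rate. */
        synchronized void push(double[] gradient, double learningRate) {
            for (int i = 0; i < weights.length; i++) {
                weights[i] -= learningRate * gradient[i];
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ParameterServer ps = new ParameterServer(2);

        // Two "task groups" push gradients asynchronously, no global barrier.
        Runnable worker = () -> {
            for (int step = 0; step < 100; step++) {
                double[] w = ps.pull();
                // Toy gradient of f(w) = ||w - 1||^2 / 2, i.e. grad = w - 1.
                double[] grad = { w[0] - 1.0, w[1] - 1.0 };
                ps.push(grad, 0.1);
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Both weights converge toward 1.0 despite stale gradients.
        System.out.println(Arrays.toString(ps.pull()));
    }
}
```

In the real system, each worker would be a BSP task group on a separate machine and `pull`/`push` would be remote calls, but the asynchronous update pattern is the same.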
Some current goals include:
The core developers understand what it means to have a process based on meritocracy. We will make continuous efforts to build an environment that supports this, encouraging community members to contribute.
A small community has formed around the Apache Hama project, universities, and companies, including a deep learning startup, an instant messaging service company, and a mobile device manufacturer. Many more people are interested in the large-scale deep learning platform itself. By bringing Horn into Apache, we believe the community will grow even bigger.
Edward J. Yoon, Thomas Jungblut, Jungin Lee, and Minho Kim
Apache Hama is already a core open source component at Samsung Electronics, and Horn will also be used by Samsung Electronics and Cldi Inc., so there is little direct risk of this project being orphaned.
Some of the initial committers are very new to open source, while others have experience using and/or working on Apache open source projects.
The initial committers are from different organizations such as Microsoft, Samsung Electronics, Seoul National University, Technical University of Munich, KAIST, LINE Plus, and Cldi Inc.
A few will work on the project as full-time open source developers; the other developers will contribute in their spare time.
Horn will hopefully benefit from Apache, both in terms of attracting a community and establishing a solid group of developers, and through its relationship with Apache Hadoop, ZooKeeper, and Hama. These are the main reasons for submitting this proposal.
The initial plan for Horn can be found at http://blog.udanax.org/2015/06/googles-distbelief-clone-project-on.html
The initial source code has been released as part of the Apache Hama project, developed under the Apache Software Foundation. The source code is currently hosted at https://svn.apache.org/repos/asf/hama/trunk/ml/src/main/java/org/apache/hama/ml/ann/
Not applicable.
The Apache Incubator