ProtocolBuffers is an open source project supporting Google's Protocol Buffers platform-neutral and language-neutral interprocess communication (IPC) and serialization framework. It has an Interface Definition Language (IDL) that is used to describe the wire and file formats; this IDL is pre-compiled into source code for the target languages (including Python, Java and C++), which is then used in the applications.
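As an illustration, a minimal IDL file might look like the following. The message and field names here are hypothetical, not taken from any Hadoop .proto file; the syntax is proto2, which is what the protobuf 2.x releases used by Hadoop 0.23+ expect:

```protobuf
// person.proto -- a hypothetical example message definition
message Person {
  required string name = 1;    // every field has a unique numbered tag
  optional int32 id = 2;       // optional fields may be absent on the wire
  repeated string email = 3;   // repeated fields act as lists
}
```

Running `protoc --java_out=. person.proto` would generate the Java classes for this message; `--python_out` and `--cpp_out` do the same for Python and C++.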

Hadoop 0.23+ requires the protocol buffers JAR (protobufs.jar) to be on the classpath of both clients and servers; the native binaries are required to compile this and later versions of Hadoop.

In comparison with previous IDL-based systems (such as CORBA, DCOM and SunOS RPC), ProtocolBuffers takes a narrower approach: it concentrates on compact, versionable serialization and simple RPC stubs rather than a full distributed-object model.

Its closest equivalent formats are Apache Thrift and the Etch protocol (then in Apache incubation).

The protocol is significantly different from the Web Services WS-* stack, which was criticised by Steve Loughran and Edmund Smith in Rethinking the Java SOAP Stack and RPC under fire. Their argument is that the WS-* language for describing data, XML Schema, is not completely mappable to the object-oriented model of today's languages, yet the WS-* stacks attempt to do so seamlessly, even across languages. Loughran and Smith regard such an O/X mapping as being as unsolvable as a perfect O/R mapping, and hence doomed. Instead, SOAP stacks should embrace the XML nature of documents and use mechanisms such as XPath to work with the XML content directly. No widely used SOAP stack does this; WS-* developers appear to prefer to write implementation-first code in which the datatypes are written in their native language, the interface specification is reverse-engineered from this, and then everyone hopes that this specification will be convertible into usable datatypes in other languages, and stable across protocol versions.

ProtocolBuffers and Thrift both require the IDL to be specified first, and have a code generation stage that generates language-specific code from it. Version support is explicitly handled: every field carries a numbered tag, so a reader can skip fields it does not recognise, and new optional fields can be added without breaking older readers.
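A sketch of how this versioning works in practice, using a hypothetical message (not from Hadoop's own .proto files):

```protobuf
// A hypothetical message, showing how a field added in a later
// version coexists with older readers and writers.
message Job {
  required string name = 1;     // present since version 1
  optional int32 priority = 2;  // added in version 2: version-1 readers
                                // skip the unknown tag, and version-1
                                // data simply leaves the field unset
}
```

The tag numbers, not the field names, identify fields on the wire, which is why tags must never be reused or renumbered once a message is in use.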

One criticism of both ProtocolBuffers and Thrift is that the content is not self-describing; it is expected that the reader has compile-time expectations for the specific datatypes and interfaces, though possibly different versions. Apache Avro does include in-content type declarations and runtime parsing, which is why some organizations using Hadoop consider it a significantly better format for persistent data: it becomes possible to parse files without advance knowledge of their structure.

Installation Guide

Hadoop 0.23+ must have Google's ProtocolBuffers for compilation to work. These are native binaries which need to be downloaded, compiled and then installed locally. See BUILDING.txt.

This is a good opportunity to get the GNU C/C++ toolchain installed, which is useful for working on the native code used in the HDFS project.

To install and use ProtocolBuffers:


Install your platform's protobuf packages, provided they are current enough (see the README file for the required version). If they are too old, uninstall any version you have and follow the instructions below.

Local build and installation
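A source build typically follows the standard autotools sequence. The release version below is illustrative only; check BUILDING.txt for the version Hadoop actually requires:

```shell
# Download and unpack a protobuf source release
# (the version number here is an assumption, not a requirement)
tar xzf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0

# Configure, compile and install the protoc compiler
# and the native libraries
./configure
make
sudo make install

# Refresh the shared-library cache so the new libraries are found
sudo ldconfig
```

This requires the GNU C/C++ toolchain mentioned above to be installed first.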

Testing your Protocol Buffers installation

The test for this is verifying that protoc is on the command line path. You should expect something like

$ protoc
Missing input file.

You may see the error message

$ protoc
protoc: error while loading shared libraries: cannot open shared object file: No such file or directory

This is a known issue for Linux, and is caused by a stale cache of libraries. Run ldconfig and try again.

ProtocolBuffers (last edited 2014-09-04 21:15:00 by ArpitAgarwal)