Apache Hadoop is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications with both reliability and data motion. Hadoop implements a computational paradigm named MapReduce, in which an application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system (HDFS) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both MapReduce and the Hadoop Distributed File System are designed so that node failures are handled automatically by the framework.
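The divide-and-conquer data flow described above can be sketched in plain Python. This is a conceptual illustration only: real Hadoop jobs are typically written against the Java MapReduce API (or via Streaming, Pig, or Hive), and the function names below are hypothetical.

```python
from collections import defaultdict

def map_phase(fragment):
    """Map: emit a (word, 1) pair for each word in one input fragment."""
    for word in fragment.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: sum the counts emitted for each distinct word."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

# On a cluster, each fragment could be mapped on a different node (and
# re-executed elsewhere on failure); here we run them sequentially just
# to show how the work splits into independent pieces.
fragments = ["the quick brown fox", "the lazy dog"]
all_pairs = [pair for frag in fragments for pair in map_phase(frag)]
word_counts = reduce_phase(all_pairs)
```

Because each `map_phase` call depends only on its own fragment, the framework is free to schedule, retry, or re-execute it on any node, which is what makes the paradigm fault-tolerant.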
Official Apache Hadoop Website: downloads, bug tracking, mailing lists, etc.
Overview of Apache Hadoop
FAQ: Frequently Asked Questions.
Distributions and Commercial Support for Hadoop (RPMs, Debs, AMIs, etc.)
PoweredBy, a growing list of sites and applications powered by Apache Hadoop
HBase, a Bigtable-like structured storage system for Hadoop HDFS
Apache Pig is a high-level data-flow language and execution framework for parallel computation. It is built on top of Hadoop Core.
Hive, a data warehouse infrastructure that allows SQL-like ad hoc querying of data (in any format) stored in Hadoop
ZooKeeper is a high-performance coordination service for distributed applications.
Hama, a distributed computing framework similar to Google's Pregel, based on BSP (Bulk Synchronous Parallel) techniques for massive scientific computations.
Mahout, scalable Machine Learning algorithms using Hadoop
GettingStartedWithHadoop (lots of details and explanation)
QuickStart (for those who just want it to work now)
Command Line Options for the Hadoop shell scripts.
Troubleshooting: what to do when things go wrong
Setting up a Hadoop Cluster
HowToConfigure Hadoop software
Performance: getting extra throughput
- Virtual Clusters, including Amazon AWS
Running Hadoop On Ubuntu Linux (Single-Node Cluster), a tutorial by Michael Noll on installing, configuring, and running Hadoop on a single Ubuntu Linux machine.
Running Hadoop On Ubuntu Linux (Multi-Node Cluster), a tutorial by Michael Noll on how to set up a multi-node Hadoop cluster.
Hadoop Windows/Eclipse Tutorial: How to develop Hadoop with Eclipse on Windows.
The MapReduce algorithm is the foundational algorithm of Hadoop, and is critical to understand.
Contributed parts of the Hadoop codebase
- These are independent modules that are in the Hadoop codebase but are not tightly integrated with the main project yet.
HadoopStreaming (Useful for using Hadoop with other programming languages)
DistributedLucene, a proposal for a distributed Lucene index in Hadoop
MountableHDFS: Fuse-DFS and other tools to mount HDFS as a standard filesystem on Linux (and some other Unix OSs)
HDFS-APIs in Perl, Python, PHP and other languages.
Chukwa, a data collection, storage, and analysis framework
HDFS-RAID, erasure coding in HDFS
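To make the HadoopStreaming entry above concrete, here is a sketch of a streaming-style word count in Python. Streaming passes records as tab-separated key/value lines over stdin/stdout, and the framework sorts the mapper's output by key before the reducer sees it; the local "shuffle" below merely simulates that behaviour, and the function names are illustrative.

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit one 'word<TAB>1' line per word."""
    for line in lines:
        for word in line.split():
            yield f"{word.lower()}\t1"

def reducer(lines):
    """Reduce phase: input arrives sorted by key, so counts for the
    same word are adjacent and can be summed with groupby."""
    pairs = (line.split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(n) for _, n in group)}"

# Simulate the streaming pipeline locally: mapper | sort | reducer.
sample = ["the quick brown fox", "the lazy dog"]
shuffled = sorted(mapper(sample))   # Hadoop's shuffle sorts by key
counts = dict(line.split("\t") for line in reducer(shuffled))
```

On a real cluster the two phases would be submitted with the streaming jar, roughly `hadoop jar hadoop-streaming.jar -input <in> -output <out> -mapper <map script> -reducer <reduce script>`; the jar path and script names vary by installation.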
Roadmap, listing release plans.
Jira usage guidelines
Nutch Hadoop Tutorial (Useful for understanding Hadoop in an application context)
IBM MapReduce Tools for Eclipse - out of date; use the Eclipse Plugin in the MapReduce/Contrib instead
- The Hadoop IRC channel is #hadoop at irc.freenode.net.
Using Spring and Hadoop (Discussion of possibilities to use Hadoop and Dependency Injection with Spring)
Univa Grid Engine Integration A blog post about the integration of Hadoop with the Grid Engine successor Univa Grid Engine
Hadoop Grid Engine Integration Open Grid Scheduler/Grid Engine Hadoop integration setup instructions.
Hadoop Tutorial Series Learning progressively important core Hadoop concepts with hands-on experiments using the Cloudera Virtual Machine
Dumbo, a project that allows you to easily write and run Hadoop programs in Python.
A new Hadoop connector that enables ultra-fast transfer of data between the Hadoop distributed file system and Aster Data's MPP data warehouse.
HDFS Architecture Documentation An overview of the HDFS architecture, intended for contributors.