...
- All public classes and methods should have informative Javadoc comments.
- Do not use @author tags.
- Code should be formatted according to Sun's conventions. We use four spaces (not tabs) for indentation.
- Contributions should pass unit tests.
- New unit tests should be provided to demonstrate bugs and fixes. JUnit is our test framework:
- You must implement a class that extends junit.framework.TestCase and whose class name contains Test.
- If an HDFS cluster and/or a MapReduce cluster is needed by your test, add a field of type MiniGenericCluster to the class and initialize it with a statement like the following (the name of the field is not important). TestAlgebraicEval.java is an example of a test that uses a cluster. The test will then run on a cluster created on the local machine.

MiniGenericCluster cluster = MiniGenericCluster.buildCluster();
- Define methods within your class, annotate them with @Test, and call JUnit's assert methods to verify conditions; these methods will be executed when you run ant test.
- Place your class in the test tree.
- You can then run the core unit tests with the command ant test-commit. Similarly, you can run a specific unit test with the command ant test -Dtestcase=<ClassName> (for example, ant test -Dtestcase=TestPigFile).
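The steps above can be sketched as a minimal test class. This is an illustrative example, not part of the Pig codebase: the class and method names are hypothetical, and a real test would exercise Pig classes (and, if needed, a MiniGenericCluster field) rather than the tiny helper shown here.

```java
import junit.framework.TestCase;
import org.junit.Test;

// Hypothetical example: the class name contains "Test" so the build
// picks it up. A real Pig test would call into the Pig code under test.
public class StringHelperTest extends TestCase {

    // A tiny stand-in for the code under test.
    static String repeat(String s, int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(s);
        }
        return sb.toString();
    }

    @Test
    public void testRepeat() {
        // JUnit's assert methods verify conditions when "ant test" runs.
        assertEquals("ababab", repeat("ab", 3));
        assertEquals("", repeat("x", 0));
    }
}
```

Placed under the test tree, such a class can then be run with ant test -Dtestcase=StringHelperTest.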
...
Make sure that your code introduces no new warnings into the javac compilation.
To compile with Hadoop 1.x:

> ant clean jar
To compile with Hadoop 2.x:

> ant clean jar -Dhadoopversion=23
The hadoopversion setting has two values, 20 and 23. -Dhadoopversion=20, the default, denotes the Hadoop 0.20.x and 1.x releases: the old versions built around a single JobTracker. -Dhadoopversion=23 denotes the Hadoop 0.23.x and Hadoop 2.x releases: the next-generation versions based on YARN, which have a separate ResourceManager and ApplicationMasters instead of a single JobTracker that managed both resources (CPU, memory) and the running of MapReduce applications. The exact Hadoop 1.x or 2.x versions Pig compiles against are configured in ivy/libraries.properties and are usually updated to compile against the latest stable releases.
Please note that earlier versions of Pig supported older Hadoop versions too, and there was an option to select a certain Hadoop version at build time. If you would like to contribute to an older release branch (0.16.0 or below), you will have to set the hadoopversion property: -Dhadoopversion=20 (the default) for the Hadoop 0.20.x and 1.x releases, or -Dhadoopversion=23 for the Hadoop 0.23.x and 2.x releases.
Unit Tests
The full suite of Pig unit tests contains a huge number of tests, and it can be run against multiple Hadoop versions (Hadoop 1.x and Hadoop 2.x) and multiple execution modes: mapreduce (the default), tez, and spark. Since a full run takes a very long time, you are not expected to run the entire suite before submitting a patch. Instead, run and verify the test classes affected by your patch, and also run test-commit, which runs a core set of tests in about 20 minutes. If the fix is specific to a particular execution mode (for example, tez or spark), run the tests with that exectype. The Pig commit build (https://builds.apache.org/job/Pig-trunk-commit), which runs daily, will report any additional failures on the committed patch, and a new patch fixing those failures can be submitted later. Some of the different test goals are:
- test - full suite of unit tests in mapreduce mode
- test-tez - full suite of unit tests in tez mode
- test-commit - core set of tests in mapreduce mode
In the examples below, remove -Dhadoopversion=23 to run the tests with Hadoop 1.x instead of Hadoop 2.x. The tez and spark execution modes are only applicable with Hadoop 2.x.
To run the full suite of testcases in mapreduce mode with Hadoop 2.x (usually you don't have to run this unless you are making major changes):

> ant clean test -Dhadoopversion=23
To run the full suite of testcases in tez mode with Hadoop 2.x, use the shortcut below, which takes care of adding -Dhadoopversion=23 and -Dexectype=tez. Usually you don't have to run this unless you are making major changes.
...
> ant clean test -Dtestcase=TestEvalPipeline -Dhadoopversion=23
To run a single testcase with Hadoop 2.x and tez as the execution engine:

> ant clean test -Dtestcase=TestEvalPipeline2 -Dhadoopversion=23 -Dexectype=tez
To run the core set of unit tests, follow the steps below. Please make sure that all the core unit tests and the tests you wrote succeed before constructing your patch.

> cd trunk
> ant -Djavac.args="-Xlint -Xmaxwarns 1000" clean test-commit -Dhadoopversion=23
This should run in around 20 minutes.
...