Github Setup and Pull Requests (PRs)

There are several ways to set up Git for committers and contributors. Contributors can safely set up Git any way they choose, but committers should take extra care, since they can push new commits to trunk at Apache, and various policies there make backing out mistakes problematic. To keep the commit history clean, note the use of --squash below when merging into apache/trunk.

Git setup for Committers

This describes a setup with one local repo and two remotes. It allows you to push the code on your machine to either your Github repo or to git-wip-us.apache.org. You will want to fork Github's apache/hadoop to your own account on Github; this will enable you to open Pull Requests of your own. Cloning this fork locally sets up "origin" to point to your remote fork on Github as the default remote, so git push origin trunk will push to Github.

To attach to the apache git repo do the following:

git remote add apache https://git-wip-us.apache.org/repos/asf/hadoop.git

To check your remote setup:

git remote -v

you should see something like this:

origin    https://github.com/your-github-id/hadoop.git (fetch)
origin    https://github.com/your-github-id/hadoop.git (push)
apache    https://git-wip-us.apache.org/repos/asf/hadoop.git (fetch)
apache    https://git-wip-us.apache.org/repos/asf/hadoop.git (push)

Now if you want to experiment with a branch, everything, by default, points to your Github account, because origin is the default remote. You can work as normal using only Github until you are ready to merge with the apache remote. Some conventions below integrate with Apache Jira ticket numbers.

git checkout -b feature/hadoop-xxxx #xxxx typically is a Jira ticket number
#do some work on the branch
git commit -a -m "doing some work"
git push origin feature/hadoop-xxxx # notice pushing to **origin** not **apache**

Once you are ready to commit to the apache remote, you can merge and push directly, or, better yet, create a PR.

We recommend creating new branches under feature/ to help group ongoing work, especially now that, as of November 2015, forced updates are disabled on ASF branches. We hope to reinstate that ability on feature branches to aid development.

How to create a PR (committers)

Push your branch to Github:

git checkout feature/hadoop-xxxx
git rebase apache/trunk # to make it apply to the current trunk
git push origin feature/hadoop-xxxx
  1. Go to your feature/hadoop-xxxx branch on Github. Since you forked from Github's apache/hadoop, any PR will default to target apache/trunk.

  2. Click the green "Compare, review, and create pull request" button.
  3. You can edit the "to" and "from" for the PR if they aren't correct. The "base fork" should be apache/hadoop unless you are collaborating separately with one of the committers on the list. The "base" will be trunk. Don't submit a PR to one of the other branches unless you know what you are doing. The "head fork" will be your forked repo and the "compare" will be your feature/hadoop-xxxx branch.

  4. Click the "Create pull request" button and name the request "HADOOP-XXXX" all caps. This will connect the comments of the PR to the mailing list and Jira comments.

From now on the PR lives in Github's apache/hadoop repository; use the commenting UI there.

If you are looking for a review, or want to share the branch with someone else, say so in the comments, but don't worry about automated merging of your PR; you will do the merge yourself later. The PR is tied to your branch, so you can respond to comments, make fixes, and commit them from your local repo. They will appear on the PR page and be mirrored to Jira and the mailing list. When you are satisfied and want to push the changes to Apache's remote repo, proceed with Merging a PR below.
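The review-response cycle can be sketched as follows; the branch name is the example used above, and the commit message is hypothetical:

```shell
# address review comments on the same branch that backs the PR
git checkout feature/hadoop-xxxx
# ...edit files to address the comments...
git commit -a -m "HADOOP-XXXX. Address review comments"
git push origin feature/hadoop-xxxx   # the new commit appears on the PR automatically
```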

How to create a PR (contributors)

Create pull requests: GitHub PR docs.

Pull requests are made against the apache/hadoop repository on Github. In the Github UI you should pick the trunk branch as the target of the PR, as described for committers. The PR will be reviewed and commented on, so the merge is not automatic. PRs can also be used to discuss a contribution in progress.

Merging a PR (yours or contributors)

Start with reading GitHub PR merging locally. Remember that a pull request is equivalent to a remote Github branch, potentially with a multitude of commits. It is recommended to squash the remote commit history so there is one commit per issue, rather than merging in a multitude of contributor commits. To do that, and close the PR at the same time, use a squash commit. Merging a pull request is equivalent to a "pull" of the contributor's branch:

git checkout trunk      # switch to local trunk branch
git pull apache trunk   # fast-forward to current remote HEAD
git pull --squash https://github.com/cuser/hadoop cbranch  # merge to trunk

The --squash option ensures that all the PR history is squashed into a single commit, and lets the committer write their own commit message. Read the git help for merge or pull for more information about the --squash option. In this example we assume that the contributor's Github handle is "cuser" and the PR branch name is "cbranch". Next, resolve conflicts, if any, or ask the contributor to rebase on top of trunk if the PR has gone out of sync.
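The contributor-side rebase for an out-of-sync PR can be sketched as follows, using the same remote and branch names as above; --force-with-lease is a safer alternative to a plain force push, since it refuses to overwrite commits you haven't seen:

```shell
git fetch apache                             # pick up the latest trunk
git checkout cbranch
git rebase apache/trunk                      # replay the PR commits on top of current trunk
git push --force-with-lease origin cbranch   # update the branch behind the PR
```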

If you are ready to merge your own (committer's) PR, you probably only need to merge (not pull), since you already have a local copy of the branch you used to create the PR:

git checkout trunk      # switch to local trunk branch
git pull apache trunk   # fast-forward to current remote HEAD
git merge --squash feature/hadoop-xxxx

Remember to run the regular patch checks, build with tests enabled, and update CHANGES.TXT for the appropriate part of the project.

If everything is fine, you can now commit the squashed request, along the lines of:

git commit -a -m "HADOOP-XXXX description (cuser via your-apache-id) closes apache/hadoop#ZZ"

HADOOP-XXXX is in all caps, and ZZ is the pull request number in the apache/hadoop repository. Including "closes apache/hadoop#ZZ" will close the PR automatically. More information is found in the GitHub PR closing docs. Next, push to git-wip-us.apache.org:

git push apache trunk

(this will require Apache handle credentials).

The PR, once pushed, will get mirrored to Github. To update your personal Github fork, push there too:

git push origin trunk

A note on squashing: since squashing discards the remote branch's history, repeated PRs from the same remote branch are difficult to merge. This workflow implies that every new PR starts from a new, freshly rebased branch. This is more important for contributors to know than for committers, because Github warns up front if a new PR is not mergeable. In any case, watch for duplicate PRs based on the same source branch; this is bad practice.
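Starting a new PR from a freshly rebased branch can be sketched as follows (the branch name here is hypothetical):

```shell
git fetch apache                                  # get the current trunk
git checkout -b feature/hadoop-yyyy apache/trunk  # one fresh branch per issue, based at trunk HEAD
```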

Closing a PR without committing (for committers)

When we want to reject a PR (close it without committing), we can just issue an empty commit on the trunk HEAD without merging the PR:

git commit --allow-empty -m "closes apache/hadoop#ZZ *Won't fix*"
git push apache trunk

That will close PR ZZ on the Github mirror without merging it or making any code modifications in the master repository.

Apache/github integration features

Read the infra blog. Comments and PRs with Hadoop issue handles should be posted to the mailing lists and Jira. Hadoop issue handles must be in the form HADOOP-YYYY (all capitals). Usually it makes sense to file a JIRA issue first, and then create a PR with the description

HADOOP-YYYY: <jira-issue-description>

In this case all subsequent comments will automatically be copied to JIRA without having to mention the JIRA issue explicitly in each comment of the PR.

Best Practices

Avoiding accidentally committing private branches to the ASF repo

It's dangerously easy, especially when using IDEs, to accidentally commit changes to the ASF repo, be it directly to trunk, branch-2, or another standard branch on which you are developing, or to a private branch you had intended to keep on Github (or in a private repo).

Committers can avoid this by setting up the directory in which they develop code with read-only access to the ASF repository on Github, without the apache remote added. A separate directory should be set up with write access to the ASF repository as well as read access to your other repositories. Merging operations and pushes back to the ASF repo are done from this second directory, so they are isolated from all local development.
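One way to sketch that layout, with directory names that are purely illustrative; only the merge clone ever gets the apache remote:

```shell
# development clone: origin only, so nothing can be pushed to the ASF repo from here
git clone https://github.com/your-github-id/hadoop.git hadoop-dev

# merge clone: the only place where the apache remote exists
git clone https://github.com/your-github-id/hadoop.git hadoop-merge
cd hadoop-merge
git remote add apache https://git-wip-us.apache.org/repos/asf/hadoop.git
```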

If you accidentally commit a patch to an ASF branch, do not attempt to roll back the branch and force out a new update. Simply commit and push out a new patch reverting the change.
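Reverting an accidental commit can be sketched as follows, assuming the mistake is the current HEAD of trunk:

```shell
git checkout trunk
git revert --no-edit HEAD   # new commit that undoes the accidental change
git push apache trunk       # history moves forward; nothing is rewritten
```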

If you do accidentally commit a branch to the ASF repo, the infrastructure team can delete it —but they cannot stop it propagating to github and potentially being visible. Try not to do that.

Avoiding accidentally committing private keys to Amazon AWS, Microsoft Azure or other cloud infrastructures

All the cloud integration projects under hadoop-tools expect a resource file, resources/auth-keys.xml, to contain the credentials for authenticating with cloud infrastructures. These files are explicitly excluded from git through entries in .gitignore. To avoid running up large bills and/or exposing private data, it is critical to keep your credentials secret.
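A quick way to confirm a credentials file really is covered by a .gitignore entry before it can ever be staged; the path here is illustrative:

```shell
# exits 0 and prints the matching .gitignore rule if the file is ignored
git check-ignore -v src/test/resources/auth-keys.xml
```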

For maximum security here, clone your hadoop repository into a separate directory for cloud tests, one with read-only access. Create the auth-keys.xml files there. This guarantees that you cannot commit the credentials, albeit with a somewhat more complex workflow, as patches must be pushed to a git repository before being pulled into the cloud-enabled directory and tested.

Accidentally committing secret credentials can be very expensive. You will not only need to revoke your keys, you will need to kill all the bitcoin-mining machines created in every EC2 region, and cancel all outstanding spot-price bids for them.

GithubIntegration (last edited 2015-11-10 14:10:35 by SteveLoughran)