Google Summer of Code 2016

Project Proposal [TAJO-2046]

Support Kudu as one of Tajo’s storage types

 


Table of Contents

  1. Background

  2. Deliverables

  3. Implementation

  4. Timeline

  5. Community Engagement

  6. Further Development

  7. Other Commitments

  8. About Me

Background

       As the amount of data has grown dramatically, storing and analyzing massive data has become an important issue for hundreds of thousands of enterprises. For this reason, many distributed systems such as Apache Hadoop, Apache HBase, and Apache Cassandra have been introduced. Hadoop guarantees high-throughput sequential access, but is weak at updating individual records and at efficient random access. On the other hand, systems like HBase and Cassandra are good at low-latency record-level reads and writes, but weak at sequential-read throughput. Apache Kudu is a new storage system designed to fill the gap between these two kinds of systems: it aims for both high-throughput sequential access and low-latency random access.

       This document proposes adding Kudu support to Tajo as a new storage type.

Deliverables

  1. A new submodule that supports Kudu as one of Tajo’s storage types

  2. Documentation on how to establish a connection between Tajo and Kudu

  3. A user guide: how to integrate Kudu with Tajo

  4. Unit tests and results

Implementation


       The image above shows how the storage module connects to external storage. The StorageManager resides inside the Tajo worker and connects to external storage systems to fetch data.
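As a rough sketch of the structure, the submodule would supply three cooperating pieces: a tablespace that generates splits, a fragment that describes one split, and a scanner/appender pair that reads and writes it. The class and interface shapes below are hypothetical simplifications, not Tajo’s actual SPI (the real Tablespace, Scanner, Appender, and Fragment classes differ in detail):

```java
import java.util.List;

// Hypothetical, simplified mirror of Tajo's storage SPI, for illustration only.
public class KuduStorageSketch {

    // A fragment names the slice of a Kudu table that one task will read.
    static class KuduFragment {
        final String tableName;
        final byte[] startKey;
        final byte[] endKey;  // tablet primary-key range [startKey, endKey)

        KuduFragment(String tableName, byte[] startKey, byte[] endKey) {
            this.tableName = tableName;
            this.startKey = startKey;
            this.endKey = endKey;
        }
    }

    // The tablespace turns a table into fragments for distributed scans.
    interface KuduTablespace {
        List<KuduFragment> getSplits(String tableName);
    }

    // The scanner reads the rows of one fragment, restricted to the
    // projected columns (projection push-down).
    interface KuduScanner {
        void init(KuduFragment fragment, List<String> projectedColumns);
        Object[] next();  // null when the fragment is exhausted
    }

    // The appender writes rows produced by Tajo back into the Kudu table.
    interface KuduAppender {
        void addRow(Object[] row);
        void flush();
    }
}
```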

       The following issues must be addressed when implementing the submodule that connects Tajo and Kudu:


  • Implement KuduScanner and KuduAppender

    • Split read

      • Decide how to read only the part of the data specified in a given fragment.

    • Type conversion

      • Kudu’s data types and Tajo’s internal representation must be kept compatible.

    • Projection push down

      • Tajo should read only the columns a query actually needs.

  • Implement KuduTableSpace

    • Split generation

      • Decide a rule for dividing the data into splits for distributed processing.

  • Implement KuduFragment

    • Carry the information about which part of the data each task will process.
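To make the type-compatibility point concrete, a first-cut mapping between Kudu column types and Tajo data types might look like the following. The type names are assumptions based on Kudu’s Type enum and Tajo’s TajoDataTypes, and must be verified against both APIs during implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class KuduTypeMapping {
    // Tentative Kudu -> Tajo type mapping (names to be verified).
    static final Map<String, String> KUDU_TO_TAJO = new HashMap<>();
    static {
        KUDU_TO_TAJO.put("INT8",      "INT1");
        KUDU_TO_TAJO.put("INT16",     "INT2");
        KUDU_TO_TAJO.put("INT32",     "INT4");
        KUDU_TO_TAJO.put("INT64",     "INT8");
        KUDU_TO_TAJO.put("FLOAT",     "FLOAT4");
        KUDU_TO_TAJO.put("DOUBLE",    "FLOAT8");
        KUDU_TO_TAJO.put("BOOL",      "BOOLEAN");
        KUDU_TO_TAJO.put("STRING",    "TEXT");
        KUDU_TO_TAJO.put("BINARY",    "BLOB");
        KUDU_TO_TAJO.put("TIMESTAMP", "TIMESTAMP");
    }

    // Converts a Kudu type name to a Tajo type name, failing loudly on
    // types that have no counterpart yet.
    static String toTajoType(String kuduType) {
        String tajoType = KUDU_TO_TAJO.get(kuduType);
        if (tajoType == null) {
            throw new IllegalArgumentException("Unsupported Kudu type: " + kuduType);
        }
        return tajoType;
    }
}
```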
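For split generation, one plausible rule is one fragment per Kudu tablet, so that each Tajo task scans exactly one tablet’s primary-key range. The sketch below fakes the tablet boundaries as plain strings; the real module would obtain them from the Kudu client’s table-location metadata:

```java
import java.util.ArrayList;
import java.util.List;

public class KuduSplitSketch {
    // Given the sorted tablet boundary keys of a table, produce one
    // {table, startKey, endKey} triple per tablet, i.e. one fragment
    // per tablet, covering [startKey, endKey) in primary-key order.
    static List<String[]> generateSplits(String table, List<String> tabletBoundaries) {
        List<String[]> fragments = new ArrayList<>();
        for (int i = 0; i + 1 < tabletBoundaries.size(); i++) {
            fragments.add(new String[] {
                table, tabletBoundaries.get(i), tabletBoundaries.get(i + 1)
            });
        }
        return fragments;
    }
}
```

A one-fragment-per-tablet rule keeps reads local to a single tablet server; whether fragments should instead be merged or subdivided for load balancing is a design question to settle with the community.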

Timeline

  • April 22 - May 22

    • Get in touch with the Tajo and the Kudu communities.

    • Analyze other storage modules like tajo-storage-hbase or tajo-storage-hdfs.

    • Analyze Kudu architecture.

    • Draft the architecture.

  • May 22 - June 22

    • Confirm the architecture.

    • Start to implement the actual code.

    • Implement KuduFragment

    • Implement KuduScanner/KuduAppender

  • June 23 - July 30

    • Implement KuduScanner/KuduAppender

    • Implement KuduTableSpace 

    • Unit Tests

  • July 30 - August 15

    • Fix minor bugs

    • Write documentation.

Community Engagement

  1. Engineers working on Apache Tajo communicate via the mailing list (dev@tajo.apache.org).

    1. I can ask for opinions and get tips there.

  2. The project’s issues, including bugs and new features, are tracked on JIRA (http://issues.apache.org/jira/browse/TAJO).

  3. The actual code is managed in a GitHub repository (http://github.com/apache/tajo).

    1. When a contributor uploads code and sends a pull request, the committers review it and decide whether to merge it.

    2. If the code is rejected for some reason, the committers leave a comment explaining why.

Further Development

       When the development of the submodule is done, I will not stop contributing to the Tajo project. With the experience I gain from this program, I will keep developing other similar modules, such as a MySQL connector. Of course, I will also keep an eye on the Kudu module and fix any bugs that appear in the future.

Other commitments

       Nothing special. I can focus on this project every day.

About Me

Lim, Byunghoon

Email  : seian.hoon@gmail.com

Computer Engineering

Kyunghee University

South Korea


       I am Byunghoon Lim, an undergraduate student majoring in Computer Engineering at Kyunghee University in South Korea. My main interests lie in distributed computation and data analysis with machine learning. I am familiar with C++, Java, and Python. I have completed the following projects: a cosine-similarity-based item recommendation system using MongoDB and AWS EMR, and a movie recommendation system on the Hadoop ecosystem.

       Participating in the Apache Tajo project by implementing this issue would be an honor for me.
