Based on community interest and user feedback, the roadmap is summarized in seven categories (see the discussion thread on the mailing list).

0.11.0 - Prepare for 1.0.0

  • JUnit 5
  • Spark 3.4.0
  • Minimize default interpreters
    • Exclude the Shell interpreter by default to minimize security issues
    • Download interpreters easily
      • Could use GitHub releases for individual interpreters (see the sketch after this list)
  • Fluent Docker support
  • JDK 11
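
As a rough illustration of "Download interpreters easily", here is a minimal sketch, in Python, of fetching a single interpreter bundle from a GitHub release and unpacking it into Zeppelin's interpreter/ directory. The artifact name and URL are hypothetical placeholders; the actual release layout is still to be decided.

    import tarfile
    import urllib.request
    from pathlib import Path

    # Hypothetical per-interpreter release artifact; the real naming scheme may differ.
    url = ("https://github.com/apache/zeppelin/releases/download/"
           "v0.11.0/zeppelin-interpreter-shell-0.11.0.tgz")
    target = Path("interpreter")  # Zeppelin's interpreter directory

    # Download the archive to a temporary file, then unpack it in place.
    archive, _ = urllib.request.urlretrieve(url)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(target)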

OUTDATED

  • Enterprise-ready
    • Authentication 
    • Authorization 
      • Notebook authorization PR-681
    • Security
    • Multi-tenancy
      • User impersonation
    • Stability
      • Better memory management for notebooks
    • Job scheduler
    • High availability and Disaster Recovery
    • Monitoring (exposing JMX metrics)
  • Usability Improvement
    • UX improvement
      • Folder structure for notebooks
    • Better Table data support
      • Download data as CSV, etc. (PR-725, PR-714, PR-6, PR-89)
      • Featureful table data display (pagination, filtering, sorting, etc.)
  • Pluggability ZEPPELIN-533
    • Pluggable visualization
    • Dynamic loading of interpreters, notebooks, and visualizations
    • Repository and registry for pluggable components
  • Improve documentation
    • Improve contents and readability
    • More tutorials and examples
  • Interpreter
    • Generic JDBC Interpreter
    • (spark)R Interpreter
    • Cluster manager for interpreter (Proposal)
    • More interpreters
    • Developer support (including easier debugging)
  • Notebook storage
    • Versioning ZEPPELIN-540
    • More notebook storage backends (GitHub push/pull)
  • Visualization

0.7.0 - Enterprise-ready

  • Enterprise support
    • Multi-user support (ZEPPELIN-1337)
      • Impersonation
    • Job management
    • Monitoring support (e.g. JMX)
  • Interpreter
    • Improve the JDBC and Python interpreters
    • New Interpreters
  • Front-end performance improvements
  • Pluggable visualization


0.6.0 - Next major release
  1. Job management
    • A new 'Job' menu that displays all job statuses and job histories, and shows scheduled job information at a glance.
  2. Better Python support
    • Better integration with libraries such as matplotlib, and better Python REPL integration (auto-completion, etc.); see the first sketch after this list.
  3. R language support
    • Implementation of a SparkR interpreter
  4. Output streaming
    • Stream output to the front end as it is produced.
  5. Pluggable visualization
    • Make visualizations pluggable.
  6. Pivot in a backend-side
    • Run pivot on the backend so that large datasets can be transformed there before being transferred to the front end (see the second sketch after this list).
  7. Folder structure for notebooks
    • Let's organize notebooks into folders
  8. Bring AngularDisplay feature from 'experimental' to 'beta'
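
For item 2, here is a minimal sketch of the kind of matplotlib integration to improve on. It assumes matplotlib is installed in the Python interpreter's environment and uses Zeppelin's %html display system to render a figure inline as a base64-encoded PNG.

    import base64
    import io

    import matplotlib
    matplotlib.use('Agg')            # headless backend; the interpreter has no display
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [2, 4, 8])

    # Serialize the figure to PNG and embed it as an inline image.
    buf = io.BytesIO()
    fig.savefig(buf, format='png')
    img = base64.b64encode(buf.getvalue()).decode('ascii')

    # Output starting with %html is rendered as HTML by the front end.
    print("%html <img src='data:image/png;base64,{}'/>".format(img))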

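For item 6, here is a minimal sketch of backend-side pivoting, assuming pandas is available in the Python interpreter process. The aggregation runs on the backend, and only the small pivoted result is sent to the front end through the %table display system.

    import pandas as pd

    # Stand-in for a large dataset that lives in the interpreter process.
    df = pd.DataFrame({
        'country': ['US', 'US', 'DE', 'DE'],
        'year':    [2015, 2016, 2015, 2016],
        'sales':   [100, 120, 80, 95],
    })

    # Pivot on the backend; the raw rows never leave the interpreter.
    pivoted = df.pivot_table(index='country', columns='year',
                             values='sales', aggfunc='sum')

    # Only the aggregated table is transferred to the browser.
    print("%table " + pivoted.to_csv(sep='\t'))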

0.5.0 - First release in the Apache Incubator

Focusing on basic features and backend integration:
  • Tajo Interpreter
  • Hive Interpreter
  • Flink Interpreter
  • Any other interpreter that can be included by the release.