This page contains some general observations about the performance characteristics of the Derby in-memory database feature.
The performance benefit you'll see with the in-memory back end is highly dependent on the load and the underlying disk subsystem.
- For write-intensive loads, the speedup can be one or more orders of magnitude.
- For read-intensive loads, the speedup can be close to zero.
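For reference, the in-memory back end is selected with the `memory` subprotocol in the JDBC connection URL. A minimal example (the database name `myDb` is arbitrary):

```
jdbc:derby:memory:myDb;create=true
```

Dropping the database is done through the URL as well, using the `drop=true` attribute in place of `create=true`.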
If you have a read-only database, it may be better in some cases to keep the database on disk, maximize the page cache size, and then prime the cache by pulling all pages into it.
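Priming can be sketched as a full scan of each table, which pulls the table's pages through the page cache (the table name below is hypothetical, and a scan of an indexed column may not touch all base-table pages):

```sql
SELECT COUNT(*) FROM myTable
```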
The downside of using the in-memory back end in such a scenario is that some of the data will be stored twice: once in the "virtual in-memory file system" and once in the page cache. For the same reason, you should tune the page cache size according to your amount of data and your heap size. Minimizing the page cache (i.e. allowing only 40 pages) to avoid the "data duplication" problem is not a good idea if you want optimal performance.
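The page cache size is controlled by the `derby.storage.pageCacheSize` property, expressed as a number of pages. A sketch of a `derby.properties` entry (the value 4000 is only an illustrative figure; pick one based on your page size, data volume, and heap):

```
derby.storage.pageCacheSize=4000
```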
For more information about the effects of page cache size and page size, see these notes from DERBY-646. That document is primarily a comparison between two implementations of an in-memory back end, but toward the end it contains some relevant experiments.
For more detailed performance numbers in your environment, you can try running the various performance clients found in the source code repository (under trunk/testing/.../perf/clients). The simplest ones are the single record operation clients and the bank_tx load.
We feel that the primary use cases for the current in-memory back end are testing and development. In a future release it may become better suited for storing purely transient data in a production environment as well (given a proper delete mechanism and perhaps a size-limit feature).