Benchmark FAQ

It is essential to read this FAQ in order to understand the scope of this benchmark, what the performance results really reflect, and the circumstances in which these results are relevant and in which they are not.

Benchmark Scope and Limitations

Every benchmark has limitations. It is important to understand the scope and limitations of this benchmark before using the performance results that are published on this website.

What is the Java Persistence API (JPA)?

The Java Persistence API (JPA) is a standard API for working with RDBMS (Relational Database Management Systems) in Java, using ORM (Object Relational Mapping). It is easier to write a fair and unbiased benchmark using a standard API, because exactly the same code runs against all the tested products (as opposed, for example, to the popular PolePosition benchmark, which is often criticized for using different code, with different levels of efficiency, for different tested DBMS products).
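
For illustration, here is a minimal JPA sketch of storing an entity object. The Person class and the "benchmark-pu" persistence unit name are hypothetical examples, not the benchmark's actual code; they only show that the Java code is identical for every tested product, while the JPA provider and DBMS are selected in the persistence.xml configuration:

    import javax.persistence.*;

    @Entity
    public class Person {
        @Id @GeneratedValue private long id;
        private String name;

        public Person() {}                              // required by JPA
        public Person(String name) { this.name = name; }

        public long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    class StoreExample {
        public static void main(String[] args) {
            // The persistence unit name selects the JPA provider and DBMS
            // in persistence.xml; the test code itself never changes.
            EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("benchmark-pu");
            EntityManager em = emf.createEntityManager();
            em.getTransaction().begin();
            em.persist(new Person("Alice"));
            em.getTransaction().commit();
            em.close();
            emf.close();
        }
    }

Switching the tested product is then only a configuration change, which keeps the comparison fair.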

Most of the results of this benchmark are also relevant when using the tested DBMS products without JPA and ORM (e.g. using plain JDBC), because mainly simple database operations have been tested, for which the ORM layer overhead should be minimal (at least when using a good ORM implementation with good support for the specific DBMS in use).

Does this benchmark test a real life application?

This benchmark tests realistic and common real-life database operations, but not as a full application (as the TPC benchmarks do) but rather as separate operations, as done, for example, by the PolePosition benchmark.

The main benefit of separating the tests into operations is the ability to measure the performance of different operations separately and provide more focused result information, rather than just one undivided final performance score (tpsE) and cost per performance (Price/tpsE) as done by the TPC benchmarks.

Unlike the PolePosition benchmark, this benchmark does test multithreading and concurrent database operations.

What is the main limitation of this benchmark?

DBMS scalability has not been tested in this benchmark. This is not a limitation of the benchmark programs but rather a limitation of our test runs. We have tested more than 30 combinations of DBMS/JPA implementations in 58 tests, and every test has been run twice. Databases in our test runs contained 100,000 entity objects. Using much larger databases with so many tested products and test runs would consume too much time.

Good DBMS products should work well with much larger databases. We know that ObjectDB does. To check the scalability of specific DBMS products you can run the benchmark yourself with larger databases (the database size is a configurable parameter). You can also run the tests on just the subset of product combinations that is relevant to you.

A notable side effect of this database size is that in all the tests the database was smaller than the system RAM. In the past, this alone could have caused the entire benchmark to be considered unusable, since the main bottleneck of database activity used to be I/O operations, and when the database is smaller than RAM the operating system can cache the entire database files in memory.

But things have changed. Today RAM is relatively cheap, and in the 64-bit operating system era it is practically limited only by budget. In most applications where performance matters, it is possible to use enough RAM to cache at least the most used data, if not the entire database. There are also other options today, such as fast solid state drives. So testing databases that are smaller than RAM is very realistic today. For more information on this issue, search Google for "disk is the new tape", "ram is the new disk" and "memory is the new disk".

If your case is different, however, and the database is very large while RAM is strictly limited, then much smaller performance gaps between different products are expected, because I/O operations and disk speed become the bottleneck.

Are ACID Transactions used?

JPA uses ACID transactions automatically when supported, but only partial ACID support is provided by some of the tested DBMS. To avoid distorting the results because of different levels of durability support (the D in ACID) among the tested DBMS, synchronization to the disk on every commit has been disabled when possible (in PostgreSQL and in MySQL InnoDB). In most of the other tested DBMS it is either disabled by default or not supported.

Durability ensures that updates are never lost after a transaction commits. 100% durability is impossible, because the disk can always be damaged just after a transaction commit (e.g. by hardware failure, a virus or fire).

Synchronization to the disk on every commit is relatively inefficient. There are more efficient ways to achieve a reasonable level of durability, such as using power backup (UPS), replication, or even writing the database journal to a separate remote network disk. If, however, synchronization to the disk is enforced on every commit, expect smaller performance gaps in WRITE tests with small transactions, because the disk synchronization becomes the bottleneck.
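
As a rough illustration of how such synchronization can be relaxed, here is a JDBC sketch for the two DBMS mentioned above. The connection URLs and credentials are placeholders, and the benchmark may apply equivalent settings in the server configuration files rather than per session; both settings are standard PostgreSQL and MySQL options, but they trade durability for speed and so belong in test environments only:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RelaxDurability {
        public static void main(String[] args) throws Exception {
            // PostgreSQL: commits return before the WAL is flushed to disk.
            Connection pg = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password"); // placeholders
            Statement ps = pg.createStatement();
            ps.execute("SET synchronous_commit TO OFF");
            ps.close();
            pg.close();

            // MySQL/InnoDB: flush the log about once per second instead of
            // at every commit (changing this global requires privileges).
            Connection my = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "password"); // placeholders
            Statement ms = my.createStatement();
            ms.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 2");
            ms.close();
            my.close();
        }
    }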

What sort of database operations are tested?

All the basic CRUD (Create, Read, Update, Delete) database operations are tested. See the Test Description page for more details.

Notice that the focus of this benchmark is on simple operations that are part of every real-life application.
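
For reference, here is a minimal sketch of the four CRUD operations in JPA, reusing the hypothetical Person entity from the sketch above (assuming its getId() and setName() accessors):

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;

    public class CrudSketch {
        static void runCrud(EntityManagerFactory emf) {
            EntityManager em = emf.createEntityManager();

            // Create: persist a new entity object
            em.getTransaction().begin();
            Person p = new Person("Alice");
            em.persist(p);
            em.getTransaction().commit();

            // Read: retrieve the object by its primary key
            Person found = em.find(Person.class, p.getId());

            // Update: modify a managed entity inside a transaction
            em.getTransaction().begin();
            found.setName("Bob");
            em.getTransaction().commit();

            // Delete: remove the entity from the database
            em.getTransaction().begin();
            em.remove(found);
            em.getTransaction().commit();

            em.close();
        }
    }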

How to Use These Benchmark Results

The Running and Results page explains how tests have been run and how results have been calculated.

Can I reproduce the results of this benchmark?

Yes, you can, and it is even very easy.

The benchmark code is open source and it is available under the GPL license.

You can run the benchmark on any computer with Java 6 or above. Just download it and follow the README.

Are the benchmark results reliable and objective?

We have made our best efforts to provide results that are as accurate as possible, as explained on the Running and Results page. Errors, however, are always possible, so if you find that we have made a mistake, please let us know so that we can fix it.

This benchmark is published by ObjectDB Software Ltd. A benchmark that is published by a vendor is never considered objective and always raises legitimate questions - after all, the publisher has an interest. Moreover, since the performance results of ObjectDB in this benchmark seem almost too good to be true, it is certainly fine to be suspicious.

On the other hand, please remember that other vendors, which provide competing products, also have their own interests. Therefore, if some people dismiss the results of this benchmark as unusable - remember that they might be serving their own interests. Notice also that in this benchmark the same code is used for all the products and is available as open source, so cheating and providing false results is not really an option.

Eventually, if you are interested in high performance and ObjectDB is an option, you will have to test it yourself.

I am only interested in RDBMS/ORM - is this benchmark useful for me?

Of course. We already had the data, so we made it available on this website, and you can see a head-to-head comparison of any two DBMS/JPA combinations, unrelated to ObjectDB.

Notice that if you are interested only in a combination of JPA ORM and RDBMS products (and not in ObjectDB) then for your needs this benchmark is 100% objective, since we don't have a preference for any particular ORM or RDBMS product.

How can I use the results of this benchmark?

Because every application is different, it is highly recommended that you run your own tests against your real application, or against a test that simulates the database activity of your specific application more closely.

The results of this benchmark can help you in the initial selection of candidate DBMS/JPA combinations, out of the dozens of possible combinations. Besides performance, consider the stability of the various RDBMS/ORM combinations by checking how many tests they have failed and by looking at the exception stack traces. You should still do your own checks, of course, because every project has its own unique needs.

Finally, remember that performance is not everything, and you also have to take other factors into account.

ObjectDB Performance Results

The performance gap between the results of ObjectDB and the results of the other tested DBMS/ORM combinations in this benchmark requires some explanation.

The performance gap is really large - is it possible?

Large performance gaps are possible and even common in software. For example, consider Oracle's TimesTen in-memory database. As its name indicates, it is expected to be 10 times faster than an ordinary RDBMS that uses the disk heavily. This is possible because a different technology (memory rather than disk) is used.

ObjectDB also has built-in in-memory database capabilities (like other DBMS products today), but it takes performance one step further by applying additional performance improvements. One obvious improvement is the elimination of intermediary layers.

Let's take the simple operation of storing an object with 20 fields. When using a JPA ORM provider, the ORM layer has to extract the field values, wrap primitive values with wrapper objects, and send these values with an SQL query string to the JDBC driver. The JDBC driver has to process the SQL query and send the processed request to the RDBMS. Finally, the RDBMS itself has to store the new values in the database. By eliminating these intermediary layers, ObjectDB gains a significant performance improvement.

Notice that the big overhead in the above example is not necessarily the ORM layer, because a Java application that uses an object model might have to perform similar operations even without ORM, in order to convert between objects and JDBC/SQL queries.
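
To make the layering concrete, here is a hypothetical sketch of the same store operation through both stacks, again using the illustrative Person entity (with its getName() accessor) rather than the benchmark's actual code:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.persistence.EntityManager;

    public class LayersSketch {
        // Path 1: JPA/ORM on top of an RDBMS. One call, but behind it the
        // ORM extracts the field values, boxes primitives, builds an SQL
        // string and hands it to the JDBC driver, which processes it and
        // forwards the request to the RDBMS.
        static void storeViaOrm(EntityManager em, Person p) {
            em.getTransaction().begin();
            em.persist(p);
            em.getTransaction().commit();
        }

        // Path 2: roughly the work the intermediary layers perform,
        // written by hand with plain JDBC (auto-commit assumed disabled).
        static void storeViaJdbc(Connection con, Person p) throws Exception {
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO Person (name) VALUES (?)");
            ps.setString(1, p.getName()); // field value extracted from the object
            ps.executeUpdate();           // processed by the driver and the RDBMS
            ps.close();
            con.commit();
        }

        // With ObjectDB the call in Path 1 stores the object directly:
        // there is no SQL string to build and parse, and no separate
        // driver layer to cross.
    }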

Is ObjectDB always faster than RDBMS?

Of course not. Every product, and ObjectDB is no different, has its weaknesses, so obviously there are cases in which ObjectDB is slower. When such a case is found, we usually try to improve ObjectDB.

If you are interested in ObjectDB your expectations should be as follows:

  • A performance gap similar to the one demonstrated in these benchmark results is expected if your application is similar to the benchmark tests and runs under similar conditions.
  • Expect a smaller performance gap if your application uses the hard drive heavily.
  • Expect a larger performance gap if your application's object model is more complex.