Low latency FIX engine in Java

Overview

Chronicle FIX is our Low Latency FIX engine and database for Java.

What makes it different is that it:
  • is designed for ultra low GC* in Java.
  • supports Strings and date-times in a way which minimises garbage and overhead.
  • is customisable to include only the fields you expect.
  • uses optimisations normally found in binary parsers and generators, such as reading/writing 4 or 8 bytes at a time, to improve efficiency.
  • is built on low latency persistence to minimise the latency of logging.
  • is optimised for low latency network cards such as Solarflare.
* Ultra low GC means it can produce less than a byte of garbage per message on average.
If you keep your total garbage rate below 1 GB per hour, a 24 GB Eden space can take all day to fill up, so you don't get any minor GCs.  Produce less than 200 MB/hour and you can run for a week without a GC.
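As a rough sanity check on that arithmetic (the figures below are illustrative, not measured): at 200 MB/hour, a 24 GB Eden takes about five days, roughly a working week, to fill.

    // Back-of-the-envelope: how long a 24 GB Eden space lasts at a given garbage rate.
    // Illustrative only; real allocation rates depend on message mix and load.
    public class EdenFillTime {
        public static void main(String[] args) {
            double edenGB = 24.0;
            for (double gbPerHour : new double[]{1.0, 0.2}) { // 1 GB/hour and 200 MB/hour
                double hours = edenGB / gbPerHour;
                System.out.printf("At %.1f GB/hour, Eden fills in %.0f hours (~%.1f days)%n",
                        gbPerHour, hours, hours / 24);
            }
        }
    }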

But isn't Java slow?

Java can be slower than C++, but well-written Java can be faster than a C++ application which is not written so well. In other words, just because something is written in C++ doesn't guarantee it will be faster.

What is being tested?

The "Parser test" times how long it takes to parse a 214 byte New Order Single FIX message in native memory eg. after reading from a SocketChannel and set all the values of the fields into an object.  In this test, the textual fields are set with Strings as this is the more natural way to deal with text data in Java.  We have alternatives which are faster such as support for 8 bit character strings.

The "Generator test" times how long it takes to generate the 214 byte New Order Single FIX message  from data which contain Strings, and timestamps and write it to native memory. e.g. ready to write to a Socket Channel.

Note: String and timestamp fields are the most expensive. The message has six Strings and two timestamps.
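
For reference, a New Order Single (35=D) is a flat tag=value message along these lines; this is an illustrative example rather than the exact 214-byte message used in the benchmark, with '|' standing in for the SOH (0x01) delimiter:

    8=FIX.4.2|9=196|35=D|34=177|49=SENDER|52=20160314-09:30:00.123|56=TARGET|1=ACCT01|11=ClOrdID0001|21=1|38=1000|40=2|44=125.50|54=1|55=VOD.L|59=0|60=20160314-09:30:00.122|10=062|

The String fields here include tags such as 49, 56, 1, 11 and 55, and the two timestamps are SendingTime (52) and TransactTime (60).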

JMH, configured to use SampleTime mode, was run for 10 minutes for each test.
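
For context, a JMH harness in SampleTime mode looks roughly like the sketch below; ParserBenchmark and parseNewOrderSingle() are hypothetical stand-ins, not the actual benchmark code:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;
    import org.openjdk.jmh.runner.options.TimeValue;

    @State(Scope.Thread)
    @BenchmarkMode(Mode.SampleTime)        // records latencies of individual calls, not just throughput
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public class ParserBenchmark {

        @Benchmark
        public Object parse() {
            // Hypothetical stand-in for parsing the 214-byte message from native memory into an object.
            return parseNewOrderSingle();
        }

        Object parseNewOrderSingle() {
            return new Object(); // placeholder
        }

        public static void main(String[] args) throws Exception {
            Options opt = new OptionsBuilder()
                    .include(ParserBenchmark.class.getSimpleName())
                    .measurementTime(TimeValue.minutes(10))   // run the measurement for 10 minutes
                    .forks(1)
                    .build();
            new Runner(opt).run();
        }
    }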



This graph shows the latency of parsing and generating a moderately sized FIX message.  In both parsing and generating, the latency was less than a micro-second, more than 99.9% of the time.

But what about the higher percentiles?  These don't look as good.  This is because the machine I am using has some noise from the OS, such as interrupts, which I have minimised but can't turn off.


The higher delays are caused by the OS. Around once a milli-second there is an interrupt lasting about 2 micro-seconds, and there are even rarer delays of 5 and 7-8 micro-seconds.  On a better tuned server I would still expect there to be interrupts, but they would occur less often.
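
A simple way to see this kind of OS jitter is to spin on the clock and report any gap between consecutive readings above a threshold. This is a generic sketch, not the tool used for the measurements above:

    // Spin on System.nanoTime() and report pauses longer than a threshold.
    // On an untuned machine, gaps of a couple of micro-seconds typically show up
    // around once a milli-second, with rarer, longer pauses.
    public class JitterSampler {
        public static void main(String[] args) {
            long thresholdNs = 1_000;                       // report gaps over 1 micro-second
            long end = System.nanoTime() + 10_000_000_000L; // run for 10 seconds
            long last = System.nanoTime();
            while (last < end) {
                long now = System.nanoTime();
                if (now - last > thresholdNs)
                    System.out.printf("pause of %,d ns%n", now - last);
                last = now;
            }
        }
    }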

What is next?

The next step is to performance test the integration with Chronicle Journal to see the impact of persistence.  Journal is a specialised persister which is similar to Chronicle Queue v4 but has been tuned for specific use cases. In this case, we need Journal not only to persist at around 150 nano-seconds per message, but to have higher consistency than Queue.  While Queue performs very well writing to SSD, around 1 in 1000 to 1 in 100 writes will see a delay which reflects the disk subsystem you have chosen, i.e. it directly impacts the 99.9% latency.  What we can do with Journal is buffer this delay to significantly reduce the impact.

What is a FIX Database?

MongoDB is a database optimised for JSON messages.  Chronicle FIX is a database optimised for FIX messages. It stores data in FIX and supports queries on FIX fields, such as: give me all the messages for a client order ID, give me all the messages sent at a specific time, or give me the messages most delayed between transmission time and the time we received them.
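
As an illustration only, queries like these might be expressed along the following lines; the interface and method names here are hypothetical, not the actual Chronicle FIX API:

    // Hypothetical query interface, for illustration only.
    import java.util.List;

    interface FixStore {
        // All messages carrying a given client order ID (tag 11).
        List<String> messagesForClientOrderId(String clOrdId);

        // All messages whose SendingTime (tag 52) falls within the given window.
        List<String> messagesSentBetween(long fromEpochNanos, long toEpochNanos);

        // The messages with the largest gap between TransactTime (tag 60) and receive time.
        List<String> mostDelayedMessages(int topN);
    }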

Is Chronicle-FIX the fastest FIX engine for Java code?

We've seen a number of benchmark stats quoted for various FIX engines.  While benchmark numbers give you a general insight into the order of magnitude you are dealing with, they almost certainly don't give you an exact idea of how fast your code will run.

It's easy for anyone to claim they have the fastest FIX engine, with some benchmark figures to back it up, but very hard to actually compare like for like.  Benchmarks will always be optimised to suit the software against which they are run. So what exactly is a fair test for all engines? Even if you found a fair test everyone agreed on, how much would you have to manipulate and tune the code to get the benchmark result? Is that something users would naturally do when writing their code?

So the question "is Chronicle-FIX the fastest FIX engine?" is somewhat irrelevant.  What we certainly know is that we are in the right ballpark. Most importantly, because Chronicle-FIX is licensed with consulting to ensure it is optimised for your use case, we will work with you to make sure your code can achieve the sort of results we published in the benchmark.

How to use Chronicle FIX?

The source for Chronicle FIX is on GitHub, but only available to those with a license.
The thinking is that if you need a very fast FIX engine (measuring your times in sub-microseconds) we can help integrate it into your software in the most optimal way for you.  It may be that you are tied into an existing data model and code base, in which case we have techniques which vastly reduce the cost of transforming data; in fact we don't even hold an intermediate data model. On a greenfield project we can show you how to best build your code around Chronicle-FIX.

Please contact us at sales@chronicle.software for any further information.

Conclusion

Chronicle FIX is quick.  While QuickFIX struggles to stay under 50 micro-seconds to parse and generate a message, Chronicle FIX is comfortably under two micro-seconds to do both, most of the time.

We will be providing more documentation on how persistence performs and how the database works.

