Double your money again

Overview

A long time ago I wrote an article on using double for money. However, many developers still fear using double for money, even though the solution is fairly simple.

Why not use BigDecimal

@myfear (Markus) asks a good question: doesn't BigDecimal do rounding already (and with more options)?

IMHO, there are two reasons you may want to avoid BigDecimal:

  1. Clarity. This may be a matter of opinion, but I find x * y / z clearer than x.multiply(y).divide(z, 10, BigDecimal.ROUND_HALF_UP).
  2. Performance. BigDecimal is often 100x slower.

Took an average of 6 ns for rounding using cast
Took an average of 17 ns for rounding using Math.round
Took an average of 932 ns for rounding using BigDecimal.setScale

You might like to test the difference on your machine.

The code

RoundingPerformanceMain.java
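
The original benchmark class is linked above. As a rough sketch of that kind of comparison (the class name, loop count and test values below are illustrative assumptions, not the original source), it times each approach over the same array of doubles:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingPerformanceSketch {
    static final int RUNS = 1000000;

    public static void main(String[] args) {
        double[] values = new double[RUNS];
        for (int i = 0; i < RUNS; i++)
            values[i] = i / 1000.0 + 0.0005;

        // rounding using a cast
        long start = System.nanoTime();
        double blackhole = 0;
        for (double v : values)
            blackhole += ((long) (v * 100 + 0.5)) / 100.0;
        System.out.println("cast: " + (System.nanoTime() - start) / RUNS + " ns average");

        // rounding using Math.round
        start = System.nanoTime();
        for (double v : values)
            blackhole += Math.round(v * 100) / 100.0;
        System.out.println("Math.round: " + (System.nanoTime() - start) / RUNS + " ns average");

        // rounding using BigDecimal.setScale
        start = System.nanoTime();
        for (double v : values)
            blackhole += BigDecimal.valueOf(v).setScale(2, RoundingMode.HALF_UP).doubleValue();
        System.out.println("BigDecimal.setScale: " + (System.nanoTime() - start) / RUNS + " ns average");

        // use the accumulated value so the JIT cannot discard the loops
        System.out.println("(ignore) " + blackhole);
    }
}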

When to use BigDecimal


The better question is why would you ever use BigDecimal?

  1. Arbitrary precision. If you want more than 15 decimal places of precision, use BigDecimal.
  2. Precise rounding. If you need full control over rounding options, it will be simpler using BigDecimal (see the sketch after this list).
  3. The project standard is to use BigDecimal. Human factors can be more important than technical arguments.
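
For point 2, the sketch below shows how BigDecimal makes the choice of rounding mode explicit; the value 2.125 is just an example picked to sit exactly half way:

BigDecimal d = new BigDecimal("2.125");
// each mode resolves the half-way case differently
System.out.println(d.setScale(2, BigDecimal.ROUND_HALF_UP));   // 2.13
System.out.println(d.setScale(2, BigDecimal.ROUND_HALF_DOWN)); // 2.12
System.out.println(d.setScale(2, BigDecimal.ROUND_HALF_EVEN)); // 2.12 (banker's rounding)
System.out.println(d.setScale(2, BigDecimal.ROUND_FLOOR));     // 2.12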

The problem with using double for money

double has two types of error. It has representation error, i.e. it cannot represent all possible decimal values exactly; even 0.1 is not exactly that value. It also has rounding error from calculations, i.e. as you perform calculations, the error increases.

double[] ds = {
        0.1,
        0.2,
        -0.3,
        0.1 + 0.2 - 0.3};
for (double d : ds) {
    System.out.println(d + " => " + new BigDecimal(d));
}
prints
0.1 => 0.1000000000000000055511151231257827021181583404541015625
0.2 => 0.200000000000000011102230246251565404236316680908203125
-0.3 => -0.299999999999999988897769753748434595763683319091796875
5.551115123125783E-17 => 5.5511151231257827021181583404541015625E-17

You can see that the representations of 0.1 and 0.2 are slightly higher than those values, and the representation of -0.3 is also slightly higher. When you print them, you get the nicer 0.1 instead of the actual value represented, 0.1000000000000000055511151231257827021181583404541015625.

However, when you compute 0.1 + 0.2 - 0.3, the errors accumulate and you get a value which is slightly higher than 0, rather than 0 exactly.

The important thing to remember is that these errors are not random errors. They are manageable and bounded.
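
For example, the spacing between adjacent double values (the unit in the last place, or ulp) bounds how large the error of a single operation can be, and Math.ulp lets you inspect it directly:

// the spacing of doubles around 0.3 is about 5.55e-17,
// which is the size of the error left over from 0.1 + 0.2 - 0.3 above
System.out.println(Math.ulp(0.3));   // 5.551115123125783E-17
System.out.println(0.1 + 0.2 - 0.3); // 5.551115123125783E-17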

Correcting for rounding error

Like many data types, such as dates, a value has an internal representation and a way of being represented as a String.

This is true for double: you need to control how the value is represented as a String. This can come as a surprise, because Java does a small amount of rounding when displaying a double, so the representation error is not obvious; once you have rounding error from operations as well, it can come as a shock.

A common reaction is to assume there is nothing you can do about it: the error is uncontrollable, unknowable and dangerous, so abandon double and use BigDecimal.

However, the error is bounded by the IEEE 754 standard and accumulates slowly.

Round the result


Just as you need a TimeZone and a Locale for dates, you need to determine the precision of the result before converting it to a String.

To resolve this issue, you need to apply appropriate rounding. With money this is easy, as you know how many decimal places are appropriate, and unless you are dealing with amounts on the order of $70 trillion you won't accumulate a rounding error too large to correct.

// Math.round rounds half up (ties go towards positive infinity); this is not banker's rounding
public static double roundToTwoPlaces(double d) {
    return Math.round(d * 100) / 100.0;
}
// OR, rounding half away from zero:
public static double roundToTwoPlaces(double d) {
    return ((long) (d < 0 ? d * 100 - 0.5 : d * 100 + 0.5)) / 100.0;
}
If you apply this rounding to the result, there is still a small representation error, but it is small enough that Double.toString(d) corrects for it when printing.
double[] ds = {
        0.1,
        0.2,
        -0.3,
        0.1 + 0.2 - 0.3};
for (double d : ds) {
    System.out.println(d + " to two places " + roundToTwoPlaces(d) + " => " + new BigDecimal(roundToTwoPlaces(d)));
}
prints
0.1 to two places 0.1 => 0.1000000000000000055511151231257827021181583404541015625
0.2 to two places 0.2 => 0.200000000000000011102230246251565404236316680908203125
-0.3 to two places -0.3 => -0.299999999999999988897769753748434595763683319091796875
5.551115123125783E-17 to two places 0.0 => 0

Conclusion

If you have a project standard which says you should use BigDecimal or double, that is what you should follow. However, there is not a good technical reason to fear using double for money.

Related Links


Working with Money
  - In favour of BigDecimal: a comment with a good suggestion that a generic Money wrapper be used if performance allows it.

JodaMoney

Comments

  1. Hi Peter,

    thanks for this post! I like the work you do a lot and it's a pleasure to read.
    Stumbling over your rounding topic, it seems as if you are introducing something that is there already.

    What about:
    new BigDecimal(value).setScale(2, BigDecimal.ROUND_HALF_UP);

    Thanks,
    Markus

  2. @Markus, if you are working with double I think you mean

    value = new BigDecimal(value).setScale(2, BigDecimal.ROUND_HALF_UP).doubleValue();

    You might like to do a comparison of the performance of this approach. :D

  3. Peter:
    Thanks, I didn't compare the performance of this and converted the result to a BigDecimal for further display (a final step, if you like). But does it make a difference for the _average_ use case? (not banking and finance related? ;))

    -M

  4. Markus, I have added a section with a benchmark of the difference it makes. BigDecimal was about 150x slower on my PC, but still only about one microsecond. It could make a big difference if called often enough, but if it's called a few hundred times a second (or less), it's unlikely to matter.

  5. Thanks! I hate reinventing the wheel, so it seems obvious to point people to what's there and tell them how to improve afterwards ;)
    Looking forward to your next posts!

  6. Definitely worth pointing out when people come up with new wheels. However, you would be surprised how much scope there is to improve basic libraries. I am working on a library which can store more than one billion objects efficiently (persisted to disk), many orders of magnitude faster than anything I am aware of.

  7. Peter,
    have seen it on google code? Looks interesting. Will follow you and hope to learn a lot :)

    Thanks,
    M

  8. In Scala, you can write x * y / z for BigDecimal values.

  9. @Cay, An improvement. I think Java should consider taking some of the improvements in Scala.

  10. Just a note, most places where I worked with money used 4-5 decimal places. Complex operations sometimes mess up the numbers...

  11. Good post as always, though I disagree with your conclusion: "However, there is not a good technical reason to fear using double for money".

    I guess it is fairly obvious, but in banking and finance, using double is usually not an option unless you are doing something like simulations or forecasts (where the absolute numbers don't matter too much, and it's better to be fast).

    For a start, anything that needs to make decisions based on value is just too easy to get wrong:

    (0.1+0.2-0.3) > 0.0 // == true

    It's possible to "fix" this of course. But it is not intuitive, it's an easy trap to fall into, and really, there are enough other things to worry about :)
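
    (For reference, applying the rounding described in the post before the comparison removes the surprise in this particular case, though the general point stands:)

    System.out.println((0.1 + 0.2 - 0.3) > 0.0);                 // true, the leftover error is positive
    System.out.println(roundToTwoPlaces(0.1 + 0.2 - 0.3) > 0.0); // false once the result is rounded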

  12. The risks related to using double in situations where exact results are required (especially financial applications) have already been mentioned several times.

    However, there is an alternative solution (especially in situations where performance is a priority) which has not been mentioned yet: using integer types (int or long) with an implicit scale.

    For example, if you have amounts that are always limited to 2 decimal places then use int or long to represent "hundredths". It introduces complexity in the sense that you have to remember that the values are pennies (not dollars) and there is no rounding on division (only truncation) but performance is great (better than doubles on many architectures) and the results are always exact.
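
    A minimal sketch of that approach (the names and amounts below are illustrative):

    // amounts held as long "cents", i.e. an implicit scale of 100
    long priceCents = 1099;                  // $10.99
    long quantity = 3;
    long totalCents = priceCents * quantity; // 3297, always exact
    // division truncates, so any required rounding has to be handled explicitly
    long halfCents = totalCents / 2;         // 1648, the odd cent is dropped
    System.out.printf("total: $%d.%02d%n", totalCents / 100, totalCents % 100);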

  13. @Vyadh, I would agree there is enough to worry about but using BigDecimal in Java can add even more headaches than it saves.

    @Andy Brook, I have re-written systems from double to use int and long instead and they haven't always had the improvement you might expect. The FPU in x64 has a dedicated pipeline and greater instruction re-ordering and perhaps greater optimisation at the JVM level. In any case the performance improvement is not obvious.

    You can support rounding options with integer division as well, with a work-around, e.g. by examining the remainder (modulus), as in the sketch below.
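
    A sketch of what that work-around might look like for long values, rounding half away from zero (the behaviour of BigDecimal.ROUND_HALF_UP); the method name is illustrative:

    // divide an amount in cents by n, rounding half away from zero instead of truncating
    static long divideHalfUp(long amountCents, long n) {
        long quotient = amountCents / n;
        long remainder = amountCents % n;
        if (Math.abs(remainder) * 2 >= Math.abs(n))
            quotient += (amountCents ^ n) >= 0 ? 1 : -1; // step in the sign of the true result
        return quotient;
    }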

  14. Hello,

    What do you think about using float vs double for an entire Java trading API? Is it worth it, thinking you can grab some performance by using float instead of double?

    Replies
    1. float can be useful in back testing when processing many billions of data points and keeping them in memory.

      Otherwise I would stick to double in Java since it is much faster, i.e. Java tends to use double anyway and cast the result.

  15. thx for your fast reply

  16. Hi Peter,

    I was searching for some advice along this thought model. I had come up with this

    private static double add(double aPrice,
                              double aSlippage,
                              double aPrecision) {
        long price = (long) (aPrice * aPrecision);
        long slippage = (long) (aSlippage * aPrecision);
        long adjustedPrice = price + slippage;
        return adjustedPrice / aPrecision;
    }

    Thanks for your post.
