<p><strong>Why don't applications typically use whole numbers to internally represent currency values?</strong></p>

<ol>
<li><p>It does not make for simple coding. $1.10 translates to 110¢. Fine, but what about when you need to calculate tax (e.g., $1.10 * 4.225%, Missouri's sales-tax rate, which yields $0.0464750)? To keep all money in whole numbers you would also have to convert the tax rate to a whole number (4225), which in turn requires scaling 110¢ up to 11000000. The math then becomes 11000000 * 4225 / 100000 = 464750. The problem is that both values (11000000 and 464750) now represent fractions of a cent, all for the sake of storing money as whole numbers.</p></li>
<li><p>It is therefore easier to think and code in terms of the native currency. In the United States, this means dollars, with cents as a decimal fraction (e.g., $1.10); coding in terms of 110¢ is less natural. Base-10 decimal types (such as Java's <code>BigDecimal</code> and .NET's <code>Decimal</code>) are usually precise enough for currency values, unlike base-2 floating-point types such as <code>Float</code> and <code>Double</code>.</p></li>
</ol>

<p><strong>Why isn't this approach a best practice among applications that don't need to represent fractions of cents (or the equivalent in other currency types)?</strong></p>

<p>Point 1 above shows that it is hard to avoid representing fractions of cents, at least when it comes to calculating sales tax, something common in business applications.</p>
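To make the arithmetic in both points concrete, here is a small sketch in Java. The class name and the chosen scale factor are illustrative; the <code>BigDecimal</code> calls are standard JDK API.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical demo comparing the two approaches described above.
public class CurrencyDemo {
    public static void main(String[] args) {
        // --- Point 1: keeping everything in whole numbers ---
        // $1.10 = 110¢, scaled by 100,000 so that the 4.225% rate
        // (4225/100000) can also be expressed as a whole number.
        long scaledPrice = 110L * 100_000;              // 11,000,000
        long scaledTax = scaledPrice * 4_225 / 100_000; // 464,750 (units of 1/100,000 cent)
        System.out.println(scaledTax);                  // prints 464750

        // --- Point 2: base-10 decimals in the native currency ---
        BigDecimal price = new BigDecimal("1.10");
        BigDecimal rate = new BigDecimal("0.04225");
        BigDecimal tax = price.multiply(rate);          // exact: no binary rounding error
        System.out.println(tax);                        // prints 0.0464750

        // Round to whole cents only at the final step.
        System.out.println(tax.setScale(2, RoundingMode.HALF_UP)); // prints 0.05
    }
}
```

Either way, the intermediate tax amount carries fractions of a cent; the decimal version simply keeps the values in the units people actually think in.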