<p>You seem spot on with the benefits of using a floating point type. I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimal are causing bottlenecks or slow-downs. In those cases, I will "down cast" to double or float, but only do it internally, and carefully try to manage precision loss by limiting the number of significant digits in the mathematical operation being performed.</p>

<p>In general, if your value is transient (not reused), you're safe to use a floating point type. The real problem with floating point types is the following three scenarios.</p>

<ol>
<li>You are aggregating floating point values (in which case the precision errors compound)</li>
<li>You build values based on the floating point value (for example in a recursive algorithm)</li>
<li>You are doing math that spans a very wide range of significant digits (for example, <code>123456789.1 * .000000000000000987654321</code>)</li>
</ol>

<p><strong>EDIT</strong></p>

<p>According to the <a href="http://msdn.microsoft.com/en-us/library/364x0z75(VS.80).aspx" rel="noreferrer">reference documentation on C# decimals</a>:</p>

<blockquote>
<p>The <strong>decimal</strong> keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations.</p>
</blockquote>

<p>So to clarify my above statement:</p>

<blockquote>
<p>I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimal are causing bottlenecks or slow-downs.</p>
</blockquote>

<p>I have only ever worked in industries where decimals are favorable. If you're working on physics or graphics engines, it's probably much more beneficial to design for a floating point type (float or double).</p>

<p>Decimal is not infinitely precise (it is impossible to represent infinite precision for non-integral values in a primitive data type), but it is far more precise than double:</p>

<ul>
<li>decimal = 28-29 significant digits</li>
<li>double = 15-16 significant digits</li>
<li>float = 7 significant digits</li>
</ul>

<p><strong>EDIT 2</strong></p>

<p>In response to <a href="https://stackoverflow.com/users/1968/konrad-rudolph">Konrad Rudolph</a>'s comment, item #1 (above) is definitely correct. Aggregation of imprecision does indeed compound. See the below code for an example:</p>

<pre><code>private const float THREE_FIFTHS = 3f / 5f;
private const int ONE_MILLION = 1000000;

public static void Main(string[] args)
{
    Console.WriteLine("Three Fifths: {0}", THREE_FIFTHS.ToString("F10"));

    float asSingle = 0f;
    double asDouble = 0d;
    decimal asDecimal = 0M;

    for (int i = 0; i &lt; ONE_MILLION; i++)
    {
        asSingle += THREE_FIFTHS;
        asDouble += THREE_FIFTHS;
        asDecimal += (decimal) THREE_FIFTHS;
    }

    Console.WriteLine("Six Hundred Thousand: {0:F10}", THREE_FIFTHS * ONE_MILLION);
    Console.WriteLine("Single: {0}", asSingle.ToString("F10"));
    Console.WriteLine("Double: {0}", asDouble.ToString("F10"));
    Console.WriteLine("Decimal: {0}", asDecimal.ToString("F10"));
    Console.ReadLine();
}
</code></pre>

<p>This outputs the following:</p>

<pre><code>Three Fifths: 0.6000000000
Six Hundred Thousand: 600000.0000000000
Single: 599093.4000000000
Double: 599999.9999886850
Decimal: 600000.0000000000
</code></pre>

<p>As you can see, even though we are adding from the same source constant, the double result is less precise (although it will probably round correctly), and the float is far less precise, to the point where it has been reduced to only two correct significant digits.</p>
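<p>The same compounding effect shows up even on a small scale. Here is a minimal sketch of my own (not part of the example above) comparing ten repeated additions of 0.1: the literal <code>0.1</code> has no exact binary representation as a double, while <code>0.1m</code> is exact as a decimal.</p>

```csharp
using System;

class DoubleVsDecimal
{
    static void Main()
    {
        // 0.1 cannot be represented exactly in binary floating point,
        // so the double sum drifts away from 1.0; the decimal sum does not.
        double asDouble = 0.0;
        decimal asDecimal = 0.0m;

        for (int i = 0; i < 10; i++)
        {
            asDouble += 0.1;
            asDecimal += 0.1m;
        }

        Console.WriteLine(asDouble == 1.0);    // False
        Console.WriteLine(asDecimal == 1.0m);  // True
    }
}
```

<p>This is why equality comparisons on accumulated floating point values are unsafe: the drift per addition is tiny, but as in the million-iteration example above, it compounds.</p>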