
Is floating-point math consistent in C#? Can it be?
<p>No, this is not another <em>"Why is (1/3.0)*3 != 1"</em> question.</p> <p>I've been reading about floating-points a lot lately; specifically, how the <strong>same calculation might give different results</strong> on different architectures or optimization settings.</p> <p>This is a problem for video games which store replays, or are <a href="https://gamedev.stackexchange.com/questions/15192/rts-game-protocol">peer-to-peer networked</a> (as opposed to server-client), which rely on all clients generating exactly the same results every time they run the program - a small discrepancy in one floating-point calculation can lead to a drastically different game-state on different machines (or even <a href="http://www.parashift.com/c%2B%2B-faq-lite/newbie.html#faq-29.18" rel="noreferrer">on the same machine!</a>)</p> <p>This happens even amongst processors that "follow" <a href="http://en.wikipedia.org/wiki/IEEE_754-2008" rel="noreferrer">IEEE-754</a>, primarily because some processors (namely x86) use <a href="http://en.wikipedia.org/wiki/Extended_precision" rel="noreferrer">double extended precision</a>. That is, they use 80-bit registers to do all the calculations, then truncate to 64 or 32 bits, leading to different rounding results than machines which use 64 or 32 bits for the calculations.</p> <p>I've seen several solutions to this problem online, but all for C++, not C#:</p> <ul> <li>Disable double extended-precision mode (so that all <code>double</code> calculations use IEEE-754 64-bits) using <a href="http://msdn.microsoft.com/en-us/library/c9676k6h.aspx" rel="noreferrer"><code>_controlfp_s</code></a> (Windows), <code>_FPU_SETCW</code> (Linux?), or <a href="http://www.gsp.com/cgi-bin/man.cgi?section=3&amp;topic=fpsetprec" rel="noreferrer"><code>fpsetprec</code></a> (BSD).</li> <li>Always run the same compiler with the same optimization settings, and require all users to have the same CPU architecture (no cross-platform play). 
Because my "compiler" is actually the JIT, which <strong>may optimize differently every time the program is run</strong>, I don't think this is possible.</li> <li>Use fixed-point arithmetic, and avoid <code>float</code> and <code>double</code> altogether. <code>decimal</code> would work for this purpose, but would be much slower, and none of the <code>System.Math</code> library functions support it.</li> </ul> <hr> <p>So, <strong>is this even a problem in C#?</strong> What if I only intend to support Windows (not Mono)?</p> <p>If it is, <strong>is there any way to force my program to run at normal double-precision?</strong></p> <p>If not, <strong>are there any libraries that would help</strong> keep floating-point calculations consistent?</p>