    <p>The problem you're having relates to how <a href="https://www.mathworks.com/help/matlab/matlab_prog/floating-point-numbers.html" rel="noreferrer">floating-point numbers</a> are represented on a computer. A more detailed discussion of floating-point representations appears towards the end of my answer (The "Floating-point representation" section). The <strong>TL;DR</strong> version: because computers have finite amounts of memory, numbers can only be represented with finite precision. Thus, the accuracy of floating-point numbers is limited to a certain number of decimal places (about 16 significant digits for <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="noreferrer">double-precision values</a>, the default used in MATLAB).</p> <h2>Actual vs. displayed precision</h2> <p>Now to address the specific example in the question... <strong>while <code>24.0000</code> and <code>24.0000</code> are <em>displayed</em> in the same manner, it turns out that they actually differ by very small decimal amounts in this case. You don't see it because MATLAB <a href="https://www.mathworks.com/help/matlab/ref/format.html" rel="noreferrer">only displays 4 significant digits by default</a>, keeping the overall display neat and tidy.</strong> If you want to see the full precision, you should either issue the <code>format long</code> command or view a <a href="https://www.mathworks.com/help/matlab/ref/num2hex.html" rel="noreferrer">hexadecimal representation</a> of the number:</p> <pre><code>&gt;&gt; pi ans = 3.1416 &gt;&gt; format long &gt;&gt; pi ans = 3.141592653589793 &gt;&gt; num2hex(pi) ans = 400921fb54442d18 </code></pre> <h2>Initialized values vs. computed values</h2> <p>Since there are only a finite number of values that can be represented for a floating-point number, it's possible for a computation to result in a value that falls between two of these representations. In such a case, the result has to be rounded off to one of them. 
This introduces a small <a href="https://en.wikipedia.org/wiki/Machine_epsilon" rel="noreferrer">machine-precision error</a>. This also means that a value initialized directly and the same value arrived at through computation can differ slightly. For example, the value <code>0.1</code> doesn't have an <em>exact</em> floating-point representation (i.e. it gets slightly rounded off), and so you end up with counter-intuitive results like this due to the way round-off errors accumulate:</p> <pre><code>&gt;&gt; a=sum([0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]); % Sum 10 0.1s &gt;&gt; b=1; % Initialize to 1 &gt;&gt; a == b ans = logical 0 % They are unequal! &gt;&gt; num2hex(a) % Let's check their hex representation to confirm ans = 3fefffffffffffff &gt;&gt; num2hex(b) ans = 3ff0000000000000 </code></pre> <h2>How to correctly handle floating-point comparisons</h2> <p>Since floating-point values can differ by very small amounts, any comparison should check that the values are within some range (i.e. tolerance) of one another, rather than test for exact equality. For example:</p> <pre><code>a = 24; b = 24.000001; tolerance = 0.001; if abs(a-b) &lt; tolerance, disp('Equal!'); end </code></pre> <p>will display "Equal!".</p> <p>You could then change your code to something like:</p> <pre><code>points = points((abs(points(:,1)-vertex1(1)) &gt; tolerance) | ... 
(abs(points(:,2)-vertex1(2)) &gt; tolerance),:) </code></pre> <hr> <h1>Floating-point representation</h1> <p>A good overview of floating-point numbers (and specifically the <a href="https://en.wikipedia.org/wiki/IEEE_floating_point" rel="noreferrer">IEEE 754 standard for floating-point arithmetic</a>) is <a href="http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html" rel="noreferrer"><em>What Every Computer Scientist Should Know About Floating-Point Arithmetic</em></a> by David Goldberg.</p> <p>A binary floating-point number is actually represented by three integers: a sign bit <code>s</code>, a significand (or coefficient/fraction) <code>b</code>, and an exponent <code>e</code>. <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="noreferrer">For double-precision floating-point format</a>, each number is represented by 64 bits laid out in memory as follows:</p> <p><a href="https://i.stack.imgur.com/KTTPX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KTTPX.png" alt="Bit layout of a double-precision floating-point number: 1 sign bit, 11 exponent bits, 52 significand bits"></a></p> <p>The real value can then be found with the following formula:</p> <p><a href="https://i.stack.imgur.com/nV0ly.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nV0ly.png" alt="Formula for computing the value of a double-precision floating-point number"></a></p> <p>This format can represent numbers with magnitudes roughly in the range 10^-308 to 10^308. In MATLAB, you can get these limits from <a href="https://www.mathworks.com/help/matlab/ref/realmin.html" rel="noreferrer"><code>realmin</code></a> and <a href="https://www.mathworks.com/help/matlab/ref/realmax.html" rel="noreferrer"><code>realmax</code></a>:</p> <pre><code>&gt;&gt; realmin ans = 2.225073858507201e-308 &gt;&gt; realmax ans = 1.797693134862316e+308 </code></pre> <p>Since a finite number of bits is used to represent a floating-point number, only finitely many numbers can be represented within the above range. 
Computations will often result in a value that doesn't exactly match one of these finite representations, so the values must be rounded off. These <a href="https://en.wikipedia.org/wiki/Machine_epsilon" rel="noreferrer">machine-precision errors</a> make themselves evident in different ways, as discussed in the above examples.</p> <p>In order to better understand these round-off errors, it's useful to look at the relative floating-point accuracy provided by the function <a href="https://www.mathworks.com/help/matlab/ref/eps.html" rel="noreferrer"><code>eps</code></a>, which quantifies the distance from a given number to the next larger representable floating-point number:</p> <pre><code>&gt;&gt; eps(1) ans = 2.220446049250313e-16 &gt;&gt; eps(1000) ans = 1.136868377216160e-13 </code></pre> <p>Notice that the precision is <em>relative</em> to the size of a given number being represented; larger numbers will have larger distances between floating-point representations, and will thus have fewer digits of precision following the decimal point. This can be an important consideration with some calculations. Consider the following example:</p> <pre><code>&gt;&gt; format long % Display full precision &gt;&gt; x = rand(1, 10); % Get 10 random values between 0 and 1 &gt;&gt; a = mean(x) % Take the mean a = 0.587307428244141 &gt;&gt; b = mean(x+10000)-10000 % Take the mean at a different scale, then shift back b = 0.587307428244458 </code></pre> <p>Note that when we shift the values of <code>x</code> from the range <code>[0 1]</code> to the range <code>[10000 10001]</code>, compute a mean, then subtract the mean offset for comparison, we get a value that differs in the last 3 significant digits. This illustrates how an offset or scaling of data can change the accuracy of calculations performed on it, which is something that has to be accounted for with certain problems.</p>
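The behavior above is a property of IEEE 754 double precision, not of MATLAB itself, so the same effects appear in any language that uses 64-bit doubles. As a cross-language illustration (not part of the original answer), here is a short Python sketch: <code>float.hex</code> plays the role of <code>num2hex</code>, <code>math.ulp</code> plays the role of <code>eps</code>, and <code>math.isclose</code> is a standard-library tolerance comparison.

```python
import math

# Summing ten 0.1s does not give exactly 1.0, because 0.1 has no exact
# binary floating-point representation (same as the MATLAB example).
a = sum([0.1] * 10)
b = 1.0
print(a == b)             # False: they are unequal
print(a.hex())            # 0x1.fffffffffffffp-1 (just below 1.0)
print(b.hex())            # 0x1.0000000000000p+0

# Correct comparison: check the values are within a tolerance of each other.
tolerance = 1e-9
print(abs(a - b) < tolerance)            # True
print(math.isclose(a, b, rel_tol=1e-9))  # True, using the stdlib helper

# math.ulp(x) is the spacing between x and the next larger representable
# double, i.e. the equivalent of MATLAB's eps(x). Note it grows with magnitude.
print(math.ulp(1.0))      # 2.220446049250313e-16
print(math.ulp(1000.0))   # 1.1368683772161603e-13
```

Note that <code>math.isclose</code> uses a relative tolerance by default (<code>rel_tol=1e-09</code>), which, like <code>eps</code>, scales with the magnitude of the operands; an absolute tolerance such as the <code>abs(a-b) &lt; tolerance</code> check is appropriate when the expected magnitudes are known.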