Thanks for everyone's replies. Here is another attempt at summarizing them. Pardon me if I say too many "obvious" things: I knew nothing about least squares before, so everything was new to me.

## NOT polynomial interpolation

[Polynomial interpolation](http://en.wikipedia.org/wiki/Polynomial_interpolation) is fitting a polynomial of degree `n` given `n+1` data points, e.g. finding a cubic that passes exactly through four given points. As said in the question, this was not what I wanted (I had a lot of points and wanted a small-degree polynomial, which will only *approximately* fit unless we've been lucky), but since some of the answers insisted on talking about it, I should mention them :) [Lagrange polynomial](http://en.wikipedia.org/wiki/Lagrange_polynomial), [Vandermonde matrix](http://en.wikipedia.org/wiki/Vandermonde_matrix), etc.

## What is least-squares?

"Least squares" is one particular definition/criterion/"metric" of how well a polynomial fits. (There are others, but this is the simplest.) Say you are trying to fit a polynomial p(x,y) = a + bx + cy + dx<sup>2</sup> + ey<sup>2</sup> + fxy to some given data points (x<sub>i</sub>, y<sub>i</sub>, Z<sub>i</sub>) (where "Z<sub>i</sub>" was "f(x<sub>i</sub>,y<sub>i</sub>)" in the question). With least squares, the problem is to find the "best" coefficients (a, b, c, d, e, f), such that what is minimized (kept "least") is the "sum of squared residuals", namely

S = ∑<sub>i</sub> (a + bx<sub>i</sub> + cy<sub>i</sub> + dx<sub>i</sub><sup>2</sup> + ey<sub>i</sub><sup>2</sup> + fx<sub>i</sub>y<sub>i</sub> - Z<sub>i</sub>)<sup>2</sup>

## Theory

The important idea is that if you look at S as a function of (a, b, c, d, e, f), then S is [minimized](http://en.wikipedia.org/wiki/Maxima_and_minima) at a point at which its [gradient](http://en.wikipedia.org/wiki/Partial_derivative) [is 0](http://en.wikipedia.org/wiki/Critical_point_%28mathematics%29). This means, for example, that ∂S/∂f = 0, i.e. that

∑<sub>i</sub> 2(a + … + fx<sub>i</sub>y<sub>i</sub> - Z<sub>i</sub>)x<sub>i</sub>y<sub>i</sub> = 0

and similar equations for a, b, c, d, e. Note that these are just linear equations in a…f, so we can solve them with [Gaussian elimination](http://en.wikipedia.org/wiki/Gaussian_elimination) or any of [the usual methods](http://en.wikipedia.org/wiki/System_of_linear_equations#Other_methods).

This is still called "linear least squares", because although the function we wanted was a quadratic polynomial, it is still linear *in the parameters* (a, b, c, d, e, f). Note that the same thing works when we want p(x,y) to be any "linear combination" of *arbitrary* functions f<sub>j</sub>, instead of just a polynomial (= "linear combination of monomials").
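To make the "gradient is 0" step concrete: setting all the partial derivatives to zero amounts to the so-called normal equations (A<sup>T</sup>A)c = A<sup>T</sup>Z, where A holds the values of the monomials at the data points and c = (a, b, c, d, e, f). Here is a minimal sketch (my illustration, not from any library's documentation; the data are made up, generated so that Z ≈ (x+y)<sup>2</sup>) that solves them with an ordinary linear solver, i.e. the Gaussian-elimination route:

```python
import random
import numpy as np

# Made-up sample data (x_i, y_i, Z_i) with Z roughly equal to (x+y)^2.
n = 20
xs = [10 * random.random() for _ in range(n)]
ys = [10 * random.random() for _ in range(n)]
Zs = [(xs[i] + ys[i]) ** 2 + 0.01 * random.random() for i in range(n)]

# One row [1, x, y, x^2, y^2, x*y] per data point: the values f_j(x_i, y_i).
A = np.array([[1.0, x, y, x * x, y * y, x * y] for x, y in zip(xs, ys)])
Z = np.array(Zs)

# "Gradient = 0" is equivalent to the normal equations (A^T A) c = A^T Z,
# solved here with a general-purpose linear solver.
a, b, c, d, e, f = np.linalg.solve(A.T @ A, A.T @ Z)
print(a, b, c, d, e, f)  # should come out close to (0, 0, 0, 1, 1, 2)
```

(In practice a dedicated least-squares routine, like the ones below, is preferred over forming A<sup>T</sup>A explicitly, since that can worsen the conditioning.)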
## Code

For the univariate case (when there is only the variable x, and the f<sub>j</sub> are the monomials x<sup>j</sup>), there is NumPy's [`polyfit`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html):

```python
>>> import numpy
>>> xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> ys = [1.1, 3.9, 11.2, 21.5, 34.8, 51, 70.2, 92.3, 117.4, 145.5]
>>> p = numpy.poly1d(numpy.polyfit(xs, ys, deg=2))
>>> print(p)
      2
1.517 x + 2.483 x + 0.4927
```

For the multivariate case, or linear least squares in general, there is SciPy. [As explained in its documentation](http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html#solving-linear-least-squares-problems-and-pseudo-inverses), it takes a matrix A of the values f<sub>j</sub>(<b>x</b><sub>i</sub>). (The theory is that it finds the [Moore-Penrose pseudoinverse](http://en.wikipedia.org/wiki/Moore-Penrose_pseudoinverse#Applications) of A.) With our above example involving (x<sub>i</sub>, y<sub>i</sub>, Z<sub>i</sub>), fitting a polynomial means the f<sub>j</sub> are the monomials x<sup>()</sup>y<sup>()</sup>. The following finds the best quadratic (or the best polynomial of any other degree, if you change the `degree = 2` line):

```python
from scipy import linalg
import random

n = 20
x = [100 * random.random() for i in range(n)]
y = [100 * random.random() for i in range(n)]
Z = [(x[i] + y[i]) ** 2 + 0.01 * random.random() for i in range(n)]

degree = 2
A = []
for i in range(n):
    A.append([])
    for xd in range(degree + 1):
        for yd in range(degree + 1 - xd):
            A[i].append((x[i] ** xd) * (y[i] ** yd))  # f_j(x_i)

c, _, _, _ = linalg.lstsq(A, Z)

j = 0
for xd in range(0, degree + 1):
    for yd in range(0, degree + 1 - xd):
        print(" + (%.2f)x^%dy^%d" % (c[j], xd, yd), end="")
        j += 1
```

prints

```
 + (0.01)x^0y^0 + (-0.00)x^0y^1 + (1.00)x^0y^2 + (-0.00)x^1y^0 + (2.00)x^1y^1 + (1.00)x^2y^0
```

so it has discovered that the polynomial is x<sup>2</sup>+2xy+y<sup>2</sup>+0.01. [The last (constant) term is sometimes -0.01 and sometimes 0, which is to be expected because of the random noise we added.]

Alternatives to Python+NumPy/SciPy are [R](http://www.r-project.org/) and computer algebra systems: [Sage](http://sagemath.org/index.html), Mathematica, Matlab, Maple. Even Excel might be able to do it. [Numerical Recipes](http://www.nr.com/oldverswitcher.html) discusses methods to implement it ourselves (in C, Fortran).
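The snippet above only prints the coefficients; to actually use the fit you also need to evaluate it at new points. A small helper along these lines (my addition, reusing the `c` and `degree` from the code above and the same ordering of the monomials) would do it:

```python
def eval_fit(c, degree, x, y):
    """Evaluate the fitted polynomial at (x, y), using the same
    monomial ordering as the fitting loop above."""
    value = 0.0
    j = 0
    for xd in range(degree + 1):
        for yd in range(degree + 1 - xd):
            value += c[j] * (x ** xd) * (y ** yd)
            j += 1
    return value

# For the data above, eval_fit(c, degree, 10, 20) should be close
# to (10 + 20) ** 2 = 900.
```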
## Concerns

- It is strongly influenced by **how the points are chosen**. When I had `x=y=range(20)` instead of the random points, it always produced 1.33x<sup>2</sup>+1.33xy+1.33y<sup>2</sup>, which was puzzling... until I realised that because I always had `x[i]=y[i]`, the two polynomials agree on those points: x<sup>2</sup>+2xy+y<sup>2</sup> = 4x<sup>2</sup> = (4/3)(x<sup>2</sup>+xy+y<sup>2</sup>) whenever x=y. So the moral is that it is important to choose the points carefully to get the "right" polynomial. (If you can choose, you should choose [Chebyshev nodes](http://en.wikipedia.org/wiki/Chebyshev_nodes) for polynomial interpolation; I am not sure whether the same is true for least squares as well.)
- **Overfitting**: higher-degree polynomials can always fit the data better. If you change the `degree` to 3 or 4 or 5, it still mostly recognizes the same quadratic polynomial (the coefficients of the higher-degree terms are 0), but for larger degrees it starts fitting higher-degree polynomials. Even with degree 6, though, taking a larger n (say 200 data points instead of 20) still fits the quadratic polynomial. So the moral is to avoid overfitting, for which it might help to take as many data points as possible (see the sketch after this list).
- There might be issues of [numerical stability](http://en.wikipedia.org/wiki/Numerical_stability) that I don't fully understand.
- If you don't need a polynomial, you can obtain better fits with other kinds of functions, e.g. [splines](http://en.wikipedia.org/wiki/Spline_%28mathematics%29) (piecewise polynomials).
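A rough way to see the overfitting point in practice is to fit on half of the data and measure the error on the other half; a degree that is too high usually does worse on points it has not seen. This is only an illustrative sketch with made-up one-dimensional data (the underlying quadratic is roughly the one from the `polyfit` example above):

```python
import random
import numpy

# Made-up noisy samples of 1.5x^2 + 2.5x + 0.5, split into a fitting
# half and a held-out half.
random.seed(0)
xs = [5 * random.random() for _ in range(40)]
ys = [1.5 * x * x + 2.5 * x + 0.5 + random.gauss(0, 2.0) for x in xs]
fit_x, fit_y = xs[:20], ys[:20]
held_x, held_y = xs[20:], ys[20:]

for deg in (1, 2, 5, 9):
    p = numpy.poly1d(numpy.polyfit(fit_x, fit_y, deg))
    mse = sum((p(x) - y) ** 2 for x, y in zip(held_x, held_y)) / len(held_x)
    print("degree %d: held-out mean squared error %.2f" % (deg, mse))
```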
 
