To apply a perspective transformation you first have to know four points in a plane A that will be mapped to four points in a plane B. From those point pairs you can derive the homographic transform: solving for it gives you the 8 coefficients, and the transformation can then take place.

The site http://xenia.media.mit.edu/~cwren/interpolator/ (mirror: [WebArchive](https://web.archive.org/web/20150222120106/xenia.media.mit.edu/~cwren/interpolator/)), as well as many other texts, describes how those coefficients can be determined. To make things easy, here is a direct implementation following the mentioned link:

```python
import numpy

def find_coeffs(pa, pb):
    # Build the 8x8 linear system relating the four point pairs.
    matrix = []
    for p1, p2 in zip(pa, pb):
        matrix.append([p1[0], p1[1], 1, 0, 0, 0, -p2[0]*p1[0], -p2[0]*p1[1]])
        matrix.append([0, 0, 0, p1[0], p1[1], 1, -p2[1]*p1[0], -p2[1]*p1[1]])

    A = numpy.array(matrix, dtype=float)
    B = numpy.array(pb).reshape(8)

    # Solve the normal equations for the 8 perspective coefficients.
    return numpy.linalg.solve(A.T @ A, A.T @ B)
```

where `pb` is the four vertices in the current plane, and `pa` contains four vertices in the resulting plane.

So, suppose we transform an image as in:

```python
import sys
from PIL import Image

img = Image.open(sys.argv[1])
width, height = img.size
m = -0.5
xshift = abs(m) * width
new_width = width + int(round(xshift))
# Shear the image horizontally by the factor m.
img = img.transform((new_width, height), Image.AFFINE,
        (1, m, -xshift if m > 0 else 0, 0, 1, 0), Image.BICUBIC)
img.save(sys.argv[2])
```

Here is a sample input and output with the code above:

![sample input](https://i.stack.imgur.com/dHGcB.png) ![sheared output](https://i.stack.imgur.com/EOwht.png)

We can continue from the last code and perform a perspective transformation to revert the shear:

```python
coeffs = find_coeffs(
        [(0, 0), (256, 0), (256, 256), (0, 256)],
        [(0, 0), (256, 0), (new_width, height), (xshift, height)])

img.transform((width, height), Image.PERSPECTIVE, coeffs,
        Image.BICUBIC).save(sys.argv[3])
```

Resulting in:

![unsheared result](https://i.stack.imgur.com/wY6iQ.png)

You can also have some fun with the destination points:

![warped variation](https://i.stack.imgur.com/GicNm.png) ![warped variation](https://i.stack.imgur.com/tYwvt.png)
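As an illustration of that last point, here is a minimal sketch of one way to play with the destination points, reusing `find_coeffs` from above. The corner coordinates, the 256×256 size, and the `warped.png` file name are arbitrary choices for this example, not part of the original answer:

```python
import sys
from PIL import Image

# Sketch only: warp a 256x256 image onto an arbitrary quadrilateral.
# The destination corners below are made-up values; pick your own.
src = Image.open(sys.argv[1])
coeffs = find_coeffs(
        [(30, 10), (220, 40), (256, 256), (0, 230)],   # destination: corners of a skewed quadrilateral
        [(0, 0), (256, 0), (256, 256), (0, 256)])      # source: corners of the full image
src.transform((256, 256), Image.PERSPECTIVE, coeffs,
        Image.BICUBIC).save("warped.png")
```

Output pixels that end up sampling outside the source image are filled with black by default, so the warped image appears inside the chosen quadrilateral on a black background.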