# Determine extrinsic camera with opencv to opengl with world space object
I'm using OpenCV and openFrameworks (i.e. OpenGL) to calculate a camera (world transform and projection matrices) from an image (and later, several images for triangulation).

For the purposes of OpenCV, the "floor plan" becomes the object (i.e. the chessboard), with (0,0,0) the centre of the world. The world/floor positions are known, so I need to get the projection information (distortion coefficients, FOV, etc.) and the extrinsic coordinates of the camera.

![2D input coordinates](https://i.stack.imgur.com/tXdQs.png)

I have mapped the view-positions of these floor-plan points onto my 2D image in normalised view-space ([0,0] is top-left, [1,1] is bottom-right).

The object (floor plan/world points) is on the XZ plane, -Y up, so I convert to the XY plane (I'm not sure here whether Z-up is negative or positive...) for OpenCV, as it needs the points to be planar:

```cpp
ofMatrix4x4 gWorldToCalibration(
    1, 0, 0, 0,
    0, 0, 1, 0,
    0, 1, 0, 0,
    0, 0, 0, 1
);
```

I pass (1,1) as the imageSize to `calibrateCamera`. The flags are `CV_CALIB_FIX_ASPECT_RATIO|CV_CALIB_FIX_K4|CV_CALIB_FIX_K5`. `calibrateCamera` runs successfully and gives me a low error (usually around `0.003`).

Using `calibrationMatrixValues` I get a sensible FOV, usually around 50 degrees, so I'm pretty sure the intrinsic properties are correct.
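For reference, this is roughly how I'm calling it. A minimal sketch, where `worldPointsXY` and `viewSpacePoints` are placeholders for my actual planar object points (z = 0) and their normalised view-space matches:

```cpp
#include <opencv2/opencv.hpp>

// worldPointsXY / viewSpacePoints are placeholders for my real data:
// one view of planar object points and their normalised image points
std::vector<std::vector<cv::Point3f>> objectPoints(1, worldPointsXY);
std::vector<std::vector<cv::Point2f>> imagePoints(1, viewSpacePoints);

// start from identity so CV_CALIB_FIX_ASPECT_RATIO fixes fx/fy at 1:1
cv::Mat cameraMatrix = cv::Mat::eye(3, 3, CV_64FC1);
cv::Mat distCoeffs;
std::vector<cv::Mat> ObjectRotations, ObjectTranslations;
cv::Size imageSize(1, 1); // normalised view-space, hence 1x1

double rms = cv::calibrateCamera(
    objectPoints, imagePoints, imageSize,
    cameraMatrix, distCoeffs,
    ObjectRotations, ObjectTranslations,
    CV_CALIB_FIX_ASPECT_RATIO | CV_CALIB_FIX_K4 | CV_CALIB_FIX_K5);
// rms comes back around 0.003 for me

// recover FOV etc. from the intrinsics (physical aperture unknown, so 0,0)
double fovx, fovy, focalLength, aspectRatio;
cv::Point2d principalPoint;
cv::calibrationMatrixValues(cameraMatrix, imageSize, 0, 0,
                            fovx, fovy, focalLength,
                            principalPoint, aspectRatio);
// fovy is around 50 degrees
```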
Now to calculate the extrinsic world-space transform of the camera... I don't believe I need to use `solvePnP`, as I only have one object (although I tried all this with it before and came back with the same results):

```cpp
// rot and trans output...
cv::Mat& RotationVector = ObjectRotations[0];
cv::Mat& TranslationVector = ObjectTranslations[0];

// convert rotation to matrix
cv::Mat expandedRotationVector;
cv::Rodrigues(RotationVector, expandedRotationVector);

// merge translation and rotation into a model-view matrix
cv::Mat Rt = cv::Mat::zeros(4, 4, CV_64FC1);
for (int y = 0; y < 3; y++)
    for (int x = 0; x < 3; x++)
        Rt.at<double>(y, x) = expandedRotationVector.at<double>(y, x);
Rt.at<double>(0, 3) = TranslationVector.at<double>(0, 0);
Rt.at<double>(1, 3) = TranslationVector.at<double>(1, 0);
Rt.at<double>(2, 3) = TranslationVector.at<double>(2, 0);
Rt.at<double>(3, 3) = 1.0;
```

Now I've got a rotation & translation matrix, but it's column-major (I believe, as the object is totally skewed if I don't transpose, and the code above looks column-major to me):

```cpp
// convert to openframeworks matrix AND transpose at the same time
ofMatrix4x4 ModelView;
for (int r = 0; r < 4; r++)
    for (int c = 0; c < 4; c++)
        ModelView(r, c) = Rt.at<double>(c, r);
```

Then I swap my planes back to my coordinate space (Y up) using the inverse of the matrix from before:

```cpp
// swap y & z planes so y is up
ofMatrix4x4 gCalibrationToWorld = gWorldToCalibration.getInverse();
ModelView *= gCalibrationToWorld;
```

I'm not sure whether I NEED to do this, as I didn't negate the planes when I put them INTO the calibration...

```cpp
// invert y and z planes for -/+ differences between opencv and opengl
ofMatrix4x4 InvertHandednessMatrix(
    1,  0,  0, 0,
    0, -1,  0, 0,
    0,  0, -1, 0,
    0,  0,  0, 1
);
ModelView *= InvertHandednessMatrix;
```

And finally, the model-view is object-relative-to-camera, and I want to invert it to be camera-relative-to-object (0,0,0):

```cpp
ModelView = ModelView.getInverse();
```

![output 3D view](https://i.stack.imgur.com/1aPdI.png)

This results in a camera in the wrong place, rotated wrongly. It's not *too* far off: the camera is on the correct side of the Y plane, the translation isn't wildly off, and I think it's the right way up... just not correct yet. The paint-drawn blue circle is where I expect the camera to be.

I've gone through loads of SO answers and the documentation a dozen times, but not quite found anything right. I'm pretty sure I've covered everything I need to, space-conversion-wise, but maybe I've missed something obvious? Or am I doing something in the wrong order?

**Update 1 - world-space plane**

I've changed my world-space floor plane to XY (Z up) to match the input for OpenCV (gWorldToCalibration is now an identity matrix). The rotation is still wrong, and the projection output is the same, but I think the translation is correct now (it's certainly on the correct side of the markers).

![3D view on XY plane](https://i.stack.imgur.com/SpusA.png)

**Update 2 - real image size**

I'm playing with the image size going into the camera calibration; seeing as I'm using (1,1), which is normalised, but the imageSize parameter is in integers, I thought this might be significant... and I guess it is (the red box is where the projected view-space points intersect with the z=0 floor plane). Without any distortion correction, here is the result. The only thing changed is imageSize, from 1,1 to 640,480; I multiply my normalised input view-space coords by 640,480 too (see the sketch after this update).

![result with 640x480 image size](https://i.stack.imgur.com/fSpTS.png)

I'm going to try and add distortion correction to see if it lines up *perfectly*...
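The scaling mentioned in Update 2 is just this. A sketch, where `viewSpacePoints` is the same placeholder as in the calibration sketch above, and 640x480 is the size I'm testing with:

```cpp
// scale normalised ([0,1]) view-space points into pixel coordinates so they
// match the 640x480 imageSize now being passed to calibrateCamera
cv::Size pixelImageSize(640, 480);
std::vector<cv::Point2f> pixelPoints;
pixelPoints.reserve(viewSpacePoints.size());
for (size_t i = 0; i < viewSpacePoints.size(); i++)
    pixelPoints.push_back(cv::Point2f(viewSpacePoints[i].x * pixelImageSize.width,
                                      viewSpacePoints[i].y * pixelImageSize.height));
```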
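One check I'm also planning to add while debugging (a sketch, reusing the names from the calibration sketch above): reproject the calibration-space world points through the recovered pose and compare them against the 2D input points, to separate calibration error from errors in the OpenCV-to-OpenGL conversion:

```cpp
// reproject the planar world points through the recovered extrinsics and
// intrinsics, then measure the RMS distance to the original 2D input points
std::vector<cv::Point2f> reprojected;
cv::projectPoints(worldPointsXY, ObjectRotations[0], ObjectTranslations[0],
                  cameraMatrix, distCoeffs, reprojected);

double rmsReprojection =
    cv::norm(imagePoints[0], reprojected, cv::NORM_L2) /
    std::sqrt((double)imagePoints[0].size());
// a small error here combined with a wrong OpenGL camera would point at the
// OpenCV-to-OpenGL conversion rather than at the calibration itself
```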