<p>In order to properly debug your problem I would need all the code :-)</p>
<p>I assume you are following the approach suggested in the tutorials (<a href="https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html" rel="nofollow noreferrer">calibration</a> and <a href="https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_calib3d/py_pose/py_pose.html" rel="nofollow noreferrer">pose</a>) cited by <a href="https://stackoverflow.com/users/377366/kobejohn">@kobejohn</a> in his <a href="https://stackoverflow.com/questions/19849683/opencv-solvepnp-detection-problems#comment29949323_19849683">comment</a>, and that your code follows these steps:</p>
<ol>
<li>collect various images of a chessboard target</li>
<li>find the chessboard corners in the images of point 1)</li>
<li>calibrate the camera (with <code>cv::calibrateCamera</code>), obtaining the intrinsic camera parameters (let's call them <code>intrinsic</code>) and the lens distortion parameters (let's call them <code>distortion</code>)</li>
<li>collect an image of your own custom target (seen at <a href="http://www.youtube.com/watch?v=IeSSW4MdyfU" rel="nofollow noreferrer">0:57 in your video</a> and shown in the following figure) <img src="https://i.stack.imgur.com/F1189.png" alt="Axadiw&#39;s own custom target"> and find some relevant points in it (let's call the points you found in the image <code>image_custom_target_vertices</code> and the corresponding 3D points <code>world_custom_target_vertices</code>)</li>
<li>estimate the rotation matrix (let's call it <code>R</code>) and the translation vector (let's call it <code>t</code>) of the camera from the image of your own custom target obtained in point 4), with a call to <a href="http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp" rel="nofollow noreferrer"><code>cv::solvePnP</code></a> like this one
<code>cv::solvePnP(world_custom_target_vertices,image_custom_target_vertices,intrinsic,distortion,R,t)</code></li>
<li>given the 8 corners of the cube in 3D (let's call them <code>world_cube_vertices</code>), obtain the 8 2D image points (let's call them <code>image_cube_vertices</code>) by means of a call to <code>cv::projectPoints</code> like this one <code>cv::projectPoints(world_cube_vertices,R,t,intrinsic,distortion,image_cube_vertices)</code></li>
<li>draw the cube with your own <code>draw</code> function.</li>
</ol>
<p>Now, the final result of the draw procedure depends on all the previously computed data, and we have to find where the problem lies:</p>
<p><strong>Calibration</strong>: as you observed in your <a href="https://stackoverflow.com/a/20085179/15485">answer</a>, in 3) you should discard the images where the corners were not properly detected. You need a threshold on the reprojection error in order to discard "bad" chessboard target images. Quoting from the <a href="https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html" rel="nofollow noreferrer">calibration tutorial</a>:</p>
<blockquote>
<p>Re-projection Error</p>
<p>Re-projection error gives a good estimation of just how exact is the found parameters. This should be as close to zero as possible. Given the intrinsic, distortion, rotation and translation matrices, we first transform the object point to image point using cv2.projectPoints(). Then we calculate the absolute norm between what we got with our transformation and the corner finding algorithm. To find the average error we calculate the arithmetical mean of the errors calculate for all the calibration images.</p>
</blockquote>
<p>Usually you will find a suitable threshold with some experiments.
With this extra step you will get better values for <code>intrinsic</code> and <code>distortion</code>.</p>
<p><strong>Finding your own custom target</strong>: you do not seem to explain how you find your own custom target in the step I labeled as point 4). Do you get the expected <code>image_custom_target_vertices</code>? Do you discard images where those results are "bad"?</p>
<p><strong>Pose of the camera</strong>: I think that in 5) you use the <code>intrinsic</code> found in 3); are you sure nothing changed in the camera in the meantime? Referring to <a href="https://stackoverflow.com/a/14878840/15485">Callari's Second Rule of Camera Calibration</a>:</p>
<blockquote>
<p>Second Rule of Camera Calibration: "Thou shalt not touch the lens after calibration". In particular, you may not refocus nor change the f-stop, because both focusing and iris affect the nonlinear lens distortion and (albeit less so, depending on the lens) the field of view. Of course, you are completely free to change the exposure time, as it does not affect the lens geometry at all.</p>
</blockquote>
<p>And then there may be some problems in the <code>draw</code> function.</p>
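<p>To make the reprojection-error check concrete, here is a minimal NumPy-only sketch of the pinhole projection behind <code>cv::projectPoints</code> and of the per-image error you would threshold. It is an illustration of the idea, not code from the question: lens distortion is omitted, and all function names and numeric values are my own assumptions.</p>

```python
# Minimal pinhole-projection sketch (no distortion); illustrative values only.
import numpy as np

def project_points(world_points, R, t, K):
    """Project Nx3 world points to Nx2 pixel coordinates (no distortion)."""
    cam = world_points @ R.T + t        # world frame -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]       # perspective divide by z
    return uv @ K[:2, :2].T + K[:2, 2]  # apply focal lengths and principal point

def mean_reprojection_error(detected, projected):
    """Average Euclidean distance between detected and reprojected corners."""
    return float(np.mean(np.linalg.norm(detected - projected, axis=1)))

# Illustrative camera matrix: fx = fy = 800, principal point at (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

world = np.array([[0.0, 0.0, 2.0],
                  [1.0, 1.0, 2.0]])
pixels = project_points(world, np.eye(3), np.zeros(3), K)
print(pixels)  # rows: [320, 240] and [720, 640]

# Keep a calibration image only if its mean reprojection error against the
# detected corners is below a threshold found experimentally (e.g. 1 pixel).
detected = pixels + np.array([[0.3, 0.4], [0.0, 0.0]])
print(mean_reprojection_error(detected, pixels))  # 0.25
```

<p>The same per-image error, computed with the real <code>cv2.projectPoints</code> output (which also applies <code>distortion</code>), is what lets you discard "bad" chessboard images before recalibrating.</p>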