
How can I transform an image so that the projected image is the same as the original?
<p>Problem statement: An image A is projected through a projector, goes through a microscope, and the projected image is captured via a camera through the same microscope as image B. Due to the optical elements, B is rotated, sheared, and distorted with respect to A. Now, I need to transform A into A' before projection such that B is as close to A as possible.</p>

<p>Initial approach: I took a checkerboard pattern, rotated it at various angles (36, 72, 108, ... 324 degrees), and projected it to get a series of A images and B images. I used OpenCV's CalibrateCamera2, InitUndistortMap and Remap functions to convert B into B'. But B' is nowhere near A and rather similar to B (in particular, a significant amount of rotation and shearing is not getting corrected).</p>

<p>The code (in Python) is below. I am not sure if I am doing something stupid. Any ideas for the correct approach?</p>

<pre><code>import pylab
import os
import cv
import cv2
import numpy

# angles - the angles at which the picture was rotated
angles = [0, 36, 72, 108, 144, 180, 216, 252, 288, 324]
# orig_files - list of original picture files used for projection
orig_files = ['../calibration/checkerboard/orig_%d.png' % (angle) for angle in angles]
# img_files - projected image captured by camera
img_files = ['../calibration/checkerboard/imag_%d.bmp' % (angle) for angle in angles]

# Load the images
images = [cv.LoadImage(filename) for filename in img_files]
orig_images = [cv.LoadImage(filename) for filename in orig_files]

# Convert to grayscale. Note: CreateImage takes (width, height).
gray_images = [cv.CreateImage((src.width, src.height), cv.IPL_DEPTH_8U, 1)
               for src in images]
for ii in range(len(images)):
    cv.CvtColor(images[ii], gray_images[ii], cv.CV_RGB2GRAY)
gray_orig = [cv.CreateImage((src.width, src.height), cv.IPL_DEPTH_8U, 1)
             for src in orig_images]
for ii in range(len(orig_images)):
    cv.CvtColor(orig_images[ii], gray_orig[ii], cv.CV_RGB2GRAY)

# The number of ranks and files in the chessboard. OpenCV considers
# the height and width of the chessboard to be one less than these,
# respectively.
rank_count = 11
file_count = 10

# Try to detect the corners of the chessboard. For each image,
# FindChessboardCorners returns (found, corner_points). found is True
# even if it managed to detect only a subset of the actual corners.
img_corners = [cv.FindChessboardCorners(img, (rank_count - 1, file_count - 1))
               for img in gray_images]
orig_corners = [cv.FindChessboardCorners(img, (rank_count - 1, file_count - 1))
                for img in gray_orig]

# The total number of corners will be (rank_count-1)x(file_count-1),
# but if some parts of the image are too blurred/distorted,
# FindChessboardCorners detects only a subset of the corners. In that
# case, DrawChessboardCorners will raise a TypeError.
orig_corner_success = []
for ii, (found, corners) in enumerate(orig_corners):
    if found and (len(corners) == (rank_count - 1) * (file_count - 1)):
        orig_corner_success.append(ii)
    else:
        print orig_files[ii], ': could not find correct corners:', len(corners)

img_corner_success = []
for ii, (found, corners) in enumerate(img_corners):
    if found and (len(corners) == (rank_count - 1) * (file_count - 1)) \
            and (ii in orig_corner_success):
        img_corner_success.append(ii)
    else:
        print img_files[ii], ': number of corners detected is wrong:', len(corners)

# Here we compile all the corner coordinates into single arrays
image_points = []
obj_points = []
for ii in img_corner_success:
    obj_points.extend(orig_corners[ii][1])
    image_points.extend(img_corners[ii][1])

image_points = cv.fromarray(numpy.array(image_points, dtype='float32'))
obj_points = numpy.hstack((numpy.array(obj_points, dtype='float32'),
                           numpy.zeros((len(obj_points), 1), dtype='float32')))
obj_points = cv.fromarray(numpy.array(obj_points, order='C'))
point_counts = numpy.ones((len(img_corner_success), 1), dtype='int32') \
    * ((rank_count - 1) * (file_count - 1))
point_counts = cv.fromarray(point_counts)

# Create the output parameters
cam_mat = cv.CreateMat(3, 3, cv.CV_32FC1)
cv.Set2D(cam_mat, 0, 0, 1.0)
cv.Set2D(cam_mat, 1, 1, 1.0)
dist_mat = cv.CreateMat(5, 1, cv.CV_32FC1)
rot_vecs = cv.CreateMat(len(img_corner_success), 3, cv.CV_32FC1)
tran_vecs = cv.CreateMat(len(img_corner_success), 3, cv.CV_32FC1)

# Do the camera calibration
x = cv.CalibrateCamera2(obj_points, image_points, point_counts,
                        cv.GetSize(gray_images[0]), cam_mat, dist_mat,
                        rot_vecs, tran_vecs)

# Create the undistortion map
xmap = cv.CreateImage(cv.GetSize(images[0]), cv.IPL_DEPTH_32F, 1)
ymap = cv.CreateImage(cv.GetSize(images[0]), cv.IPL_DEPTH_32F, 1)
cv.InitUndistortMap(cam_mat, dist_mat, xmap, ymap)

# Now undistort all the images and save them
for ii, tmp in enumerate(images):
    print img_files[ii]
    image = cv.GetImage(tmp)
    t = cv.CloneImage(image)
    cv.Remap(t, image, xmap, ymap,
             cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    corrected_file = os.path.join(os.path.dirname(img_files[ii]),
                                  'corrected_%s' % (os.path.basename(img_files[ii])))
    cv.SaveImage(corrected_file, image)
    print 'Saved corrected image to', corrected_file
</code></pre>

<p>Here are the images: A, B and B'. Actually, I don't think the Remap is really doing anything!</p>

<p><img src="https://i.stack.imgur.com/epzn4.png" alt="A. Original Image"> <img src="https://i.stack.imgur.com/vypry.png" alt="B. Captured Image"> <img src="https://i.stack.imgur.com/zsNgz.png" alt="B&#39;. Undistorted Image"></p>