You have three problems:

1. The rectangles are not very strict rectangles (the edges are often somewhat curved).
2. There are a lot of them.
3. They are often connected.

It seems that all your rects are essentially the same size(?), and do not greatly overlap, but the pre-processing has connected them.

For this situation the approach I would try is:

1. [dilate](http://docs.opencv.org/doc/tutorials/imgproc/erosion_dilatation/erosion_dilatation.html) your image a few times (as also suggested by @krzych) - this will remove the connections, but result in slightly smaller rects.
2. Use scipy to [label](http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.measurements.label.html) and [find_objects](http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.measurements.find_objects.html) - you then know the position and slice of every remaining blob in the image.
3. Use [minAreaRect](http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=minarearect#minarearect) to find the center, orientation, width and height of each rectangle.

You can use step 3 to test whether a blob is a valid rectangle or not, by its area, dimension ratio or proximity to the edge.

This is quite a nice approach: since we assume each blob is a rectangle, `minAreaRect` finds the parameters of our minimum enclosing rectangle.
Further, we could test each blob using something like `cv2.HuMoments` if absolutely necessary.

Here is what I was suggesting in action, boundary collision matches shown in red.

![enter image description here](https://i.stack.imgur.com/COmNX.png)

Code:

```python
import numpy as np
import cv2
from scipy import ndimage

im_col = cv2.imread('jdjAf.jpg')
im = cv2.imread('jdjAf.jpg', cv2.IMREAD_GRAYSCALE)

# Threshold and invert: the rectangles become white blobs on black.
im = np.where(im > 100, 0, 255).astype(np.uint8)
# Erode the inverted image to break the connections between blobs.
im = cv2.erode(im, None, iterations=8)

im_label, num = ndimage.label(im)
for label in range(1, num + 1):
    # Collect the blob's pixel coordinates as (x, y) points.
    points = np.array(np.where(im_label == label)[::-1]).T.reshape(-1, 1, 2).astype(np.int32)
    rect = cv2.minAreaRect(points)
    lines = np.array(cv2.boxPoints(rect)).astype(int)
    # Rects touching the image boundary are drawn in red, the rest in blue.
    if any([np.any(lines[:, 0] <= 0), np.any(lines[:, 0] >= im.shape[1] - 1),
            np.any(lines[:, 1] <= 0), np.any(lines[:, 1] >= im.shape[0] - 1)]):
        cv2.drawContours(im_col, [lines], 0, (0, 0, 255), 1)
    else:
        cv2.drawContours(im_col, [lines], 0, (255, 0, 0), 1)

cv2.imshow('im', im_col)
cv2.imwrite('rects.png', im_col)
cv2.waitKey()
```

I think the `watershed` and `distanceTransform` approach demonstrated by @mmgp is clearly superior for segmenting the image, but this simple approach can be effective depending upon your needs.
 
