Your method of "see if the colors keep differentiating between the license plate white, and the black of the text" is essentially searching for areas where the pixel intensity changes from black to white and back many times. Edge detection can accomplish much the same thing. However, implementing your own method is still a good idea, because you will learn a lot in the process. Heck, why not do both and compare the output of your method with that of a ready-made edge detection algorithm?

At some point you will want a binary image, say with black pixels corresponding to the "not-a-character" label and white pixels corresponding to the "is-a-character" label. Perhaps the simplest way to get one is a thresholding function, but that only works well if the characters have already been emphasized in some way.

As someone mentioned in your other thread, you can do that with the black hat operator, which produces something like this:

(image after black hat operation: https://i.stack.imgur.com/Cq1cv.jpg)

If you then threshold that image with, say, Otsu's method (which determines a global threshold level automatically), you get this:

(thresholded image: https://i.stack.imgur.com/tMmuS.jpg)

There are several ways to clean up that image. For instance, you can find the connected components and throw away those that are too small, too big, too wide, or too tall to be a character:

(filtered components: https://i.stack.imgur.com/cSPaF.jpg)

Since the characters in your image are relatively large and fully connected, this method works well.

Next, you could filter the remaining components based on the properties of their neighbors until you have the desired number of components (= the number of characters). If you then want to recognize the characters, you could compute features for each one and feed them to a classifier, which is usually built with supervised learning.

All the steps above are just one way to do it, of course.

By the way, I generated the images above using OpenCV + Python, which is a great combination for computer vision.