
Regex: Extracting readable (non-code) text and URLs from HTML documents
I am creating an application that takes a URL as input, retrieves the page's HTML content off the web and extracts *everything that isn't contained in a tag*; in other words, the textual content of the page, as seen by a visitor to that page. That includes 'masking' out everything encapsulated in `<script></script>`, `<style></style>` and `<!-- -->`, since these portions contain text that is not enveloped within a tag (but is best left alone).

I have constructed this regex:

```
(?:<(?P<tag>script|style)[\s\S]*?</(?P=tag)>)|(?:<!--[\s\S]*?-->)|(?:<[\s\S]*?>)
```

It correctly selects all the content that I want to ignore and leaves only the page's text content. However, that means that what I want to extract won't show up in the match collection (I am using VB.NET in Visual Studio 2010).

Is there a way to "invert" the matching of a whole document like this, so that I'd get matches on all the text strings that are left out by the regex above?

So far, what I did was to add another alternative at the end that selects "any sequence that doesn't contain < or >", which then matches the leftover text. I named that last bit in a capture group, and when I iterate over the matches, I check for the presence of text in the "text" group. This works, but I was wondering whether it is possible to do it all through regex and *just* end up with matches on the plain text.

This is supposed to work generically, without knowing any specific tags in the HTML; it's supposed to extract **all** text. Additionally, I need to preserve the original HTML so the page retains all its links and scripts. I only need to be able to extract the text so that I can perform searches and replacements within it, without fear of "renaming" any tags, attributes or script variables (so I can't just do a "replace with nothing" on all the matches I get, because even though that would leave me with what I need, it's a hassle to reinsert it back into the correct places of the fully functional document).

I want to know if this is at all possible using regex (I know about HTML Agility Pack and XPath, but don't feel like using them).

Any suggestions?

**Update:** Here is the (regex-based) solution I ended up with: http://www.martinwardener.com/regex/, implemented in a demo web application that shows the active regex strings along with a test engine which lets you run the parser on any online HTML page, giving you parse times and extracted results (for link, URL and text portions individually, as well as views where all the regex matches are highlighted in place in the complete HTML document).
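For illustration, here is a minimal VB.NET sketch of the approach described above; the module and function names are placeholders, not taken from the original application. It adds a final `(?<text>[^<>]+)` alternative so that plain-text runs surface as their own named matches (note that .NET named groups use `(?<name>...)` and `\k<name>` rather than the `(?P<name>...)`/`(?P=name)` syntax shown in the question), and it also sketches how the same pattern could drive a search-and-replace that only touches the text portions while leaving tags, comments and scripts untouched:

```
' Hedged sketch only: module and function names are illustrative.
Imports System.Collections.Generic
Imports System.Text.RegularExpressions

Module TextExtractionSketch

    ' The question's mask regex in .NET named-group syntax, plus a final
    ' alternative that captures the leftover (plain text) runs.
    ' Using + instead of * avoids zero-length "text" matches.
    Private ReadOnly MaskPattern As String =
        "(?:<(?<tag>script|style)[\s\S]*?</\k<tag>>)|(?:<!--[\s\S]*?-->)|(?:<[\s\S]*?>)|(?<text>[^<>]+)"

    ' Returns every run of text that is not inside a tag, comment,
    ' script block or style block.
    Public Function ExtractText(html As String) As List(Of String)
        Dim pieces As New List(Of String)
        For Each m As Match In Regex.Matches(html, MaskPattern, RegexOptions.IgnoreCase)
            ' Only matches where the "text" group participated are plain text.
            If m.Groups("text").Success Then
                pieces.Add(m.Groups("text").Value)
            End If
        Next
        Return pieces
    End Function

    ' Replaces "find" with "replacement" inside the text portions only,
    ' returning the full document with all tags and scripts intact.
    Public Function ReplaceInText(html As String, find As String, replacement As String) As String
        Return Regex.Replace(html, MaskPattern,
                             Function(m As Match)
                                 If m.Groups("text").Success Then
                                     Return m.Value.Replace(find, replacement)
                                 End If
                                 ' Tags, comments and scripts pass through unchanged.
                                 Return m.Value
                             End Function,
                             RegexOptions.IgnoreCase)
    End Function

End Module
```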
Comments:

1. You can surround short code blocks with `` ` `` characters.
2. The regex does a beautiful job of selecting just the right portions, probably more elegantly than any DOM-based method would (or..?). So except for the "inversion" part, I'm pretty happy with using regex; it's very compact, code-wise. I have two candidate methods to make this work. The first is to add an extra piece to the regex (`|(?P<text>[^<>]*)`) that selects the leftover text as an isolated match; since that capture group has a name, it can be tested for in the ensuing iteration. This works, except that I noticed it also picked up a couple of other "matches" that baffled me.
3. The other possibility (I haven't thought this through, but it should work, although it involves the kind of bookkeeping I wanted to avoid) is to use the regex from the main post, which leaves out the text parts, and "manually" track the matches. Since each match reports where in the page it started as well as the length of the matched string, I could use the gap between the index + length of one match and the index of the next match to determine what represents a pure text portion of the document, not caught up in any of the 'masks'. A rough sketch of this idea follows below.
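For completeness, here is a minimal sketch of that second idea, again with illustrative names only: the mask regex (in .NET syntax, without a text alternative) is run as-is, and the gaps between consecutive matches, plus anything after the last match, are collected together with their offsets so the text can later be spliced back into the original document:

```
' Hedged sketch: runs the mask regex and treats the gaps between matches as text.
Imports System.Collections.Generic
Imports System.Text.RegularExpressions

Module GapTrackingSketch

    ' The question's mask regex in .NET named-group syntax, with no text alternative.
    Private ReadOnly MaskPattern As String =
        "(?:<(?<tag>script|style)[\s\S]*?</\k<tag>>)|(?:<!--[\s\S]*?-->)|(?:<[\s\S]*?>)"

    ' Returns (offset, text) pairs: the offset makes it possible to write
    ' edited text back into the original document at the right place.
    Public Function ExtractTextWithOffsets(html As String) As List(Of KeyValuePair(Of Integer, String))
        Dim result As New List(Of KeyValuePair(Of Integer, String))
        Dim cursor As Integer = 0
        For Each m As Match In Regex.Matches(html, MaskPattern, RegexOptions.IgnoreCase)
            ' Anything between the end of the previous mask match and the
            ' start of this one is plain text.
            If m.Index > cursor Then
                result.Add(New KeyValuePair(Of Integer, String)(cursor, html.Substring(cursor, m.Index - cursor)))
            End If
            cursor = m.Index + m.Length
        Next
        ' Trailing text after the last mask match, if any.
        If cursor < html.Length Then
            result.Add(New KeyValuePair(Of Integer, String)(cursor, html.Substring(cursor)))
        End If
        Return result
    End Function

End Module
```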