## How to make XPath select multiple table elements with identical id attributes?

I'm currently trying to extract information from a badly formatted web page. Specifically, the page uses the same id attribute for multiple table elements. The markup is equivalent to something like this:

```html
<body>
  <div id="random_div">
    <p>Some content.</p>
    <table id="table_1">
      <tr>
        <td>Important text 1.</td>
      </tr>
    </table>
    <h4>Some heading in between</h4>
    <table id="table_1">
      <tr>
        <td>Important text 2.</td>
        <td>Important text 3.</td>
      </tr>
    </table>
    <p>How about some more text here.</p>
    <table id="table_1">
      <tr>
        <td>Important text 4.</td>
        <td>Important text 5.</td>
      </tr>
    </table>
  </div>
</body>
```

Clearly this is invalid HTML, because the same id is used for multiple elements.

I'm using XPath to try to extract all the text in the various table elements, through the [Scrapy](http://scrapy.org/) framework. My call looks something like this:

```python
hxs.select('//div[contains(@id, "random_div")]//table[@id="table_1"]//text()').extract()
```

So the XPath expression is `//div[contains(@id, "random_div")]//table[@id="table_1"]//text()`.

This returns `[u'Important text 1.']`, i.e. only the contents of the first table whose id is "table_1". It seems that once the parser has come across an element with a certain id, it ignores any later occurrences of that id in the markup. Can anyone confirm this?

**UPDATE**

Thanks for the fast responses below. I have tested my code on a locally hosted page with the same structure as the example above, and the correct result is returned:

```
[u'Important text 1.', u'Important text 2.', ..., u'Important text 5.']
```

So there is nothing wrong with either the XPath expression or the Python calls I'm making. I guess this means the problem lies with the web page itself, which is tripping up either XPath or the HTML parser, `libxml2`.

Does anyone have advice on how I can dig into this a bit further?

**UPDATE 2**

I have successfully isolated the problem. It is actually in the underlying parsing library, `lxml` (which provides Python bindings for the `libxml2` C library).

The problem is that the parser is unable to deal with vertical tabs. I have no idea who coded up the site I am dealing with, but it is *full* of vertical tabs. Web browsers seem to ignore them, which is why running the same XPath queries from Firebug on the site in question succeeds. The simplified example above works fine because it contains no vertical tabs.

For anyone who comes across this issue in Scrapy (or in Python generally), the following fix worked for me; it removes vertical tabs from the HTML response before parsing:

```python
def parse_item(self, response):
    # remove all vertical tabs from the html response
    response.body = filter(lambda c: c != "\v", response.body)
    hxs = HtmlXPathSelector(response)
    items = hxs.select('//div[contains(@id, "random_div")]'
                       '//table[@id="table_1"]//text()').extract()
```
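The failure mode described in UPDATE 2 can be reproduced without Scrapy. Here is a minimal standalone sketch using only the standard library: Python's `xml.etree.ElementTree` is backed by expat, which, like libxml2, treats the vertical tab (U+000B) as an illegal character under XML 1.0. The markup and the `extract_cells` helper are my own simplified stand-ins for the page and callback discussed above, not code from the original site:

```python
import xml.etree.ElementTree as ET

# Simplified markup mirroring the question, with a stray vertical tab
# of the kind that broke parsing on the real page.
PAGE = ('<body><div id="random_div">'
        '<table id="table_1"><tr><td>\vImportant text 1.</td></tr></table>'
        '<table id="table_1"><tr><td>Important text 2.</td></tr></table>'
        '</div></body>')

def extract_cells(raw):
    # Strip vertical tabs before parsing; this mirrors the filter()
    # call in the Scrapy callback above.
    cleaned = raw.replace("\v", "")
    root = ET.fromstring(cleaned)
    # ElementTree's limited XPath: an attribute predicate plus child steps
    # (it does not support '//' in the middle of a path).
    return [td.text for td in root.findall('.//table[@id="table_1"]/tr/td')]
```

Feeding `PAGE` to `ET.fromstring` directly raises a `ParseError` because of the vertical tab; after `extract_cells` strips it, both tables with the duplicated id are matched and their cell text is returned.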