
Has the server that you are trying to get the files from got indexing switched on?

If so, then it's probably a matter of scraping the page that comes back and then visiting each URL one by one.

If not, then I'm not sure it can be done very easily.

OK, based on the comments below, I think you'll want to do something like this:

```csharp
using System.Collections.Generic;
using System.Windows.Forms;

string indexUrl = "http://www.stackoverflow.com";
WebBrowser browser = new WebBrowser();
browser.Navigate(indexUrl);

// Pump the message loop until the page has finished loading.
do
{
    Application.DoEvents();
} while (browser.ReadyState != WebBrowserReadyState.Complete);

var listOfFilePaths = new List<string>();

// Collect the target of every anchor (a) tag on the index page.
foreach (HtmlElement linkElement in browser.Document.GetElementsByTagName("a"))
{
    var pagePath = linkElement.GetAttribute("href");
    listOfFilePaths.Add(pagePath);
}
```

Note that the WebBrowser control needs to be run in a Windows Forms app to get it to work (easily). The indexUrl variable I used should be changed to the path of the index page of the server (I just used stackoverflow as an example).

The foreach loop extracts all anchor (a) tags out of the page, gets the path each one points to, and adds it to the listOfFilePaths collection.

Once this code has finished executing, the listOfFilePaths collection will contain an entry for every link on the index page, and hence a link to every file on the server.

From here it's a matter of looping round the listOfFilePaths collection and downloading each file one by one, perhaps with some rules to skip the types of files you're not interested in; a sketch of that loop follows below. I believe from what you've said you should be able to do this.

Hope this helps.
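If it helps, here is a minimal sketch of that last step. The DownloadAll helper, the saveFolder parameter, and the skip list are my own illustration, not part of the answer above, and it assumes a plain synchronous WebClient download is acceptable:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;

static class FileDownloader
{
    // Downloads every path gathered from the index page, skipping
    // extensions the caller is not interested in.
    public static void DownloadAll(IEnumerable<string> listOfFilePaths, string indexUrl, string saveFolder)
    {
        // Example rule: file types we don't want (adjust to taste).
        var skip = new HashSet<string>(StringComparer.OrdinalIgnoreCase) { ".css", ".js", ".ico" };

        using (var client = new WebClient())
        {
            foreach (var path in listOfFilePaths)
            {
                // Anchor hrefs are often relative, so resolve them against the index URL.
                var fileUri = new Uri(new Uri(indexUrl), path);

                if (skip.Contains(Path.GetExtension(fileUri.LocalPath)))
                    continue;

                var fileName = Path.GetFileName(fileUri.LocalPath);
                if (string.IsNullOrEmpty(fileName))
                    continue; // a directory link rather than a file

                client.DownloadFile(fileUri, Path.Combine(saveFolder, fileName));
            }
        }
    }
}
```

Usage would be something like `FileDownloader.DownloadAll(listOfFilePaths, indexUrl, @"C:\downloads");` once the earlier loop has run. WebClient keeps the example short, but it downloads synchronously; for a large number of files you may want DownloadFileAsync instead.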
 
