
Manipulate PDF files (read, split, combine, move)
<p>I'm trying to figure out a way to deal with scanned PDFs using either Python or PHP. I need to be able to open a multi-page PDF, read its contents, and move the pages to individual PDF files (or one file if they are to be grouped) based on an identifier in the text.</p>

<p>I downloaded and have played around a little with <a href="http://linux.die.net/man/1/pdftotext" rel="nofollow">pdftotext</a>, but am unsure whether that is the best way to go. I took a sample scanned PDF, ran it through pdftotext to a txt file, and grepped around in it a bit. It works OK; I was able to find some identifiers, but I'll need more regex skill for it to be efficient. Where I'm stuck is splitting the PDFs up and moving the pages based on the pdftotext output.</p>

<p>Any ideas?</p>

<hr>

<p>Edit: clarification. The workflow I have in mind:</p>

<ol>
<li>Use pdftotext to dump each page of the PDF to an individual txt file;</li>
<li>grep the txt files for identifiers and compile a list of pages that belong together;</li>
<li>based on that list, extract and combine (if applicable) the related pages and write out a PDF for each group;</li>
<li>move each generated PDF to another location based on its grouping.</li>
</ol>

<hr>

<p>PyPDF seems to be a good place to start.
This is what I have so far:</p>

<pre><code>from pyPdf import PdfFileWriter, PdfFileReader
import re

output = PdfFileWriter()
input1 = PdfFileReader(file("test.PDF", "rb"))
totalPages = input1.getNumPages()
print "total pages to process: " + str(totalPages)

for i in range(totalPages):
    print "processing page %s" % str(i)
    output.addPage(input1.getPage(i))
    text = input1.getPage(i).extractText()  # extract text to search for the identifier
    match = re.search("identifier", text)  # to be replaced with a list of identifiers
    # if there's a match, write out the pages collected so far
    if match:
        outputStream = file("test" + str(i) + ".pdf", "wb")
        output.write(outputStream)
        outputStream.close()
        print 'match on page %s' % str(i)
</code></pre>

<p>From here I can use another library to consolidate the PDFs based on their location.</p>

<p>Another question, though: how robust is Python's re.search function? Can it be relied on, especially when dealing with shaky OCR output?</p>
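<p>The grouping step (2) in the workflow above can be sketched independently of any PDF library, since it only operates on per-page text. This is a minimal illustration, not the asker's code: the <code>PAGE_ID</code> pattern and the sample page texts are hypothetical stand-ins for whatever identifiers the scanned documents actually carry.</p>

```python
import re
from collections import defaultdict

# Hypothetical identifier pattern (e.g. invoice numbers like "INV-00042").
# In practice this would come from the list of identifiers mentioned above.
PAGE_ID = re.compile(r"INV-(\d{5})")

def group_pages(page_texts):
    """Map identifier -> list of 0-based page indices that mention it."""
    groups = defaultdict(list)
    for index, text in enumerate(page_texts):
        match = PAGE_ID.search(text)
        if match:
            groups[match.group(1)].append(index)
    return dict(groups)

# Fake per-page text, shaped like what pdftotext or extractText might return
pages = ["Invoice INV-00042 page 1", "totals for INV-00042", "Invoice INV-00099"]
print(group_pages(pages))  # {'00042': [0, 1], '00099': [2]}
```

<p>The resulting index lists can then be fed back to the PDF library to pull the matching pages into one writer per group.</p>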