PDFParser

Configuration options

There are three ways of configuring the PDFParser.

  1. Programmatically via setter methods on the PDFParser.
  2. Programmatically via the PDFParserConfig object submitted through the ParseContext.
  3. Via the tika-config.xml file (many thanks to Thamme Gowda and Chris Mattmann's work on TIKA-1508).

The first two are fairly self-explanatory through the javadocs.
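The first two options can be sketched as follows. This is a minimal illustration, not a definitive recipe: the file name is a placeholder, and the setter shown (setExtractInlineImages) is one of the PDFParserConfig options documented in the javadocs.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.pdf.PDFParserConfig;
import org.apache.tika.sax.BodyContentHandler;

public class PDFConfigExample {
    public static void main(String[] args) throws Exception {
        // Option 2: configure via a PDFParserConfig submitted through the ParseContext
        PDFParserConfig pdfConfig = new PDFParserConfig();
        pdfConfig.setExtractInlineImages(true);

        ParseContext context = new ParseContext();
        context.set(PDFParserConfig.class, pdfConfig);

        AutoDetectParser parser = new AutoDetectParser();
        BodyContentHandler handler = new BodyContentHandler(-1);
        Metadata metadata = new Metadata();
        // "test.pdf" is a placeholder file name
        try (InputStream is = Files.newInputStream(Paths.get("test.pdf"))) {
            parser.parse(is, handler, metadata, context);
        }
        System.out.println(handler.toString());
    }
}
```

For option 1, you can instead instantiate org.apache.tika.parser.pdf.PDFParser directly and call the setters on its default PDFParserConfig before parsing.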

Here follows an example tika-config.xml file that sets catchIntermediateExceptions to false and checks whether the PDF allows extraction for accessibility.

<?xml version="1.0" encoding="UTF-8"?>
<properties>
    <parsers>        
        <parser class="org.apache.tika.parser.DefaultParser">
            <parser-exclude class="org.apache.tika.parser.pdf.PDFParser"/>
        </parser>
        <parser class="org.apache.tika.parser.pdf.PDFParser">
            <params>
                <param name="allowExtractionForAccessibility" type="bool">true</param>
                <param name="catchIntermediateExceptions" type="bool">false</param>
                <!-- we really should throw an exception for this.
                     We are currently not checking -->
                <param name="someRandomThingOrOther" type="bool">true</param>
            </params>
        </parser>
    </parsers>
</properties>
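To use a tika-config.xml like the one above from code, you can load it via TikaConfig. A short sketch (the file path is a placeholder):

```java
import org.apache.tika.config.TikaConfig;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.Parser;

public class LoadConfigExample {
    public static void main(String[] args) throws Exception {
        // Path to your config file is a placeholder
        TikaConfig config = new TikaConfig("tika-config.xml");
        Parser parser = new AutoDetectParser(config);
        // use parser.parse(...) as usual
    }
}
```

Alternatively, you can point the tika.config system property or the TIKA_CONFIG environment variable at the file, and the default TikaConfig will pick it up.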

Optional Dependencies

If you need to process TIFF or JPEG2000 images within PDFs (either for inline image extraction or OCR), please consider adding the optional dependencies specified by PDFBox. These dependencies are not compatible with ASL 2.0; please make sure that any third party licenses are suitable for your project.

Finally, M. Caruana Galizia alerted us to the need to use maven-shade's ServicesResourceTransformer, because the third-party dependencies' META-INF/services files will otherwise be overwritten during shading. See an example: here.
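A sketch of the relevant maven-shade-plugin configuration (the plugin version shown is illustrative; the transformer class name is the real one shipped with the plugin):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.4</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- merges META-INF/services entries instead of overwriting them -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
```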

OCR

Note: the configuration of some of these features via the config file requires a nightly build of Tika after 11/8/2016 or Tika version >= 1.15.

Start with the instructions on TikaOCR. In short, you need to have Tesseract installed.

There are two ways of running OCR on PDFs:

  1. Extracting the inline images and letting Tesseract run on each inline image.
  2. Rendering each PDF page as a single image and running Tesseract on that single image.

We have not carried out evaluations to determine which strategy is better. We suspect that the tried and true It Depends(TM) is operative here. We added the rendered-single-image option because some PDFs can contain hundreds of images per page, where each image is a tiny part of the overall page and OCR on each individual image would be useless. However, we recognize that if the page is logically broken into sections, running OCR on the individual inline images might yield better results.

Option 1: Configuring OCR on Inline Images

This will extract inline images as if they were attachments, and then, if Tesseract is correctly configured, it should run against the images. Note: by default, extracting inline images is turned off because some rare PDFs contain thousands of inline images per page, and it can have a big impact on performance, in both memory usage and time.

...
        <parser class="org.apache.tika.parser.pdf.PDFParser">
            <params>
                <param name="extractInlineImages" type="bool">true</param>
            </params>
        </parser>
...
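The same setting can be applied programmatically. A minimal sketch using PDFParserConfig through the ParseContext:

```java
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.pdf.PDFParserConfig;

public class InlineImagesConfig {
    public static void main(String[] args) {
        PDFParserConfig pdfConfig = new PDFParserConfig();
        pdfConfig.setExtractInlineImages(true);

        ParseContext context = new ParseContext();
        context.set(PDFParserConfig.class, pdfConfig);
        // pass this context into parser.parse(...)
    }
}
```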

Option 2: Configuring OCR on Rendered Pages

This will render each PDF page and then run OCR on that image. This method of OCR is triggered by the ocrStrategy parameter, but users can manipulate other parameters, including the image type (see org.apache.pdfbox.rendering.ImageType for options) and the dots per inch (ocrDPI). The defaults are gray and 300, respectively. For ocrStrategy, we currently have: no_ocr (rely on regular text extraction only), ocr_only (don't bother extracting text, just run OCR on each page) and ocr_and_text (both extract text and run OCR). We should add more advanced strategies (e.g., if you only get 10 words out of a page, run OCR), but we haven't implemented those yet.

...
        <parser class="org.apache.tika.parser.pdf.PDFParser">
            <params>
                <param name="ocrStrategy" type="string">ocr_only</param>
                <param name="ocrImageType" type="string">rgb</param>
                <param name="ocrDPI" type="int">100</param>
            </params>
        </parser>
...
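The equivalent programmatic configuration, as a sketch (setter names are taken from the PDFParserConfig javadocs; verify them against your Tika version):

```java
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.pdf.PDFParserConfig;

public class OcrRenderedPages {
    public static void main(String[] args) {
        PDFParserConfig pdfConfig = new PDFParserConfig();
        pdfConfig.setOcrStrategy("ocr_only"); // or "no_ocr", "ocr_and_text"
        pdfConfig.setOcrDPI(100);             // default is 300

        ParseContext context = new ParseContext();
        context.set(PDFParserConfig.class, pdfConfig);
        // pass this context into parser.parse(...)
    }
}
```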

Optional Dependencies

Note: to process JBIG2 images, you should include the following dependency:

    <dependency>
        <groupId>org.apache.pdfbox</groupId>
        <artifactId>jbig2-imageio</artifactId>
        <version>3.0.2</version>
    </dependency>

Note: if their licenses are compatible with your application, you may want to include the following JAI libraries on your classpath to handle jp2/jpeg2000 and tiff files. These licenses are not Apache 2.0 compatible!

    <dependency>
        <groupId>com.github.jai-imageio</groupId>
        <artifactId>jai-imageio-core</artifactId>
        <version>1.4.0</version>
    </dependency>
    <dependency>
        <groupId>com.github.jai-imageio</groupId>
        <artifactId>jai-imageio-jpeg2000</artifactId>
        <version>1.3.0</version>
    </dependency>

If you are using Java 8, make sure to see pdf-rendering for JVM settings that may improve the speed of processing.
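For example, the PDFBox documentation recommends enabling the older KCMS color management provider on Java 8, which is typically much faster than the default LittleCMS for rendering:

```shell
java -Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider ...
```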

Common Text Extraction Challenges with PDFs

This is mostly a stub. The focus of this section is on extracting electronic text from the PDF with no OCR.

One could write several volumes on how text extraction from PDFs could go wrong. It would only be poetic justice for said author to print out those volumes, pour coffee on the paper, scan them in as PDFs on different scanners, some with OCR, some without, at different angles of rotation with user-defined fonts randomly deleted.

High level preliminaries:

  1. Your matrix algebra (or your tool's matrix algebra) has to be moderately advanced to do text extraction well.
  2. The PDF format is display-based, not text-based.
    a. One major goal is to display the same content on different devices.
    b. A PDF may be image-only and contain no actual electronic text.
    c. When there is electronic text, there may be no space characters stored in the text; rather, spaces may appear in the rendering of the page via specific coordinates for the characters.
  3. The PDF format is page-based.

Tables Aren't Extracted as Tables

Right. Tables aren't stored as tables in PDF files. A human is easily able to see tables, but all that is stored in the PDF is text chunks and coordinates on a page (if there's any text at all). One needs to apply some advanced computation to extract table structure from a PDF. Tika does not currently do this. Please see TabulaPDF as one open source project that extracts tables from PDFs and maintains their structure.

No Text

Mildly Garbled Text

Completely Garbled Text

No spaces/Extra spaces

See 2c. above. Depending on how the PDF was generated, it is possible that it doesn't store actual space characters. Rather, software has to use coordinates on the page, plus matrix algebra, plus font information about the width of characters to "impute" where spaces would be. The math is the simple part; sometimes there can be missing or wrong font information that leads to no spaces or extra spaces.

Word/Line breaks in the middle of my text?!

Character Encoding/Unicode Mappings

See also diagnosing PDF text problems.

See also

Upgrading to PDFBox 2.x

PDFParser (Apache PDFBox) (last edited 2019-05-15 19:54:07 by TimothyAllison)