Base (Private) Module: parsers/_pdfbaseparser.py

Purpose:

This module provides generalised base functionality for parsing PDF documents.

Platform:

Linux/Windows | Python 3.10+

Developer:

J Berendt

Email:

development@s3dev.uk

Comments:

n/a

Attention:

This module is not designed to be interacted with directly, but only via the appropriate interface class(es).

Rather, please create an instance of a PDF document parsing object using the following class:

class _PDFBaseParser(path: str)

Bases: object

Base class containing generalised PDF parsing functionality.

property doc: DocPDF

Accessor to the document object.

_get_crop_coordinates(skip_header: bool = False, skip_footer: bool = False) → tuple[float]

Determine the bounding box coordinates.

These coordinates are used for removing the header and/or footer.

Parameters:
  • skip_header (bool, optional) – If True, set the coordinates such that the header is skipped. Defaults to False.

  • skip_footer (bool, optional) – If True, set the coordinates such that the footer is skipped. Defaults to False.

Logic:

When excluding a header and/or footer, the following page numbers are used for header/footer position detection, given the length of the document:

  • 1 page: page 1

  • 2 to 10 pages: page 2

  • 11 or more pages: page 5

Returns:

A bounding box tuple of the following form, to be passed directly into the Page.crop() method:

(x0, top, x1, bottom)

Return type:

tuple

_open() → None

Open the PDF document for reading.

Before opening the file, a test is performed to ensure the PDF is valid. The file must:

  • exist

  • be a valid PDF file, per the file signature

  • have a .pdf file extension

Other Operations:
  • Store the pdfplumber parser object returned from the pdfplumber.open() function into the self._doc._parser attribute.

  • Store the number of pages into the self._doc._npages attribute.

  • Store the document’s metadata into the self._doc._meta attribute.

Raises:
  • TypeError – Raised if the file type criteria above are not met.

static _prepare_row(row: list) → str

Prepare the table row for writing a table to CSV.

Parameters:

row (list) – A list of strings, constituting a table row.

Processing Tasks:

For each element in the row:

  • Remove any double quote characters (ASCII and Unicode).

  • Replace any empty values with 'None'.

  • If the element contains a comma, wrap the element in double quotes.

  • Attempt to convert any non-ASCII characters to an associated ASCII character. If the replacement cannot be made, the character is replaced with a '?'.

Returns:

A processed comma-separated string, ready to be written to a CSV file.

Return type:

str
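The processing tasks above can be sketched as a free function. This is a hypothetical reimplementation, not the module’s own code; the transliteration step here uses Unicode decomposition, with any character that still cannot be mapped replaced by '?'.

```python
import unicodedata

def prepare_row(row: list) -> str:
    """Apply the documented processing tasks to each row element
    (hypothetical sketch of the static method)."""
    out = []
    for element in row:
        e = str(element)
        # Remove ASCII and curly Unicode double-quote characters.
        for quote in ('"', '\u201c', '\u201d'):
            e = e.replace(quote, '')
        # Replace any empty values with 'None'.
        if not e:
            e = 'None'
        # Decompose accented characters, drop the combining marks,
        # and replace anything still non-ASCII with '?'.
        e = unicodedata.normalize('NFKD', e)
        e = ''.join(c for c in e if not unicodedata.combining(c))
        e = e.encode('ascii', errors='replace').decode('ascii')
        # Wrap comma-containing elements in double quotes so the
        # CSV columns stay intact.
        if ',' in e:
            e = f'"{e}"'
        out.append(e)
    return ','.join(out)
```

For example, `prepare_row(['a', '', 'b,c', 'café'])` returns `'a,None,"b,c",cafe'`.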

_scan_common() → list[str]

Scan the PDF document to find the most common lines.

Rationale:

Generally, the most common lines in a document will be the header and footer, as these are expected to be repeated on each page of the document.

‘Most common’ is defined as a line occurring on at least 90% of the pages throughout the document. For this reason, only documents with more than three pages are scanned; on shorter documents, the 90% threshold may exclude relevant pieces of the document (as was discovered in testing).

Logic:

For documents with more than three pages, the entire PDF is read through and each line extracted. The occurrence of each line is counted, with the most common occurrences returned to the caller.

The returned lines are to be passed into a page search to determine the x/y coordinates of the header and footer.

Returns:

For documents with more than three pages, a list containing the most common lines in the document. Otherwise, an empty list is returned.

Return type:

list
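The counting logic described above can be sketched with a `Counter`. This is a hypothetical free-function version: `pages` stands in for the extracted text of each page, and the 90% threshold is exposed as a parameter.

```python
from collections import Counter

def scan_common(pages: list[str], threshold: float = 0.9) -> list[str]:
    """Return lines occurring on at least ``threshold`` of the pages.

    Hypothetical sketch of the scan: documents of three pages or
    fewer return an empty list, per the rationale above.
    """
    if len(pages) <= 3:
        return []
    counts = Counter()
    for text in pages:
        # Count each distinct line once per page, so a line repeated
        # within a single page is not over-counted.
        counts.update(set(text.splitlines()))
    cutoff = threshold * len(pages)
    return [line for line, n in counts.items() if n >= cutoff]
```

The returned lines would then be searched for on a page to locate the header/footer coordinates.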

_set_paths() → None

Set the document’s file path attributes.