
Reflections on the Process of Network Analysis Dataset Creation Using Line Content Classification with Python

By Simon Krahé

Editorial note: Simon Krahé is finishing his undergraduate studies in English and American Studies, History and Computer Science at the University of Wuppertal. He will use network analysis methods in his bachelor’s thesis. His research interests include digital humanities and digital history practices, especially the use of Natural Language Processing (NLP) methods on large corpora of cultural heritage data.

Network analysis can be a valuable tool for digital history research: it can lead to new insights into the social structures of historical actors and organizations and can also track developments with a diachronic approach, i.e., by comparing data at different points in time. In the past months, I have been doing network analysis as part of a larger project at the GHI in Washington, DC, led by Prof. Dr. Simone Lässig, analyzing the history of the Arnhold family.1 The Arnholds were a German Jewish family that had operated a family bank in Dresden since 1864. Members of the family were part of a larger business network in Germany and also served as board members at other companies. In the context of this project, network analysis offers the opportunity to examine changes in a network of board members and corresponding companies in Germany in the 1920s and 30s. In the following blog post, I will demonstrate how structured print data can be efficiently converted, without manual transcription or correction, into a dataset that can be used for such a network analysis. The basis for this is the 1932 “Adreßbuch der Direktoren und Aufsichtsräte (Band 1)” (address book of directors and boards, volume one).2 I have used a Python Jupyter notebook for the implementation.3

From Page to Digital Artifact

As a first step, I examined the source material visually to determine its suitability for digitization. A typical page (see fig. 1) of the address book consists of five types of content: page numbers (red), name lines (blue) that include names, titles, and often a listed occupation, addresses (green) with varying amounts of specificity, position descriptions (purple), and the companies the descriptions relate to (yellow). Fortunately, at first glance, all of these types can be distinguished based on formal characteristics such as alignment to the left or the right, the level of indentation, and the use of italics, which makes these pages a promising case for efficient digitization (here meaning a transformation into a digital, queryable format that preserves the semantic context of the print data).

Figure 1: Page 10 of the 1932 address book with the different data types highlighted 4

After assessing the page layout, the document had to be scanned. Luckily, the Deutsches Museum, a museum dedicated to the history of science and technology, has already digitized the address book and made it available online. The next step was to extract the text itself, which requires the use of optical character recognition (OCR) techniques. While there are multiple software solutions for OCR, I used Transkribus with the Transkribus Print M1 model for this project due to its superior accuracy compared to Tesseract 5 and Abbyy FineReader 14.5 In addition, Transkribus made it easier to work with its output data for the purpose of this network analysis.

Transkribus’ user interface structurally mimics the output format it provides, PRIMA PAGE XML. One can see a hierarchical structure that consists of regions, within which lines reside. Lines, in turn, consist of smaller line segments drawn between individual points, as can be seen in fig. 2. The points’ coordinates are relative to the top left of the overall area. The PAGE XML representation (fig. 3) also includes coordinates of the text on the word level, but these have no further use here. Rather, the XML element “TextLine” and its children “Coords” and “TextEquiv”/“Unicode” contain the information on the coordinates of the text and the text itself. It becomes clear here that the text can be processed on a line-by-line basis.6 In the specific example shown in figs. 2 and 3, the original line “Agethen, Franz;” is split into multiple “Word” elements (namely “Agethen,” and “Franz,”) in the OCR output, each with specific coordinates. The entire line’s content, “Agethen, Franz,” can be seen in line 58 of fig. 3, and its coordinates in line 43.

Figure 2: Transkribus user interface7

Figure 3: XML representation of the same excerpt8

Line by Line Processing

Interfacing with the XML documents that contain the lines was done through the “xml” package for Python. For each “TextLine” element, I saved its coordinates and the associated text. I did not, however, require all coordinates, since the maximum and minimum values on the x and y axes suffice to assess which area is covered by a line. These values, alongside the maximum coordinates of the text on the overall page, were saved.

Figure 4: XML representation of a line with the information that is saved or further processed highlighted
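To make this concrete, the following is a minimal sketch of this parsing step using Python’s standard “xml.etree.ElementTree” module. The embedded PAGE XML fragment is abridged and simplified; the namespace URI and element names reflect typical Transkribus PAGE output, but the coordinate values and identifiers are illustrative rather than taken from the project files.

    # A minimal parsing sketch; the PAGE XML sample below is abridged and
    # its values are illustrative.
    import xml.etree.ElementTree as ET

    SAMPLE = """<PcGts xmlns="http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15">
      <Page imageWidth="2000" imageHeight="3000">
        <TextRegion id="r1">
          <TextLine id="l1">
            <Coords points="120,340 480,340 480,372 120,372"/>
            <TextEquiv><Unicode>Agethen, Franz,</Unicode></TextEquiv>
          </TextLine>
        </TextRegion>
      </Page>
    </PcGts>"""

    NS = {"p": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}

    def extract_lines(xml_string):
        """Return one dict per TextLine with its text and bounding box."""
        lines = []
        for text_line in ET.fromstring(xml_string).iter("{" + NS["p"] + "}TextLine"):
            points_attr = text_line.find("p:Coords", NS).get("points")
            # "x1,y1 x2,y2 ..." -> list of (x, y) integer tuples
            points = [tuple(map(int, p.split(","))) for p in points_attr.split()]
            xs, ys = zip(*points)
            text = text_line.find("p:TextEquiv/p:Unicode", NS).text or ""
            lines.append({"text": text,
                          "min_x": min(xs), "max_x": max(xs),
                          "min_y": min(ys), "max_y": max(ys)})
        return lines

    print(extract_lines(SAMPLE))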

This information was then sufficient to classify each line. The following rules, applied in this order, constitute the basic algorithm; a Python sketch of the classification logic follows the list. Some edge cases requiring extra consideration are introduced at a later point in this post.

  1. If a line begins at the very left of a page, i.e., its leftmost point is close* to the left edge of the line positions, it is considered a name line, containing the name and potentially other information like the occupation of a person.

  2. If a line is indented only once and its rightmost point is not close* to the right page edge, it is classified as the beginning of a new company name. Indentation can be measured with a constant that describes the starting position relative to the overall page width. Excluding lines that border the right edge of the page minimizes false positives in the form of long addresses that spread so far to the left that they start at the same point as company information lines do.
    An exception occurs if no position description (e.g., “board member of”) was read between the last read person and the given line; in this case it is not a company name but a continuation of the person’s name on a second line. Here, the line’s content is added to the previous line, and the current line itself is set to be ignored in further processing.

  3. If a line is indented twice and does not reach the right margin of the page,* it is assumed to contain a position description.
    A similar edge case to above applies, as long company names that continue on the following line are then indented again, starting on the same level. A distinction can be easily made by the human eye, as position descriptions are written in italics and company names are not. Unfortunately, neither is this information transcribed in any of the tested sufficiently well performing OCR tools, nor is the rule of using italics applied without exception by the original authors.9 However, there is still a way to distinguish the two fields as the position description text is thankfully quite predictable. By examining the source materials, certain keywords can be found with which the differentiation can be made.

  4. If the line reaches the right edge of the page*, it contains an address.

  5. If neither of the previous cases applies, the line is classified as unclear. This result is logged.

* tolerance margins apply
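Sketched in Python, the classification logic could look as follows. All constants (tolerances, indentation levels, keywords) are illustrative placeholders rather than the values used in the project, and the line dictionaries are those produced by the parsing sketch above.

    # Illustrative constants; the actual values were tuned against the source.
    LEFT_TOLERANCE = 0.02   # fraction of the page width
    RIGHT_TOLERANCE = 0.02
    INDENT_1 = 0.06         # first indentation level, relative to page width
    INDENT_2 = 0.12         # second indentation level
    POSITION_KEYWORDS = ("Vorstand", "Aufsichtsrat", "Direktor")  # examples only

    def classify_line(line, page_left, page_right, page_width, last_was_position):
        """Apply rules 1-5 in order to a line dict with 'min_x', 'max_x', 'text'."""
        rel_left = (line["min_x"] - page_left) / page_width
        touches_right = line["max_x"] >= page_right - RIGHT_TOLERANCE * page_width

        if rel_left <= LEFT_TOLERANCE:                              # rule 1
            return "name"
        if rel_left <= INDENT_1 + LEFT_TOLERANCE and not touches_right:
            # rule 2; without a preceding position description this is a
            # continuation of the previous name line (merged by the caller)
            return "company" if last_was_position else "name_continuation"
        if rel_left <= INDENT_2 + LEFT_TOLERANCE and not touches_right:
            # rule 3; keywords separate position descriptions from wrapped
            # company names, since italics are not preserved by the OCR
            if any(kw in line["text"] for kw in POSITION_KEYWORDS):
                return "position"
            return "company_continuation"
        if touches_right:                                           # rule 4
            return "address"
        return "unclear"                                            # rule 5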

Once the initial classification process had been completed, I moved on to the next step and revisited all lines for the purpose of encoding the relationships contained within the data. The principal logic behind this step is quite simple, as the following pseudocode illustrates (a Python sketch follows):

After a person entry is read:
    Create a new entry for that person
After an address entry is read:
    Add the address as that person’s address in their dataset
After a position information entry is read:
    Remember the position for the company entries that follow
After a company is read:
    Add the company to the person’s dataset under the above relationship
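A minimal Python sketch of this grouping pass, assuming the classified lines from the previous step (with continuation lines already merged), could look like this; the field names are illustrative, and the actual script stores additional information:

    def build_records(classified_lines):
        """Group (kind, text) pairs into one dict per person."""
        persons = []
        current_person = None
        current_position = None
        for kind, text in classified_lines:
            if kind == "name":
                # a name line opens a new record and resets the position state
                current_person = {"name": text, "address": None, "positions": []}
                current_position = None
                persons.append(current_person)
            elif kind == "address" and current_person:
                current_person["address"] = text
            elif kind == "position":
                current_position = text
            elif kind == "company" and current_person:
                current_person["positions"].append(
                    {"position": current_position, "company": text})
        return persons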

The stored data at this point then closely resembled the print version, as seen in fig. 5.

Figure 5: Excerpt of data saved internally in dictionaries and lists

In implementing and testing these algorithms, some edge cases occurred that had to be considered. That is, for example, the case for multi-line company and person names, which I have described above. Additional challenges arise when Transkribus generates two separate line elements in situations where a human reader would not. This typically occurs when there are larger horizontal gaps within a printed line, which do not necessarily indicate a transition to a different content category.

Figure 6: One line is split in two, see the small line index in front of the word “Bergwerksdirektor”10

Only one of the two lines can be accurately classified based on its starting position. The content of the other line must be merged with that of its neighbor, after which the original line is disregarded in subsequent processing. Neighborhood is determined by the closeness of one line’s minimum x value to the other’s maximum x value while both lie on a similar average y value, i.e., when they sit side by side in what a human would perceive as a ‘normal’ line. While this sounds straightforward so far, the unintuitive case of the first line in the determined reading order being to the right of its neighbor needs to be considered as well. This occurs when the rightmost line sits slightly higher than its neighbor. This behavior becomes an even bigger problem when it leads to addresses being listed right before the person they belong to, requiring special cases for such behavior in the above algorithm.
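The neighborhood test itself can be sketched as follows; the tolerance values are placeholders, and the function deliberately compares positions on the page rather than the reading order, which, as described above, cannot be relied upon:

    # Placeholder tolerances; the actual values were tuned against the source.
    X_GAP_MAX = 40   # maximum horizontal gap in pixels
    Y_DIFF_MAX = 15  # maximum difference between average y values

    def avg_y(line):
        return (line["min_y"] + line["max_y"]) / 2

    def are_neighbors(left, right):
        """True if two line fragments sit side by side on one printed line.

        'left' and 'right' refer to position on the page, not to the
        reading order, which may list the rightmost fragment first."""
        horizontal_gap = right["min_x"] - left["max_x"]
        return (0 <= horizontal_gap <= X_GAP_MAX
                and abs(avg_y(left) - avg_y(right)) <= Y_DIFF_MAX)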

Figure 7: Constants, including keywords that are used in the classification and grouping processes

Towards a Network

At the end of this process, the data points necessary for the network analysis need to be considered. The purpose of this specific network analysis was to identify relationships from person to person based on their mutual relationship to a company. This does, however, require that the company is recorded with the same name for both persons. At this point, errors caused by noisy OCR become visible. Without error correction, even a small mistake such as reading “in” as “m” could lead to associations not being made, which in turn makes network analysis metrics imprecise. To address this, I checked each company name that was read against a list of already recorded company names. If the string of a company’s name is close enough to another for it to be considered the same company with an OCR error in the transcript, the two names are automatically consolidated. Closeness in this case is measured with the Levenshtein distance, i.e., the number of characters that need to be modified for string a to become string b.11
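The consolidation step can be sketched with a plain dynamic programming implementation of the Levenshtein distance (dedicated libraries exist as well); the maximum distance threshold shown here is an illustrative value:

    def levenshtein(a, b):
        """Number of single-character edits turning string a into string b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    def consolidate(name, known_names, max_distance=2):
        """Return an existing spelling if close enough, else record the name."""
        for known in known_names:
            if levenshtein(name, known) <= max_distance:
                return known
        known_names.append(name)
        return name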

Once this error correction was completed, I exported the data. The use of simple structures to save it allowed for quick traversal. From the data, I created a NetworkX network in Python that can be exported directly to the GEXF format used in other network analysis tools such as Gephi. Furthermore, the data can also be exported to comma-separated value (CSV) files (one each for persons, companies, and their relations) to facilitate reusing the dataset with other tools.
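Sketched with NetworkX, the export could look like this; the bipartite person-company modeling and the file name are my own illustration rather than the project’s exact structure. To obtain the person-to-person edges described above, such a graph could subsequently be projected onto its person nodes (NetworkX provides this in its bipartite module).

    import networkx as nx

    def export_network(persons, path="network.gexf"):
        """Build a person-company graph from the grouped records and save it."""
        G = nx.Graph()
        for person in persons:
            G.add_node(person["name"], type="person")
            for entry in person["positions"]:
                G.add_node(entry["company"], type="company")
                # the edge carries the board position as an attribute
                G.add_edge(person["name"], entry["company"],
                           position=entry["position"] or "")
        nx.write_gexf(G, path)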

Observations on the Implementation Process and Further Work Needed

While the initial design and implementation of the algorithms for line classification and for recreating the hierarchy of the print source seem straightforward, troubleshooting, adjusting the constants, and accounting for edge cases unsurprisingly proved to be very labor intensive. This was partially due to the requirements of this specific project, which demanded a consistently low error rate. I have successfully tested the script with a multi-page source sample. I was unable to process the entire address book straight away, as some of the pages contain ads, which need to be filtered out first.

To further improve the program, the processes of line merging (for lines at the same and at different heights), restructuring the data into hierarchical form, and cleaning and separating data within the now differentiated fields (such as a company name from the city of the company on the same line, or a person’s occupation from their first and last name) will need to be refined. Testing the process with a larger sample can also be expected to surface additional edge cases.

Nevertheless, the working script proved to be much more efficient than manual transcription would have been. The potential reuse of the working script for address books from other years that follow the same stylistic guidelines makes it even more attractive. As a proof of concept, the process demonstrates the potential advantages of using digital methods to curate datasets for historical analyses of networks. Processing the full address book based on the workflow that I have outlined would be worthwhile, and promises to yield interesting insights into the shifting landscape of board memberships in 20th century Germany.

1 See the research overview of “Family and Enterprise in the Age of Industry” (https://www.ghi-dc.org/research/german-european-history/family-and-enterprise-in-the-age-of-industry), last accessed December 13, 2023, for more.

2 Mossner, Julius (ed.): Adreßbuch der Direktoren und Aufsichtsräte, Band I, Jahrgang 1932, Berlin, 1932. https://digital.deutsches-museum.de/de/digital-catalogue/library-object/904368/, last accessed December 13, 2023.

3 At this point, credit should be given to Vanessa Tissen, with whom I cooperated on the wider effort of using network analysis methods to gain insights into the Arnholds’ network and with whom I developed the objectives of the script in the project context.

4 Mossner: Adreßbuch, p. 10.

5 Nevertheless, OCR errors still occurred, such as the semicolon in the example given in the following figures being recognized as a comma due to the light print on its dot.

6 The underlying assumption for this process is that no two types of content intermingle on a single line. While, in case of a short name and a short address, these are printed at the same height, constituting a line in a human readable sense, the empty space between the two elements prompts Transkribus to create two distinct line elements.

7 The shown excerpt is from Mossner: Adreßbuch, p. 10.

8 The shown excerpt is from Mossner: Adreßbuch, p. 10.

9 See, for example, Mossner: Adreßbuch, p. 6, under the entries of the Cologne Mayor Conrad Adenauer.

10 The shown excerpt is from Mossner: Adreßbuch, p. 6.

11 As an example, the strings “abcd” and “abba” have a Levenshtein distance of two.