How AI Is Transforming Historical Research and Understanding

On an evening in Venice in 1531, a printer’s apprentice works late in the printing shop. The page he is completing will be part of an astronomy textbook; it features a woodcut of an angelic face studying celestial objects during a lunar eclipse, surrounded by dense lines of type.

In the 16th century, book production was slow and laborious, yet it enabled knowledge to spread at an unprecedented rate.

Five centuries later, the production of information looks vastly different: images, video, and text flow in torrents of digital data and must be examined almost as quickly as they are produced. Machine-learning models have been developed to sort through it all, and this transformation in information production is likely to shape fields ranging from art creation to drug development and beyond.

Historians now use machine learning and deep neural networks to reexamine old material, including ancient astronomical figures produced in Venice and other early modern cities, documents that have been degraded by the passage of time or distorted by printing errors.

Modern computer science has enabled historians to look at the distant past in a new way, surfacing connections in the historical record that would not otherwise have been evident and helping counteract the distortions that come from examining history one document at a time.

But using machine learning to study the past carries its own risk: bias or outright false information could be added to the historical record. That raises a question for historians and other scholars who typically use history to understand the present: how much control should they cede to machines in the analysis of history?

PARSING COMPLEXITY

The digitization of numerous historical documents, such as the Library of Congress’s collection of millions of newspaper pages and the Finnish Archives’ 19th-century court records, has brought big data to the humanities. This presents both a challenge and an opportunity: researchers now have access to more information than ever before, but they have lacked suitable ways to analyze it.

In 2009, Johannes Preiser-Kapeller, a professor at the Austrian Academy of Sciences, faced just such a challenge: parsing the complexity of a registry of decisions from the 14th-century Byzantine Church. Fortunately for him, computational tools built to assist scholars have since made tasks like this easier.

Preiser-Kapeller knew that making sense of hundreds of documents required a digital survey of the bishops’ relationships, so he created a database of the people involved and applied network analysis software to map out their links.

Preiser-Kapeller’s reconstruction of the data unveiled hidden patterns of influence, showing that the bishops who spoke the most in gatherings were not necessarily the ones with the greatest authority. He has since used this method to analyze other networks, such as the 14th-century Byzantine elite, revealing how women quietly helped to maintain its social infrastructure.
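As a rough illustration of this kind of network analysis, the sketch below builds a toy graph with the networkx library. All names, ties, and speech counts are invented, and betweenness centrality is just one plausible measure of structural influence, not necessarily the one used in the study.

```python
# Toy illustration of the network-analysis approach described above.
# All names, ties, and speech counts are invented for illustration.
import networkx as nx

# Each edge links two bishops who appear together in a synodal decision.
co_occurrences = [
    ("Bishop A", "Bishop B"),
    ("Bishop A", "Bishop C"),
    ("Bishop B", "Bishop C"),
    ("Bishop C", "Bishop D"),
    ("Bishop D", "Bishop E"),
]
G = nx.Graph(co_occurrences)

# How often each bishop is recorded speaking in assemblies (invented).
speech_counts = {"Bishop A": 2, "Bishop B": 1, "Bishop C": 5,
                 "Bishop D": 1, "Bishop E": 9}

# Structural influence: who sits on the paths between the others.
centrality = nx.betweenness_centrality(G)

for bishop in G.nodes:
    print(f"{bishop}: spoke {speech_counts[bishop]} times, "
          f"betweenness {centrality[bishop]:.2f}")
# A bishop like E may speak often yet sit at the edge of the network,
# while a quieter C can bridge groups: the sort of gap between talk
# and influence the study uncovered.
```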

Preiser-Kapeller says:

“We were able to identify, to a certain extent, what was going on outside the official narrative.”

Preiser-Kapeller’s research is one example of a broader scholarly trend. Until recently, however, machine learning struggled to make deductions from enormous amounts of text, because certain features of historical documents (in Preiser-Kapeller’s case, poorly written Greek) rendered them incomprehensible to machines.

Deep learning has made progress in overcoming these limitations, using neural networks loosely modeled on the human brain to identify patterns within extensive and complex datasets.

Johannes de Sacrobosco’s Tractatus de sphaera, a 13th-century treatise on geocentric cosmology, was one of the most widely used textbooks on the subject: students at early modern universities read it routinely, and it remained popular even after the Copernican revolution of the 16th century.

The treatise is a standout in a digitized library of 359 astronomy texts published between 1472 and 1650, a collection of some 76,000 pages rich in scientific drawings and astronomical tables.

Matteo Valleriani, a professor at the Max Planck Institute for the History of Science, saw in that extensive dataset a chance to trace how European knowledge developed toward a shared scientific worldview. Recognizing that detecting the pattern was beyond human capacity, he and his team at BIFOLD (Berlin Institute for the Foundations of Learning and Data) turned to machine learning.

The collection was divided into three groupings: text parts (passages of writing on a certain topic, with a clear beginning and end); scientific illustrations, which helped demonstrate ideas such as a lunar eclipse; and numerical tables, which were used to teach the mathematical aspects of astronomy.


At the outset, Valleriani explains, the algorithms struggled to interpret the text. Typefaces varied greatly between books; early modern print shops designed their own exclusive typefaces and had in-house metalworking studios cut the letters. That meant any natural-language-processing (NLP) model used to read the text had to be adjusted for each book.

The language itself was another major obstacle. Numerous documents were composed in regional Latin dialects that machines, untrained on ancient languages, have difficulty understanding.

Valleriani says:

“This is a big limitation in general for natural-language processing, when you don’t have the vocabulary to train in the background.”

NLP can be very effective in widely used languages such as English, but its effectiveness diminishes for historical languages such as ancient Hebrew.

Researchers manually extracted the text from the source materials and identified individual links between documents, for example, cases where one text was copied or translated into another. These single links were then incorporated into a graph connecting all the records in a network, and that graph was used to train a machine-learning method to suggest further connections between texts. For the visual elements, some 20,000 illustrations and 10,000 tables, the researchers turned to neural networks.
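The sketch below illustrates the general idea of suggesting new links from known ones. The titles and links are invented, and this simple neighborhood-overlap heuristic from networkx stands in for the trained model the project actually used.

```python
# Sketch of suggesting document links from hand-labelled ones. Titles
# and links are invented; a neighborhood-overlap heuristic stands in
# for the project's trained machine-learning model.
import networkx as nx

# Hand-extracted links: an edge means one text reuses another
# (copied, translated, or otherwise derived).
known_links = [
    ("Sacrobosco, Venice 1472", "Commentary, Leipzig 1488"),
    ("Sacrobosco, Venice 1472", "Reprint, Venice 1531"),
    ("Commentary, Leipzig 1488", "Edition, Paris 1549"),
    ("Reprint, Venice 1531", "Edition, Paris 1549"),
]
G = nx.Graph(known_links)

# Score unconnected pairs by how many known relatives they share:
# texts with overlapping neighborhoods are likely related themselves.
for u, v, score in sorted(nx.jaccard_coefficient(G), key=lambda t: -t[2]):
    if score > 0:
        print(f"suggested link: {u} <-> {v} (score {score:.2f})")
```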

PRESENT TENSE

Lauren Tilton, an associate professor of digital humanities at the University of Richmond, describes computer vision for historical images as having a “present-ist” bias. Because current AI models are trained on datasets from roughly the last 15 years, they tend to recognize the features of modern life, such as cell phones and cars, rather than their ancestors, such as switchboards and Model Ts. These models also work much better with high-resolution color images than with the black-and-white photographs common in historical archives or early depictions of the cosmos, which can be inconsistent and degraded by age. Together, these issues make it difficult for computer vision to interpret historical images accurately.

Tilton says:

“We’ll talk to computer science folks, and they’ll say, ‘Well, we solved object detection,’”

“And we’ll say, actually, if you take a set of photos from the 1930s, you’re going to see it hasn’t quite been as solved as we think.”
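To make that mismatch concrete, here is a minimal sketch that runs a modern, COCO-trained detector from torchvision on a scanned black-and-white photograph. The file name is a placeholder; the point is only that the model’s label set and training data are contemporary, so period objects tend to be missed or mislabeled.

```python
# Minimal sketch of the "present-ist" mismatch: a modern, COCO-trained
# detector applied to a scanned black-and-white photograph. The file
# name is a placeholder; the label set includes "cell phone" and "car"
# but nothing like a switchboard.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

img = read_image("photo_1930s.jpg")   # grayscale archival scan
if img.shape[0] == 1:                 # the detector expects RGB input
    img = img.repeat(3, 1, 1)

with torch.no_grad():
    out = model([weights.transforms()(img)])[0]

labels = [weights.meta["categories"][i] for i in out["labels"]]
scores = [round(float(s), 2) for s in out["scores"]]
print(list(zip(labels, scores)))
# Period objects typically come back mislabeled or with low confidence,
# which is the bias Tilton describes.
```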

Deep-learning models have one great advantage here: their capacity for abstraction lets them detect patterns across vast amounts of data.

In the Sphaera project, BIFOLD researchers used a neural network to detect, classify, and cluster (based on similarity) the illustrations in early modern texts. The model is available to other historians through CorDeep, a public web service.
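As an illustration of similarity-based image grouping (not CorDeep’s actual model), the following sketch extracts features with a pretrained ResNet and clusters them so that visually similar illustrations land together. The file names are placeholders.

```python
# Illustrative sketch of similarity-based image grouping, not
# CorDeep's actual model. File names are placeholders; features from
# a pretrained ResNet are clustered so similar woodcuts group together.
import torch
from torchvision.io import read_image, ImageReadMode
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.cluster import KMeans

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.fc = torch.nn.Identity()   # keep 2048-d features, drop classifier
model.eval()
preprocess = weights.transforms()

paths = ["eclipse_diagram.png", "sphere_diagram.png", "epicycles.png"]
with torch.no_grad():
    feats = torch.stack(
        [model(preprocess(read_image(p, ImageReadMode.RGB)).unsqueeze(0))[0]
         for p in paths])

# Group visually similar illustrations together.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(feats.numpy())
print(dict(zip(paths, clusters.tolist())))
```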

The team also took an innovative approach to the numerical tables. As Valleriani explains, the many tables scattered across the collection’s books could not be compared simply by sight, since “the same table can be printed 1,000 different ways.” The researchers therefore created a neural-network architecture to detect similar tables based on their numerical values while disregarding their layout.
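A toy version of that idea: reduce each table to the bag of numbers it contains and compare bags, so that two printings with different layouts but the same values match. The real project trained a neural network; this heuristic only conveys the intuition.

```python
# Toy version of layout-independent table comparison: reduce each
# table to the bag (multiset) of numbers it contains and compare bags,
# so two printings with different layouts but the same values match.
from collections import Counter
import math

def number_signature(table):
    """Flatten a table into a bag of its numeric entries."""
    return Counter(x for row in table for x in row)

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    norm = lambda c: math.sqrt(sum(v * v for v in c.values()))
    return dot / (norm(a) * norm(b))

# The "same" ephemeris printed two different ways: columns reordered.
table_venice = [[12, 30, 45], [13, 2, 50]]
table_paris  = [[45, 12, 30], [50, 13, 2]]

print(cosine(number_signature(table_venice),
             number_signature(table_paris)))   # 1.0: identical content
```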

The project has already uncovered surprising findings. It appears that although the Protestant Reformation divided Europe along religious lines, scientific knowledge continued to be shared across them: texts printed in Wittenberg, a Protestant city whose scholarship flourished under the Reformers, were replicated in cities such as Paris and Venice before being distributed across Europe.

Valleriani notes that although the Protestant Reformation has been studied extensively, machine-mediated analysis gave the team a genuinely new view of it.

Valleriani went on to say:

“This was absolutely not clear before.”

Meanwhile, models applied to the tables and images have started to return similar patterns.


Valleriani explains that these tools offer possibilities that go well beyond the tracking of 10,000 tables: from patterns in small clusters of records, researchers can infer facts about the development of knowledge without having to examine an enormous number of documents.

Valleriani continues:

“By looking at two tables, I can already make a huge conclusion about 200 years.”

AI is transforming history, providing historians with unprecedented capabilities for understanding our past. From digitizing and preserving historical documents to analyzing large datasets and automating research tasks, it offers historians new insights and perspectives. Yet the human element in historical research remains indispensable: historians continue to play the crucial role of interpreting and understanding our shared human history. As AI continues to advance, the field of historical research can expect even more exciting possibilities and discoveries.

Source: MIT Technology Review
