Recording archaeological data from space
This is our second blog post about the SQuARE experiment. You can read the first one here.
This post is co-authored by Shawn Graham, Professor of History and Digital Humanities at Carleton University. You can learn more about his work at his Electric Archaeology website. Here, we discuss some of the problems with capturing archaeological data from a space context.
We start from our raw data, the photographs taken by ISS astronauts of our sample locations, like this one in the US Node 2 module.

Another collaborator, Aidan Walsh (the father of one of the PIs!), takes each photograph and corrects it for lighting, using the color calibration card (at upper right in this image). He also corrects for the barrel distortion introduced by the camera lens. These corrections ensure that we see each sample location with its horizontal and vertical lines rendered true.
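For the technically curious, here is a minimal sketch of what these two corrections involve, in Python with OpenCV. This is not Aidan's actual workflow: the camera intrinsics, distortion coefficients, filenames, and calibration-card patch coordinates are all illustrative placeholders, not real calibration values.

```python
# Illustrative sketch of lens-distortion and color correction with OpenCV.
# All numbers below are placeholders; real values would come from
# calibrating the actual camera and lens used on the ISS.
import cv2
import numpy as np

img = cv2.imread("square_photo.jpg")  # hypothetical filename
h, w = img.shape[:2]

# Placeholder intrinsics: focal length ~ image width, principal point at center.
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)

# A negative k1 corrects barrel distortion; these coefficients are invented.
dist_coeffs = np.array([-0.25, 0.05, 0, 0, 0], dtype=np.float64)
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)

# Simple white balance: sample a neutral patch on the color calibration card
# (patch coordinates are hypothetical) and scale each channel to neutral.
patch = undistorted[50:80, 900:930].reshape(-1, 3).mean(axis=0)
gains = patch.mean() / patch
balanced = np.clip(undistorted * gains, 0, 255).astype(np.uint8)
cv2.imwrite("square_photo_corrected.jpg", balanced)
```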

Then we crop the photo so that we are only looking at the sample square itself. This is the material that we are collecting data on.
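The crop itself is the simplest step. Continuing the same illustrative pipeline (the corner coordinates here are made up):

```python
# Crop down to just the taped-off sample square.
# Coordinates are hypothetical and would differ for each sample location.
import cv2

img = cv2.imread("square_photo_corrected.jpg")
x0, y0, x1, y1 = 310, 220, 1450, 1060  # corners of the sample square, in pixels
square = img[y0:y1, x0:x1]
cv2.imwrite("square_only.jpg", square)
```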

But how do we actually capture that data? Here’s where Shawn, and his student Chantal Brousseau, come in.
In Shawn’s words:
Recently, Justin Walsh got in touch with me to say, ‘we’re doing some archaeology in space. We need some help to think about recording the information. I’ve got some ideas how to do this, but… what do you think?’
One of the pleasures of academic work is being asked to suddenly think sideways about a problem you’ve never really considered. I started doodling ideas, free-writing some thoughts, and then I got down to business.
On Earth, every human action leaves traces. Some things happened before or after other things. These traces – and their relative associations – are called ‘contexts’ by archaeologists; every new event in the history of a site leaves new contexts which build up over time. Archaeologists later peel the contexts backwards, removing them from most recent to earliest. They look for patterns and interrelationships through and between those contexts, which enable a vision, a story, of the site to emerge.
So far, so good.
But there’s archaeology off-world now. How do you excavate a contemporary archaeological site like a space station, when NASA won’t let archaeologists be astronauts?
You get the astronauts to photograph a controlled “test pit” (area of the station) systematically over time. Here, the archaeological action is reversed: each photo is evidence of a context at time 1…time 2…time 3…time 4. So we’re “excavating” – recording, creating data, spotting relationships, making contexts – as the site actually forms.
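(For the technically inclined, one way to picture this reversed stratigraphy is as data: each photograph becomes one recorded context, stamped with the moment it captures. The sketch below is illustrative only, not the project’s actual schema, and the values are invented.)

```python
# Illustrative data model: each photo is one 'context' in the sequence.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Context:
    photo_id: str                      # identifier of the astronaut's photo
    location: str                      # e.g. "US Node 2"
    taken: date                        # the moment this context records
    items: list[str] = field(default_factory=list)  # objects spotted in it

sequence = [
    Context("photo_t1", "US Node 2", date(2022, 1, 14), ["pen", "bag"]),
    Context("photo_t2", "US Node 2", date(2022, 1, 15), ["pen"]),
]
```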
The webapp we eventually built enables the archaeologist to identify objects in the test square and record their interrelationships. That web of interrelationships is a kind of network, or ‘graph’; each point and each relationship in that graph carries all of the archaeological information we recorded. This lets us use the structure of those relationships to find larger patterns in the material.
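As a rough illustration of the graph idea, here is a toy example using the Python networkx library; the objects and relationship labels are invented, not the project’s recording vocabulary:

```python
# Toy graph: nodes are objects observed in a photo, edges carry the
# archaeological relationships we spotted. All names here are invented.
import networkx as nx

G = nx.MultiDiGraph()
G.add_node("resealable_bag", context="photo_t1", type="container")
G.add_node("pen", context="photo_t1", type="writing_implement")
G.add_edge("pen", "resealable_bag", relation="inside")
G.add_edge("resealable_bag", "wall_velcro", relation="attached_to")

# Every node and edge keeps its attributes with it, so patterns can be
# queried directly from the structure.
print(G.edges(data=True))
```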
But arriving at a working method for all of this was a challenge.
To do this archaeology-in-space, we needed some basic data-recording forms, and we needed to be able to capture the location of objects depicted in the images, at least relative to the other items in the image. The forms would then capture data about how these objects interrelated with each other: the photo captured one moment in time, one event, one context. My goal was to make this as painless as possible. I created what amounted to a Rube Goldberg machine of various bits and pieces of software, cobbled together in a way that worked, but only in the sense that Wile E. Coyote’s schemes often ‘worked’: on paper.
At this point, Chantal joined the project and explored what I had rigged together. From her own research on automatically extracting images from early modern print sources, she was familiar with other image annotators, and suggested that we modify the open-source VIA tool from the Visual Geometry Group at Oxford. Chantal quickly adapted the tool to the specific archaeological information we were after, and improved the code to optimize it for ISSAP’s use, putting it all into an elegant framework that could be installed on the same secure server hosting the images from the ISS, where the ISSAP team could access it.
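Stock VIA exports its annotations as JSON, which is what makes this kind of downstream processing possible. A sketch of reading rectangular regions from a standard VIA 2 export (the ISSAP-modified schema will differ in its custom fields, and the filename here is hypothetical):

```python
# Read rectangular regions and their attributes from a stock VIA 2 JSON
# export. The ISSAP-modified tool's schema may differ from this.
import json

with open("via_export.json") as f:       # hypothetical filename
    annotations = json.load(f)

for record in annotations.values():      # one record per annotated photo
    for region in record.get("regions", []):
        shape = region["shape_attributes"]   # e.g. {"name": "rect", "x": ..., ...}
        attrs = region["region_attributes"]  # the custom metadata fields
        print(record["filename"], shape.get("name"), attrs)
```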
While Chantal worked on that, I developed a way to turn the output from the webapp into the graph. A ‘graph’, remember, is just a way of describing a network of things connected to other things by those archaeological relationships we spotted. This graph should let us examine change over time in how the astronauts use certain segments of the space station. It should enable us to identify assemblages of material culture at particular times. What artifacts persist over time in a given space/image? What artifacts are associated with each other over time? Do we see different use zones emerge over time? Do artifacts travel from one zone to another?
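Even before the full graph machinery, a toy example shows the kind of question this output makes answerable, for instance artifact persistence over time (the observations below are invented):

```python
# Which artifacts persist across every photo of a given square?
# The (date, object) observations here are invented for illustration.
from collections import defaultdict

observations = [
    ("2022-01-14", "pen"), ("2022-01-14", "bag"),
    ("2022-01-15", "pen"), ("2022-01-15", "headphones"),
]

by_date = defaultdict(set)
for day, obj in observations:
    by_date[day].add(obj)

persistent = set.intersection(*by_date.values())
print("persist over time:", persistent)   # -> {'pen'}
```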

The webapp we built is open source and can be modified fairly quickly to have different metadata fields; maybe other kinds of archaeologists will find it useful and it’ll be the start of something new. This experiment in doing archaeology in space will have its dead ends, its failures, and it might be that we’ll have to go back to the beginning; at least we’ll have the photographs if something goes terribly wrong. But I don’t think that’ll happen. I don’t know of any field archaeology project that uses a graph database from day one, though there are a few that are using graph databases to wrangle legacy data. Thinking about what it means to do archaeology in space has implications for how we might do archaeology better here on Earth. What is an archaeological assemblage, when you’re on a space station without gravity? After all, it’s gravity that holds assemblages together on Earth. Without gravity, relationships fly apart. Everything is relative, I suppose; the webapp lets us pull out those relative relationships over time.
Chantal’s webapp is now hosted on a server run by our other collaborators, Erik Linstead and his Machine Learning and Affiliated Technology Lab at Chapman University (clearly, this project could never have happened without great people willing to share their skills and time!). We’ve started using the webapp to annotate the photos from SQuARE. Here you can see the interface that Chantal developed for us. It allows us to record information about individual items, their types, and their locations, as well as about the context represented by the photograph.
When we are done making annotations, this is what the photo looks like, with bounding boxes drawn around all of the different objects – each of which now has an entry in our database. So this is how we get from an astronaut’s photo to archaeological data!
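To give a feel for the end product, here is roughly what one annotated object might look like as a record; the field names and values are illustrative, not our actual database schema:

```python
# An illustrative record for one annotated object. Field names and values
# are hypothetical, not the project's actual database schema.
entry = {
    "photo": "node2_square_t1.jpg",
    "object_type": "resealable bag",
    "bbox": {"x": 412, "y": 287, "width": 160, "height": 90},  # pixels
    "context": "US Node 2 sample square, observation 1",
    "relations": [{"type": "attached_to", "target": "module wall"}],
}
```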
