In 2008, Stanford Libraries received funding from the Stanford University Provost to build a world-class digital library infrastructure. This work, known as the Digital Library Build-out (DLB), was a continuation of pioneering research on archival digital libraries performed by Stanford and its partners in the Archive Ingest and Handling Test (AIHT) [2] (2003–2005) and in the National Geospatial Digital Archive (NGDA) [3] project (2005–2008), both funded by the Library of Congress’ National Digital Information Infrastructure and Preservation Program (NDIIPP). A core component of the DLB was the redesign of the Stanford Digital Repository (SDR) in 2009 [4]. The first iterations of the SDR stored preserved objects using the Library of Congress BagIt format, but after several years of experience it became clear that, while BagIt was appropriate as a transfer mechanism between repositories, it lacked several key features for managing the lifecycle of a digital object within a repository. This led to the development of Moab, an archive information package (AIP) [5] format designed to address the specific needs of a modern digital repository.
An Introduction to Moab
Moab, designed by Richard Anderson et al. in Stanford Libraries’ Digital Library Software & Services team [6], is a versioned, forward-delta AIP format that allows for the efficient storage, management, and preservation of any digital content. It has been in production use by the SDR since 2012.
The selection of a forward-delta versioning system has been especially important as the SDR ingests more audio-visual content. Without forward-delta versioning, metadata updates to existing objects (a relatively common occurrence) would consume an outsized amount of storage relative to the amount of data actually changed on disk. While there are filesystem-level approaches to content deduplication that are more efficient than the file-level deduplication offered by Moab (e.g., WAFL from NetApp [7] or ZFS from Oracle [8]), these almost always require vendor-specific storage solutions. Further, when an object is transferred between systems over the network, the entire fully-hydrated object must be transferred, and the receiving system may not support deduplication, leading to excessive network and storage consumption.
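To make the forward-delta model concrete, the following is a minimal sketch of writing a new version so that a metadata-only update stores just the changed file and never re-copies a large audio-visual payload. The function name, directory layout, and digest-keyed catalog here are our own illustrations, not Moab’s actual on-disk API.

```python
import hashlib
import shutil
from pathlib import Path

def add_version(object_root: Path, version: str,
                new_files: dict[str, Path], catalog: dict[str, str]) -> None:
    """Write one new version, storing only files whose content is not
    already held by an earlier version (file-level deduplication).

    catalog maps a content digest to the relative path of the first stored
    copy; unchanged files are recorded by reference rather than re-copied.
    """
    for logical_path, source in new_files.items():
        digest = hashlib.sha256(source.read_bytes()).hexdigest()
        if digest in catalog:
            continue  # content already stored in a prior version
        target = object_root / version / "content" / logical_path
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, target)
        catalog[digest] = str(target.relative_to(object_root))
```

A version-2 call that changes only a small metadata file would copy only that file into v0002/content, leaving every unchanged file referenced from the earlier version.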
A core design goal of Moab is to retain the human readability of preserved content and preservation metadata. This is achieved by retaining the original file names of deposits and writing preservation metadata into human-parsable XML files. One lesson from which the OCFL benefits is the realization that many small XML files scattered across many version directories, while readable individually, are less useful for constructing a narrative of the object’s history. The OCFL’s decision to consolidate all changes into a single JSON file both removes the somewhat-contradictory XML schemas of the various Moab manifests and provides a single file of record with an internally-consistent schema.
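For illustration, here is a minimal sketch of such a consolidated file of record, modeled on the OCFL’s inventory.json; the field names follow the OCFL specification, while the identifier, digests, and file names are invented for the example. The manifest maps content digests to stored paths, and each version’s state lists the logical filenames at that version.

```python
# Minimal OCFL-style inventory (illustrative values; digests truncated).
inventory = {
    "id": "druid:bb123cd4567",
    "type": "https://ocfl.io/1.0/spec/#inventory",
    "digestAlgorithm": "sha512",
    "head": "v2",
    "manifest": {
        "9f86d0...": ["v1/content/page-1.tif"],
        "60303a...": ["v2/content/descMetadata.xml"],
    },
    "versions": {
        "v1": {
            "created": "2012-05-01T00:00:00Z",
            "state": {"9f86d0...": ["page-1.tif"]},
        },
        "v2": {
            "created": "2019-03-14T00:00:00Z",
            "message": "metadata update",
            "state": {
                "9f86d0...": ["page-1.tif"],        # unchanged, not re-stored
                "60303a...": ["descMetadata.xml"],  # the only new content
            },
        },
    },
}
```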
Moab was designed with the presumption that files would reside on a POSIX-compatible filesystem. This seemed like a reasonable assumption to make given the remarkable longevity of the POSIX specification. However, with the rise of object-addressable storage, especially with increased adoption of cloud-based storage, such as Amazon S3 and OpenStack Swift, it is no longer reasonable to design a preservation system with the presumption that it will always have POSIX file semantics available for discovery and indexing.
Stanford tracks preserved data in Moab objects by assigning to each a unique identifier called a digital resource unique identifier (DRUID). This is a string composed of a subset of alphanumeric characters that conforms to a specific pattern, giving a namespace of 1.6 trillion possible digital objects. To enable efficient storage of these objects on a POSIX-based filesystem, Stanford also utilizes Druidtree, a modified version of Pairtree [9], to create a predictable directory hierarchy that efficiently disperses DRUIDs across a filesystem.
Given a DRUID (e.g., bb123cd4567), it is possible to construct a Druidtree path to the expected location of the version 1 manifestInventory.xml file of the associated Moab object (e.g., bb/123/cd/4567/bb123cd4567/v0001/manifests/manifestInventory.xml) and, if found, construct a path to the signatureCatalog.xml file. By parsing that file, paths to all other files in that version and all prior versions of preserved content (but not of preservation metadata) may be constructed. However, it provides no knowledge of higher versions of that object. On a POSIX-based filesystem, these other versions can be discovered via a relatively inexpensive 'ls' or 'dir' action inside the parent directory of the Moab object, followed by some logic to enumerate and sort the results for directories that match the expected syntax of valid version directories.
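A sketch of both operations follows, assuming a DRUID shape of two letters, three digits, two letters, and four digits drawn from a restricted alphabet (20 letters and 10 digits yield the roughly 1.6 trillion identifier space cited above). The regular expressions and function names are our own illustrations, not Stanford’s published API.

```python
import re
from pathlib import Path

# Assumed DRUID alphabet: consonants excluding "l", plus digits.
DRUID_RE = re.compile(
    r"^([b-df-hjkmnp-tv-z]{2})(\d{3})([b-df-hjkmnp-tv-z]{2})(\d{4})$")
VERSION_RE = re.compile(r"^v\d{4}$")  # e.g., v0001

def druidtree_path(druid: str) -> Path:
    """bb123cd4567 -> bb/123/cd/4567/bb123cd4567"""
    m = DRUID_RE.match(druid)
    if not m:
        raise ValueError(f"not a valid DRUID: {druid}")
    return Path(*m.groups(), druid)

def discover_versions(storage_root: Path, druid: str) -> list[str]:
    """Enumerate version directories with a single directory listing."""
    object_root = storage_root / druidtree_path(druid)
    return sorted(p.name for p in object_root.iterdir()
                  if p.is_dir() and VERSION_RE.match(p.name))
```

With these helpers, druidtree_path("bb123cd4567") / "v0001" / "manifests" / "manifestInventory.xml" reproduces the example path above.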
While S3 and other object stores can emulate the enumeration of files in a directory by treating certain characters in an object name as directory markers [10], it is important to note that such listings are actually filtered sorts of all objects in a particular bucket, and thus suffer potential performance issues as the total number of objects in the namespace increases. Enumerating all files of a Moab object using traditional POSIX semantics when the object resides on object-addressable storage is therefore relatively inefficient and, given the pricing models employed by commercial cloud object storage, incurs unnecessary additional costs.
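For example, the directory-style listing below (a sketch using boto3; the bucket name and key layout are invented for the example) is really a server-side filtered sort over the bucket’s keys, returned 1,000 at a time, with each page a separate, billable LIST request:

```python
import boto3

s3 = boto3.client("s3")

# Emulate "ls bb/123/cd/4567/bb123cd4567/" by asking S3 to group keys on "/".
paginator = s3.get_paginator("list_objects_v2")
pages = paginator.paginate(
    Bucket="example-preservation-bucket",   # invented name
    Prefix="bb/123/cd/4567/bb123cd4567/",
    Delimiter="/",
)
version_dirs = [cp["Prefix"] for page in pages
                for cp in page.get("CommonPrefixes", [])]
```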
The OCFL neatly addresses this issue by placing the most current version of the object’s inventory at the root level of the object, creating an essentially immutable object path that not only provides an index of all files in the object, but also enumerates all versions of that object. With Stanford’s repository approaching two million objects and 10 million object-versions, this design offers significant speed and cost savings when conducting audit operations (see Table 1). Further, by placing the most recent inventory file in the object root, the OCFL preserves Moab’s concept of immutable version directories. This feature is designed to facilitate the storage of objects on write-once, read-many media, such as tape, without compromising the ability to version those objects in the future. In the unlikely event that the inventory file in the object root does not reflect the most current version on that object store, audit tools may fall back to an iterative approach of sequentially enumerating higher version directories until no more are found.
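In practice, this means an audit tool can learn every version of an object with a single GET of a predictable key, rather than a listing. A sketch under the same invented bucket and key layout as above:

```python
import json
import boto3

s3 = boto3.client("s3")

# One GET against a predictable key replaces the directory listing entirely.
obj = s3.get_object(
    Bucket="example-preservation-bucket",               # invented name
    Key="bb/123/cd/4567/bb123cd4567/inventory.json",
)
inventory = json.load(obj["Body"])

print(inventory["head"])  # the current version, e.g., "v3"
print(sorted(inventory["versions"],
             key=lambda v: int(v.lstrip("v"))))  # every version, no listing
```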
Another issue that only becomes apparent at scale is the cost and latency incurred by performing multiple small-block reads to construct the history of a Moab object at its current version. For a given version (V), four manifest files must be read to fully reconstruct the object, so fully parsing an object as of that version requires 4V reads. These reads are almost always small, as most manifest files are <4 KB in size, and the total number of reads required to parse an object scales linearly with the number of versions.

At a small scale this cost is negligible. But as the number of objects in the repository grows, and the number of versions of those objects also grows, this incurs a significant overhead, which in turn decreases the ability of audit tools to keep an accurate current inventory of objects. In Moab, an object at version 5 requires 20 small-block reads to fully reconstruct, and if the number of versions is not known, a list/dir operation must first be performed on the object root to discover all potential version directories. In the OCFL, only one read is necessary, albeit of a file larger than 4 KB. Across one million objects with an average of three versions each, this reduces the cost of object discovery and construction from 13 million I/O operations (4 × 3 manifest reads plus one directory listing per object) to one million. Additionally, for objects stored in commercial cloud systems, this represents a significant reduction in the number of request operations that must be performed, and hence a reduction in the charges incurred by audit actions (see Table 2).
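The same arithmetic as a runnable check, using only the per-object read counts given above:

```python
# I/O operations to discover and reconstruct every object in the repository.
objects = 1_000_000
avg_versions = 3

moab_ops = objects * (4 * avg_versions + 1)  # 4 manifest reads per version,
                                             # plus one listing per object
ocfl_ops = objects * 1                       # one read of the root inventory

print(moab_ops, ocfl_ops)  # 13000000 1000000
```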
Moab made the decision to compute checksums for every file in the object using three different digests: MD5, SHA1, and SHA256. This was an attempt to avoid short-term obsolescence and provide surety that a given file was unaltered, even if one algorithm was shown to be vulnerable to compromise. Within 10 years, two of the three digests had been broken (MD5 [11], SHA1 [12]), demonstrating the need for the next version of Moab to have an easy way to add new digests over the life of the object. The OCFL achieves this. Further, experience with Moab has shown that there is no value in storing more than two checksums for a given file, and especially not MD5, which is useful only as an ephemeral checksum to verify successful file transfers between systems.
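A sketch of what that digest agility looks like in practice: computing an additional, newer digest for every file in an object so that it can be recorded alongside the existing checksums (for instance, in the OCFL’s optional fixity block). The function name and chunk size are illustrative assumptions.

```python
import hashlib
from pathlib import Path

def compute_digests(object_root: Path,
                    algorithm: str = "sha512") -> dict[str, str]:
    """Map each file under an object root (by relative path) to a new digest,
    streamed in 1 MB chunks so large audio-visual files stay out of memory."""
    digests = {}
    for path in sorted(p for p in object_root.rglob("*") if p.is_file()):
        h = hashlib.new(algorithm)
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        digests[str(path.relative_to(object_root))] = h.hexdigest()
    return digests
```

Because the results are keyed by relative path, the new digests can be merged into an existing inventory without disturbing the checksums already on record.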
In conclusion, although Moab is a well-designed AIP with a proven track record, several inefficiencies in the design have been identified that pose scalability challenges, especially when using object-addressable storage, such as AWS S3. The OCFL is an evolution of Moab that retains its core features of forward-delta versioning, version immutability, and file-based deduplication, whilst providing for more efficient object discoverability and reconstruction in both POSIX and object-addressable storage environments and presenting a clearer path for adding new digest algorithms over the lifetime of an object.