Note: this documentation is still under development; additional sections are forthcoming.


The book import process includes the following steps, some of which will require assistance from the LTDS team:

  1. Preparation of pull-list spreadsheet with metadata and file paths per volume
  2. Export of Alma records for all books/serials in the collection
  3. Preparation of Collection-level metadata spreadsheet
  4. File transfer of all needed files, using the directory structure recorded in the pull-list filepaths
  5. Execution of the Curate bulk import process

Metadata Preparation

Digitized books use metadata from two sources: the original pull-list spreadsheet prepared during digitization review, and Alma catalog records.

The table below lists the pull-list metadata fields/columns required for ingest into the repository.

The following fields are also required in books' and serials' Alma records for ingest into the repository:

  • Title
  • Date Issued or Date Created

Reformatting Pull-List Spreadsheets for Curate Ingest

The following spreadsheet template shows the required formatting for a Curate-ready pull-list. While the pull-lists prepared during the digitization and review process may vary, the following spreadsheet columns are required for Curate's bulk import method. For information about metadata requirements, see the Cor Metadata Field Usage documentation.

Note: additional metadata is also extracted from Alma/MARC catalog records; the following fields are recommended for the pull-list itself.

* Required pull-list fields are indicated with an asterisk.

Some file-path-related column headings may vary in the original pull-list depending on how it was created.

| Pull-list Heading(s) | Heading for Importer | Explanation |
| --- | --- | --- |
| Item Number | Item ID | A numeric ID for each individual work in the spreadsheet (e.g. the original row number). Recommended for cross-referencing across pull-list versions later. |
| N/A | source_collection_id | Populated by the ingest team once the Collection has been provisioned in Curate. |
| N/A | Non-unique Title | Indicate "Yes" if the title is known to have multiple copies, editions, or child volumes; this helps the ingest team create parent-child works later. |
| N/A | deduplication_key* | A unique ID for each individual volume in the collection, typically an ARK or barcode number. Added by the ingest team. |
| OCLC Number, Barcode, DigWF ID | other_identifiers | Concatenated list of other local identifiers, e.g. barcode, DigWF ID, OCLC number. Each identifier should carry a prefix indicating its type, and multiple values should be separated by pipes. |
| PID | emory_ark | Emory ARK ID, if applicable. |
| MMS ID or Alma MMSID | ALMA MMSID* | Alma MMSID for the catalog record from which additional metadata will be extracted during import. This field is required by the importer. See also the system_of_record_ID notes below. |
| MMS ID or Alma MMSID | system_of_record_ID | Copy of the Alma ID, stored as metadata in Curate. The prefix "alma:" should be added to each ID. |
| Institution | institution* | Name(s) of the institutions providing the material, e.g. Emory University. |
| Holding Repository | holding_repository* | Name of the library providing the material. |
| Administrative Unit | administrative_unit | Name of the administrative unit within the library, if applicable. |
| Call Number | CSV Call Number | The call number will be supplied from Alma, but it is useful to have it on the pull-list for reference. |
| Enumeration | Enumeration | Volume-level enumeration, if applicable (e.g. Volume 1, Copy 1, Edition, etc.). |
| CSV Title | CSV Title | The title will be supplied from Alma, but it is useful to have it on the pull-list for reference. |
| Content Type | content_type* | Supplied as a URI. Recommended value: |
| Rights - Public Note/MARC 590 Field | emory_rights_statements* | The Emory Libraries-supplied rights statement. |
| Rights - Internal Note | internal_rights_note | Additional internal rights notes or documentation. |
| Desc - Designation (URI) | rights_statement* | Supplied as a URI from rights values, e.g. |
| Visibility | visibility* | See available access controls (Public, Public Low View, Emory Low Download, Rose High View, Private). |
| Data Classification | data_classifications* | Emory-defined data classification type: Public, Confidential, Internal, Restricted. |
| Sensitive/Objectionable Material | sensitive_material | Indicate "Yes" if the volume contains sensitive material. |
| Sensitive/Objectionable Material Note | sensitive_material_note | Additional context for any sensitive-material determination. |
| Transfer Engineer | transfer_engineer | The name of the digitization technician. |
| Barcode | Barcode* | Used to generate certain volume-level filenames. The barcode number should also be added to other_identifiers with the prefix "barcode:". |
| Base Path | Base_Path* | The base directory path where content files are stored on the server. |
| Mbytes or MB Size | MBytes* | The overall file size for all content files in the work. |
| pdf_path, PDF Path, or PDF_Path | PDF_Path** | The base directory path for the volume-level PDF file for the work. |
| PDF Count or PDF_Cnt | PDF_Cnt** | The count of PDF files to be imported. |
| XML Path, xml_path, OCR Path, ocr_path | OCR_Path** | The base directory path for the volume-level OCR file for the work. |
| OCR Count, OCR_Cnt, XML Count | OCR_Cnt** | The count of volume-level OCR files to be imported. |
| Images Path, TIFF Path, Disp_Path | Disp_Path* | Directory containing the page-level image files (TIFFs) > Primary Content: Preservation Master File. |
| TIF Count, Images Count, Disp_Cnt | Disp_Cnt* | The count of page-level image files to be imported. |
| Txt Path, Text Path, Txt_Path | Txt_Path** | Directory containing the page-level plain-text files > Primary Content: Transcript File. |
| Txt Count, Txt_Cnt, Text Count | Txt_Cnt** | The count of page-level text files to be imported. |
| POS Path, POS_Path | POS_Path** | For Kirtas outputs: directory containing the page-level POS files > Primary Content: Extracted Text File. |
| POS Count, POS_Cnt | POS_Cnt** | For Kirtas outputs: count of page-level POS files to be imported. |
| ALTO Path, ALTO_Path | ALTO_Path** | For LIMB outputs: directory containing the page-level ALTO XML files > Primary Content: Extracted Text File. |
| ALTO Count, ALTO_Cnt | ALTO_Cnt** | For LIMB outputs: count of page-level ALTO XML files to be imported. |
| METS Path, METS_Path | METS_Path** | For LIMB outputs: directory for the volume-level METS file to be imported. |
| METS Count, METS_Cnt | METS_Cnt** | For LIMB outputs: count of volume-level METS files to be imported. |
| Rights - Digitization Basis | Accession.workflow_rights_basis | Rights basis determination (e.g. Public Domain) for digitization. |
| Rights Access Basis - Review Date | Accession.workflow_rights_basis_date | Date of the rights review (EDTF format). |
| N/A | Accession.workflow_rights_basis_reviewer | Name of the individual or office performing the rights review. |
| Rights Access Basis - Note | Accession.workflow_rights_basis_note | Rights-related notes about digitization/preservation. |
| Rights - Digitization Basis - Note | Accession.workflow_notes | General notes about digitization/preservation or acquisition. |
| N/A | Ingest.workflow_rights_basis | Rights basis determination (e.g. Public Domain) for ingest and access level. |
| N/A | Ingest.workflow_rights_basis_date | Date of the rights review (EDTF format). |
| N/A | Ingest.workflow_rights_basis_reviewer | Name of the individual or office performing the rights review. |
| N/A | Ingest.workflow_rights_basis_note | Rights-related notes about ingest or migration. |
| Ingest/Migration Event Note | Ingest.workflow_notes | General notes about ingest or migration, e.g. "Migrated to Cor repository from LSDI Kirtas workflow during Phase 1 Migrations, 2019". |

** Required for import, depending on digitization output.
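Before submitting a pull-list, the importer headings above can be sanity-checked with a short script. The sketch below assumes the spreadsheet has been exported to CSV; the column list is drawn from the table (fields marked * only; the ** fields depend on the digitization output, so they are not checked here), and the sample data is hypothetical.

```python
import csv
import io

# Importer headings marked as required (*) in the table above.
REQUIRED = [
    "deduplication_key", "ALMA MMSID", "institution", "holding_repository",
    "content_type", "emory_rights_statements", "rights_statement",
    "visibility", "data_classifications", "Barcode", "Base_Path", "MBytes",
    "Disp_Path", "Disp_Cnt",
]

def missing_columns(csv_text):
    """Return the required importer headings absent from a pull-list CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    headers = reader.fieldnames or []
    return [col for col in REQUIRED if col not in headers]

# Hypothetical two-column fragment, for demonstration only:
sample = "ALMA MMSID,institution\n9912345,Emory University\n"
```

Running `missing_columns(sample)` would flag every required heading except the two present in the fragment.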

Additional Preparation Steps

It is strongly recommended to sort the pull-list CSV by the title column before submitting it for ingest. This helps the repository ingest team identify multiple editions of the same work as well as parent-child relationships.

If a Collection is being split into multiple pull-lists, please identify whether the Title is known to have other copies, editions, or child volumes so that the repository ingest team can be aware of this in the future. This can be done by indicating "Yes" in the Non-Unique Title column.
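The sort can be done in a spreadsheet application, or with a few lines of Python. This is a sketch, assuming the title column uses the "CSV Title" heading from the template above:

```python
import csv

def sort_pull_list(in_path, out_path, title_column="CSV Title"):
    """Sort a pull-list CSV by its title column so duplicate editions
    and parent-child volumes land on adjacent rows."""
    with open(in_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = sorted(reader, key=lambda r: (r.get(title_column) or "").lower())
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```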

Information about Metadata Extracted from Alma Records

As noted above, the pull-list provides certain metadata for the repository, but additional fields are extracted from MARC records exported from Alma:

  • conference_name
  • contributors
  • copyright_date
  • creator
  • date_created
  • date_digitized
  • date_issued
  • edition
  • extent
  • content_genres
  • local_call_number
  • place_of_production
  • primary_language
  • publisher
  • series_title
  • subject_geo
  • subject_names
  • subject_topics
  • table_of_contents
  • title
  • uniform_title

For more information about MARC mappings and field reformatting, see the MARC to Cor mapping worksheet.
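The extraction itself is handled by the import tooling, but its general shape is a tag-to-field mapping over the exported MARC records. The sketch below is illustrative only: the tag assignments shown reflect common MARC usage, and the authoritative mapping (including subfield handling and reformatting rules) is the MARC to Cor mapping worksheet.

```python
# Illustrative subset of a MARC-tag-to-Curate-field mapping; the real
# mapping lives in the MARC to Cor mapping worksheet.
MARC_TO_CURATE = {
    "245": "title",    # 245 = title statement in common MARC usage
    "100": "creator",  # 100 = main entry, personal name
    "250": "edition",  # 250 = edition statement
    "300": "extent",   # 300 = physical description
}

def map_record(marc_fields):
    """marc_fields: dict of MARC tag -> already-flattened string value."""
    return {curate: marc_fields[tag]
            for tag, curate in MARC_TO_CURATE.items()
            if tag in marc_fields}

record = {"245": "Annual report /", "250": "2nd ed."}
```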

Filename Conventions for Bulk Import

The Curate bulk-import process is optimized to work with the following filename conventions in use within digitized book collections. If your collection's files use a different convention, please contact LTDS for support.

Volume-Level Files

The Curate book import preprocessor makes the following assumptions:

  • Kirtas outputs: a filename is supplied in the CSV, using "Output" as the base filename for the volume-level PDF and OCR files:
    • Output.pdf
    • Output.xml
  • LIMB outputs: no explicit filename is supplied in the CSV; instead, a filename is generated from the volume's barcode number for the volume-level PDF and METS files:
    • [Barcode#].pdf
    • [Barcode#].mets.xml
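Under the assumptions above, the expected volume-level filenames for each workflow can be sketched as a small helper (a hypothetical function, not the preprocessor's actual code):

```python
def expected_volume_files(workflow, barcode=None):
    """Return the volume-level filenames expected for a digitization
    workflow, per the conventions described above."""
    if workflow == "kirtas":
        # Kirtas: fixed "Output" base name for the PDF and OCR XML
        return ["Output.pdf", "Output.xml"]
    if workflow == "limb":
        # LIMB: barcode-based names for the PDF and METS files
        return [f"{barcode}.pdf", f"{barcode}.mets.xml"]
    raise ValueError(f"unknown workflow: {workflow}")
```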

Page-Level Files

While file naming practices may vary, it is strongly recommended that all filenames contain or end with a numeric part sequence, such as "0001.tif". The Curate book import preprocessor makes the following assumptions about page-level files:

  • Kirtas filenames are zero-padded to 4 digits (0001.tif, 0085.tif, etc.)
  • LIMB filenames are zero-padded to 8 digits (00000001.tif, 00000085.tif, etc.)

Some file sequences start with zero, some with one. This should be identified as part of the collection preparation process.

Works whose filename sequences include an additional prefix such as an OCLC number should also be identified as part of the collection preparation process.
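During collection preparation, the padding width and starting number of a page-image sequence can be checked with a short stdlib-only script (a hypothetical helper, sketched under the naming assumptions above):

```python
import re

# Trailing numeric part of a page-image filename, e.g. "0001.tif"
PAGE_RE = re.compile(r"(\d+)\.tif$", re.IGNORECASE)

def sequence_info(filenames):
    """Report the zero-padding width, starting number, and count of a
    page-image sequence (Kirtas-style 0001.tif or LIMB-style 00000001.tif)."""
    numbers = []
    widths = set()
    for name in filenames:
        m = PAGE_RE.search(name)
        if m:
            widths.add(len(m.group(1)))
            numbers.append(int(m.group(1)))
    return {
        "padding": widths.pop() if len(widths) == 1 else None,
        "starts_at": min(numbers) if numbers else None,
        "count": len(numbers),
    }
```

Filenames with an extra prefix (such as an OCLC number before the numeric part) still match, since only the trailing digits are inspected.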
