Steps for producing an archive-ready version of pleiades-datasets
./scripts/get_csv.sh
curl https://pleiades.stoa.org/credits.html > html/credits.html
python scripts/get_json.py
./scripts/get_ttl.sh
(In the git commit command below, replace the yyyymmdd datestamp with the current date!)
git add csv html json rdf
git commit -m "yyyymmdd updates"
git push origin master
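If you'd rather not edit the datestamp by hand, a command substitution can supply it; this is just a convenience sketch, assuming a POSIX shell and a date command that accepts the +%Y%m%d format (GNU and BSD date both do):

# same as the commit step above, with today's date filled in automatically
git commit -m "$(date +%Y%m%d) updates"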
Make a GitHub release in the normal way. Use 3-part semantic versioning: if we change the data fields we're including in any sub-component, or if we add/alter other components, we increment the major (first) number. Otherwise, we just increment the middle (minor) number. We would only increment the third (patch) number if we were issuing a corrected version.
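For reference, the release can also be cut from the command line; this sketch assumes the GitHub CLI (gh) is installed and authenticated, and the version number and notes text are placeholders:

# v3.1.0 is a placeholder; substitute the actual new version number
gh release create v3.1.0 --title "v3.1.0" --notes "yyyymmdd data updates"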
Zenodo should automatically create a new entry for the release from GitHub, but the project-level metadata isn't carried over. You'll need to copy, paste, and modify it as appropriate from the prior record. We should look into whether/how to automate this aspect.
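Until that's automated, Zenodo's REST deposition API is the likely hook; the sketch below only lists our existing depositions so the prior record's metadata can be inspected and reused. ZENODO_TOKEN is a placeholder for a personal access token:

# list existing Zenodo depositions as pretty-printed JSON (ZENODO_TOKEN is a placeholder)
curl -s "https://zenodo.org/api/deposit/depositions?access_token=$ZENODO_TOKEN" | python -m json.tool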
NB: The Zenodo master record DOI is , which will always redirect to the most recent record.
Visit archive.nyu.edu, log in, navigate to the appropriate collection, and point-and-click to happiness. You may have to copy metadata from an earlier version. Once the submission is complete and a Handle URI has been assigned, go back to Zenodo and enter that handle as an alternate identifier for the dataset.
Promulgate on Pleiades social media. Time for a blog post at pleiades.stoa.org.