Releases: FAIRMetrics/Metrics
FAIR Metrics, Evaluation results, and initial release of automated evaluator code
This release parallels the formal publication of the FAIR Metrics rubric and exemplar Metrics set (https://www.nature.com/articles/sdata2018118). It contains the latest updates to the Metrics, based on responses to the Issues raised in GitHub, and the results of the evaluation study. It also contains the strawman version of the Metrics Evaluator code (for access to the Evaluator interface, contact any of the authors).
Proposed FAIR Metrics and results of the Metrics evaluation questionnaire
The initial Metrics have been updated based on GitHub user comments and corrections. In addition, we evaluated the Metrics via a questionnaire. The results of this evaluation are in the "Evaluation Of Metrics" folder in this repository, and discussions about the questionnaire, the answers provided, and what is or is not an "acceptable" answer are ongoing in the GitHub Issues pages.
Initial set of proposed FAIR Metrics for community discussion
The FAIR Metrics Authoring group (Wilkinson, Dumontier, Sansone, Schultz, Doorn, and Olavo Bonino) has produced this initial set of generally applicable metrics spanning all of the FAIR sub-principles. These are intended as a starting point for broader community discussion, both about the authoring of Metrics (the rubric) and about the proposed metrics themselves.