Metadata is often the first information that a user interacts with when looking for data. Understanding that there is typically only one chance to make a good impression, data and information repositories have placed an emphasis on metadata quality as a way of increasing the likelihood that a user will have a favorable first impression. This session will explore quality metrics, badging or scoring, and metadata quality assessment approaches within the Earth observation community. Discussion questions include:
● Does your organization implement metadata quality metrics and/or scores?
○ What are the key metrics that the scores are based on?
○ What priorities are driving your metadata quality metrics? Different repositories have different priorities, such as an emphasis on discoverability, accessibility, usability, or provenance.
● Does your organization make metadata quality scores publicly viewable? What are the pros and cons of making the scores publicly accessible?
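To make the discussion of quality metrics concrete, here is a minimal sketch of one common approach: a weighted completeness score over required metadata fields. The field names, weights, and dict-based record structure are illustrative assumptions, not any repository's actual rubric.

```python
# Hypothetical weighted-completeness metric for a metadata record.
# Field names and weights below are illustrative only.
REQUIRED_FIELDS = {
    "title": 3,
    "abstract": 2,
    "doi": 3,
    "temporal_extent": 1,
    "spatial_extent": 1,
}

def completeness_score(record: dict) -> int:
    """Return a 0-100 score based on the weighted presence of key fields."""
    total = sum(REQUIRED_FIELDS.values())
    earned = sum(w for field, w in REQUIRED_FIELDS.items() if record.get(field))
    return round(100 * earned / total)

# Example: a record missing its spatial and temporal extents.
record = {
    "title": "Sea Surface Temperature",
    "abstract": "Daily global SST fields.",
    "doi": "10.0000/example",
}
print(completeness_score(record))  # → 80
```

Real scoring systems typically go further than presence checks, weighing controlled-vocabulary compliance, link resolvability, and provenance detail, but a weighted checklist like this is a common starting point.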
How to Prepare for this Session:
Presentations:
https://doi.org/10.6084/m9.figshare.11553606.v1
https://doi.org/10.6084/m9.figshare.11551182.v1
View Recording: https://youtu.be/lbza3gEHmtQ
Takeaways
- Visualizations of the metadata quality metrics need to be easily understood or well documented to be effective
- Diverse ideas exist, and new metrics are being rolled out soon (U.S. Global Change Research Program & NCA)
- Ensuring that metrics align with existing frameworks such as the FAIR principles is also important