FAIRsFAIR documents for community review
The following deliverables are now open for comments by the community.
FAIRsFAIR's landscape assessment found that clear, easy-to-understand data policies can positively influence researchers in making their data FAIR. Based on the instruments used during our policy support programme, we have developed an easy-to-use FAIR data policy checklist to support policy-makers at all levels in ensuring their policies align with the FAIR Principles and provide clarity on exactly what is expected of researchers. The checklist is based on FAIRsFAIR policy enhancement recommendations and will help users assess whether specific elements of their data policies are FAIR-enabling. It provides practical recommendations on which policy elements should be addressed in data policies to improve alignment with the FAIR Principles.
We would be very grateful for feedback on the draft FAIR Data Policy Checklist. We are particularly keen to hear your views on whether the assessment statements are appropriate and whether you feel any policy elements are missing. Comments can be added directly to the Google Doc or sent to firstname.lastname@example.org. The draft will be open for consultation until February 14, 2022.
ACME-FAIR is a 7-part guide that FAIRsFAIR is releasing for consultation in a dedicated Zenodo community. Each part deals with one of the key issues that Research Performing Organisations (RPOs) face in establishing the capabilities to put the FAIR principles into practice, and is informed by the project's engagement with community initiatives to 'turn FAIR into Reality', and by the report of the same name. We recommend that universities, institutes and other RPOs consider providing these capabilities as vital steps towards 'FAIR-enabling practice'. The overall purpose of ACME-FAIR is to help those managing and delivering relevant professional services to self-assess how they are enabling researchers and their colleagues to do just that.
The 7 guides are being released as self-standing documents, each with a thematic introduction, an overview of the relevant capabilities, and a rubric for assessing the levels of maturity and community engagement for each capability.
We warmly invite feedback on the draft guides, and there are several ways to give it. One is to comment directly on the relevant Google Doc versions (linked on p. 5 of each guide). If you would prefer not to be identified, or would like to use a few rating scales, we have also provided a feedback form: it asks how far you agree with four simple statements and invites any comments you wish to add. Alternatively, if you prefer to share your thoughts by email, the FAIRsFAIR task lead Dr Angus Whyte can be contacted at email@example.com.
FAIRsFAIR has published the “CoreTrustSeal+FAIRenabling Capability and Maturity Report”, an updated version of the previous CoreTrustSeal+FAIR Overview and the Draft Maturity Model Based on Extensions and/or Additions to CoreTrustSeal Requirements, both published in August 2020. The report was written for data repositories and incorporates feedback from the CoreTrustSeal Board. It presents updates to the FAIRsFAIR alignment of CoreTrustSeal with repository characteristics that enable FAIR data. Though many of the CoreTrustSeal Requirements contribute to enabling FAIR data, each FAIR Principle is aligned with a single CoreTrustSeal Requirement to streamline the preparation of self-assessment statements and supporting evidence. In this text, each “Requirement to Principle” mapping is presented alongside the current iteration of the RDA FAIR Data Indicators and the current FAIR tests as implemented by the F-UJI tool. Together these provide all the context necessary for a repository to self-assess as a CoreTrustSeal TDR that enables FAIR data.
Your comments and suggestions on the current version will help us produce the most helpful report possible, so please use the link below to access the report as a Google Doc and insert your feedback directly.
The FAIRsFAIR Synchronisation Force is very glad to share the first, preliminary version of the FAIRsFAIR White Paper. Built on the outcomes and reports of the series of three Synchronisation Force workshops held across 2019, 2020 and 2021, the White Paper takes FAIRsFAIR's main recommendations a step further, mapping them against three priority topics in the Strategic Research and Innovation Agenda (SRIA) of the EOSC and the relevant Task Forces under the EOSC Association, in order to facilitate their impact and uptake.
This milestone (M4.3) document updates the previous CoreTrustSeal+FAIR Overview and the Draft Maturity Model Based on Extensions and/or Additions to CoreTrustSeal Requirements (M4.2). The latter document provides extensive context and references the component documents that form the foundation for this work.
The authors would like to thank the CoreTrustSeal Board for their valuable feedback on a pre-publication version of this text, including the alignments and target capabilities. While the authors recommend that repositories adopt this approach, they acknowledge that no formal adoption and integration into the CoreTrustSeal requirements or processes can take place outside the scheduled, periodic community review process.
The first of three reports in a series, this report builds on the landscaping effort published in March 2020 as Persistent Identifiers and Interoperability: Outcomes from the FAIRsFAIR Survey of the European Scientific Data Landscape, which reviewed and documented the state of FAIR in the European scientific data ecosystem and identified commonalities and possible gaps in semantic interoperability and in the use of metadata and persistent identifiers across infrastructures. The new report is aimed specifically at an audience of researchers, data stewards, and service providers, and serves as an explanatory guide to the use of PIDs, metadata, and semantic interoperability.
This document is the first iteration of recommendations for making semantic artefacts FAIR. These recommendations result from initial discussions during a brainstorming workshop organised by FAIRsFAIR as a co-located event with the 14th RDA Plenary meeting in Helsinki. We propose 17 preliminary recommendations, each related to one or more of the FAIR principles, and 10 best practice recommendations to improve the overall FAIRness of semantic artefacts. These initial recommendations should not be considered a gold standard, but rather a basis for discussion with the various stakeholders of the semantic community.
This report presents the results of the first year of Task 2.3 of the FAIRsFAIR project. It provides guidelines on features that enable repositories not only to host FAIR digital objects, but also to be FAIR themselves. The recommendations were collected at the workshop “Building the data landscape of the future: FAIR Semantics and FAIR Repositories” (22 October 2019, Espoo, Finland), hosted by this task together with FAIRsFAIR Task 2.2. The workshop drew input from more than 70 participants from six communities: the European Life Sciences Infrastructure for Biological Information (ELIXIR), the European Incoherent Scatter Scientific Association (EISCAT), the Social Sciences and Humanities (SSH), the Integrated Carbon Observation System (ICOS), the European network of Long-Term Ecosystem Research sites (eLTER), and the Data Publisher for Earth & Environmental Science (PANGAEA). Participants' backgrounds lay in infrastructures, research, and libraries.
This report marks the first milestone of the task. It presents a survey of existing FAIR assessment frameworks, a proposed set of guiding principles and desiderata for the FAIR assessment framework to be constructed, and three ‘FAIR service assessment’ case studies. We are seeking broad feedback on this report to inform subsequent work and, ultimately, to feed into a FAIR assessment framework for data services that delivers clear direction and value to service owners and the community at large.