Towards the right standards

The intersection of open science, responsible research and innovation, and standards

doi: 10.7203/metode.11.16103


The introduction of standards in research and development leading to new products or innovative processes can be thought of as a particularly technical approach to framing scientific enterprises. At the other end of the spectrum, open science or responsible research and innovation may initially be thought of as concepts with no underlying technical approaches to support them. In reality, as currently practiced, the development and use of standards engage significant non-technical aspects, needing to take into account research cultures or desired societal outcomes. Similarly, open science and responsible research and innovation can operate using very practical and technical approaches. This essay focuses on the intersections of these concepts to try to contribute to larger discussions in both the research and governance communities as to how researchers should conduct their research, and what the respective responsibilities of researchers, their institutes, and their supporters are.

Keywords: standards, open science, responsible research and innovation.

Open science, responsible research and innovation, and standards

While there is significant overlap in the framings, purposes, and outcomes of the concepts of responsible research and innovation (RRI) and open science (OS), we can roughly separate them initially as focusing on science for and with society in the former case, and on the process of research and the disposition of findings in the latter. To be clear, society as a whole benefits from open science, and we can certainly think of it as being critical to responsible research and innovation. It is useful to separate these to some degree, however, for the purpose of understanding whether and how the use of standards could influence the robustness of RRI and OS.


Open science, as a concept, can be thought of as a way to make science as accessible and responsive as possible to society by different means, from open access to publications to promoting citizen science. Above, open science logo for the Open Source Initiative. / Greg Emmerich

Open science includes many stakeholders and their representative communities may have different working definitions of open science. Most inclusively, open science can be thought of as a way to make science as accessible and responsive as possible to society. Such accessibility will of course require some discretion to protect sensitive or potentially dangerous information from being unnecessarily widely shared.

The pillars of open science may also vary between stakeholder communities, but in general all will include open access to publications, open data availability, educational resources on how to participate in open science, a review component to assure quality and integrity, and citizen scientist participation.

All of these areas are currently under discussion at the European level (for example, in the Open Science Policy Platform, a high-level advisory group to the European Commission Research Commissioner) and at national levels (for example, in the countries participating in the Council for National Open Science Coordination) (CoNOSC, 2020; European Commission, 2020).

While these discussions may come to different conclusions about the best ways to achieve open science, there will certainly be some areas where it will be desirable to have those processes at least aligned, if not standardized. The pillar of open data, particularly as captured in the FAIR data concept (findable, accessible, interoperable, reusable), would in fact seem to require standards to assure its viability.

Responsible research and innovation provides both analytic and practical frameworks to consider when undertaking research. We can consider RRI from the analytic perspective of social sciences (see Owen et al., 2012, an early and comprehensive description of RRI), but we can think of it as well from the view of researchers doing work that is encompassed by the concepts of RRI. In fact, while RRI is frequently described by the pillars that the EU has used to functionalize its definition (public engagement, open access, gender equality, ethical issues, education), a 2014 flyer (European Commission, 2014) describing RRI as Europe’s ability to respond to societal challenges points more toward the actions required by researchers themselves («choose together», «do the right “think” and do it right») as a defining factor. Interestingly, this document begins to touch on a need for standards (especially, in aligning not only outcomes but processes) to assist researchers in accomplishing these tasks.

It is quite reasonable to think about standards in the first instance as technical solutions to technical problems. We can avoid having ten different stoppers for laboratory glassware by standardizing openings and closures. Industries can work to assure that companies can compete on new ideas and improved products by enforcing standards as was famously and successfully accomplished by the semiconductor consortium Sematech (Hof, 2011). But could we have the equivalent of an ISO standard for RRI?

«Open access and open data in principle can be handled as technical issues, with their own sets of standards»

A problem in thinking about standards for RRI is the conceptualization of standards as applying to technical and, usually, quantitative areas. Thus, if we frame the question as one of where standards can be applied, it is much easier to imagine standards for open science than for RRI. The concepts underlying OS are much more technical, at least on first inspection, than those of RRI. Open access and open data, two major areas that OS proponents want to accomplish, can in principle be handled as technical issues, with their own sets of standards. Open data is already described as being (or not being) FAIR; that is, as mentioned before, findable, accessible, interoperable, and reusable. These principles, provided by Force11 (2017), offer in essence a set of standards and metrics for defining whether those standards have been met.

It would then not be that far a step to capture these in a formal standard. The last and hardest step, of course, is the universal adoption of such standards. In some respects, the communities concerned about FAIR data (indeed, most researchers) are at least partway there already in their use of data management plans. When employed, such plans act not as an obstacle to accomplishing research but rather as an inherent part of research planning, in the same way that technical standards are simply taken into account in research planning.
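As a purely illustrative sketch of how such principles could be expressed as checkable criteria, consider the record below. The field names and the check are hypothetical and are not part of any formal FAIR specification or metric; they simply show that a dataset description, of the kind found in a data management plan, can be made machine-readable and tested against minimal expectations.

```python
# Illustrative only: a hypothetical minimal metadata record and check,
# not an implementation of any formal FAIR standard.

REQUIRED_FIELDS = {
    "identifier",  # findable: a persistent identifier (e.g., a DOI)
    "title",       # findable: a human-readable description
    "access_url",  # accessible: where and how the data can be retrieved
    "format",      # interoperable: an open, documented file format
    "licence",     # reusable: explicit terms for reuse
}

def missing_fair_fields(metadata: dict) -> set:
    """Return the minimal FAIR-style fields that are absent or empty."""
    return {field for field in REQUIRED_FIELDS if not metadata.get(field)}

# A hypothetical dataset record
example_record = {
    "identifier": "doi:10.0000/example",
    "title": "Example measurement dataset",
    "access_url": "https://repository.example.org/datasets/123",
    "format": "CSV",
    "licence": "CC-BY-4.0",
}

gaps = missing_fair_fields(example_record)
print("Missing fields:", gaps if gaps else "none")
```

Checks of roughly this kind, however they end up being formalized, are what would turn the FAIR principles from aspirations into standards against which data and data management plans could actually be evaluated.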


Nowadays, there seem to be no substantial objections to publishing research as openly and quickly as possible. However, what open means remains under debate. Even during the discussions concerning Plan S, a proposal by a group of funders (the European Research Council among them) with respect to requirements for posting papers in open access if money from those funders was to be used, there was a lack of agreement around many aspects, including what open publishing entailed. / Science Europe

As researchers, we can then also think about whether standards for open publishing are possible and desirable. The discussions around open access have been percolating for decades, and at this point it is probably reasonable to say that there is no objection to publishing research findings in a way that is as open and as quickly accessible as possible, taking into account potential privacy or security issues.

However, what open means with respect to access to research papers remains remarkably fuzzy. The lack of agreement around open access was on display during the discussions concerning Plan S, a proposal by a group of funders with respect to requirements for posting papers in open access if money from those funders was to be used. The group currently includes seventeen national funders, with support expressed by the European Commission and one of its funding bodies, the European Research Council. The singular target of Plan S, as described by the group of funders called cOAlition S, is that «With effect from 2021, all scholarly publications on the results from research funded by public or private grants provided by national, regional and international research councils and funding bodies, must be published in Open Access Journals, on Open Access Platforms, or made immediately available through Open Access Repositories without embargo» (cOAlition S, 2019). This target is accompanied by ten principles, and work on implementation is ongoing.

«What open means with respect to access to research papers remains remarkably fuzzy»

What was particularly interesting in the discussions around the first draft of the plan was an apparent lack of agreement on virtually every aspect. Is the concept of open in a hybrid journal sufficient (that is, researchers or institutions pay an otherwise subscription journal for a specific article to be open access)? Is it acceptable for the community to use hybrid journals for a while, but not after an arbitrary end date? Are preprints an acceptable alternative? Or the posting of a pre-acceptance manuscript on one's own server? What was compelling in this discussion was not so much the details (though these are important) but that the community had been talking about this issue for so long, and those discussions somehow could not be synthesized into policy, even by a relatively small group of important actors.

Does this indicate that even loose standards («principles», «best practices», and the like) would be difficult or impossible for open access? Or can we imagine a case in which the definition of open is left up to individual funders (as many now have their own policies) or even to research sectors? These solutions, of course, move away from the idea of standards as universal.

Applying non-technical principles to improving technical standards


Above, a break during the most recent BioRoboost workshop, held in October 2019. This European-funded project aims to improve the standardization of biological systems within the framework of synthetic biology. To do so, several questions must be discussed, such as why standards are necessary at all, or which standards should be developed specifically for synthetic biology. / Michele Garfinkel

As communities are considering the role that standards may play in expanding and improving open science and responsible research and innovation, we can also look at the reverse. How can the principles of open science, or the structures of RRI, help us to improve standards? The European-funded project BioRoboost (Fostering Synthetic Biology Standardisation through International Collaboration) (2019), in which I participate, is focused on improving the standardization of biological systems, broadly construed. The earliest framings of synthetic biology focused on emphasizing the engineering part of genetic engineering. If this framing is eventually to be functionalized, synthetic biology will require standards, as engineering does.

We can then make a parallel with any system of specification. One useful comparator might be FAIR data. Specifically, what do we need in the specification and execution of synthetic biology experiments and applications to assure that each «thing», be it a chassis, a measurement device, or an approach to risk assessment, is, in the broadest sense, findable, accessible, interoperable, and reusable? As a synthetic biology research community, we are unlikely to achieve all of these quickly and comprehensively. But some lessons that we can take from the discussions around open science are very useful, particularly with respect to how open science is not exclusive of high-quality and responsible science. Our communities may need, though, to create modified or new structures to assure that quality and responsibility. One area where these concerns are particularly noted is with respect to peer review, as the sharing of research results now no longer occurs only through peer-reviewed journals.

Looking toward the framework of RRI, and more generally issues around responsible conduct of research and research integrity, will be even more fruitful for thinking about how to approach standardization. We learn from rigorous literatures that mechanisms for working through even the most technical questions are subject to sectoral, cultural, gender, and national biases. Within BioRoboost (and in many other projects) we are trying to apply these lessons in approaching all of the concerns about the usefulness of standards for researchers.

Further, we can use the development of standards to assist an understanding of the role of open science in promoting and assuring responsible conduct of research broadly. It is frequently said (though with not enough evidence yet to draw conclusions) that openness will help to improve integrity because «everyone can see». But science has not been hidden per se to date, only looked at in perhaps a more compartmentalized manner. As just one example from a small set of journals, about 20 % of post-peer-review, pre-publication primary research papers contain aberrations that must be pursued by journal editors prior to acceptance. About half of these are the result of authors manipulating images or data in such a way as to make the paper «look nicer», but on removal of these manipulations, the results stand. The other half contain varying degrees of manipulation, from beautification to outright fraud, that may change the conclusions (Pulverer, 2015). There is no reason to think such aberrations do not occur in a more «open», less overseen literature. Standards, of course, are much more tightly overseen, but there are still differences in how standards develop between communities that may remain unresolved.

experiment laboratori estàndards

To operate within the responsible research and innovation framework, researchers need training and tools. If researchers do not know what constitutes improper behaviour or even a manipulation of results, we cannot really hold that against them. / Louis Reed

A key realization with respect to research integrity generally, and even RRI more specifically, is that in order to operate within those frameworks, researchers need both training and tools. It is easy to be disillusioned about a 20 % aberration rate, but if researchers do not know what constitutes an improper manipulation, we cannot really hold that against them. Similarly, it is becoming rapidly apparent that the need for standards, the uses of standards, and the roles of individuals and communities in assuring their proper and necessary use will require training. In principle, that training would fit easily into more general training in responsible conduct of research. Unfortunately, the requirements for this type of training remain idiosyncratic and vary widely between funders, institutions, and countries. This is an area where those concerned about standards could take the lead and work towards providing training, at least within the community, both for the value of that demonstration and for important substantive reasons.

The distance then between applying a technical standard to solve a technical problem and asking for a process standard (e.g., «think about your problem engaging a set of stakeholders prior to submitting a grant proposal») may not be so far. The difference rather would be in how users (researchers) would view the use of those processes. Is this something that can be regulated? Or, is «think about this problem» something that researchers simply do as a matter of course, and trying to add a step to standardize it in this case does become excessive rather than helpful?

Whose responsibility?

Contemplating how responsibilities may be undertaken, it may be useful to think about responsibility's component parts: the desired outcome, and the performer(s) of the particular actions needed to reach that outcome. Identifying «someone» or «an entity» as needing to be responsible is an important first step. But those identities need to be defined sooner rather than later. It will matter who, or which agency, is specifically responsible, for example, for assuring that a standard will work in an open science environment or that researchers are properly trained in how to employ standards in their work.

A perhaps tangential but important responsibility regards the type of work that researchers could or should do to contribute to improving standards for the entire community. Different research and organizational sectors approach the issue of routine or non-novel work in different ways. In for-profit organizations, this type of work may be baked into the overall work plan, and appropriate hiring assures that the work is done. But, for example, in the academic sector, where the underlying research to support standards development might need to happen, it is difficult to direct that such research happens. Incentives, particularly relating to the provision of significant grants, could improve that situation. But ultimately such research must be seen as being valued by the community, and not as an appendage (Garfinkel, 2012).

Finally, a clear responsibility of the research community must be to help decision-makers understand where standards are necessary and how the research community should be involved in their development. One important and underexplored problem with imposing standards (or regulation, or any «rule» most broadly scoped) is that, by definition, they decrease diversity. Sometimes this is good: a «diversity of regulations» would not a priori be desirable or helpful. But in other cases, standardization can destroy diversity that was inherently necessary in the system. In some cases, that diversity allows for competition, benefiting, for example, consumers or any users of a product or technology.

Particularly in research, a period of competing standards can be healthy. It is only through experimentation that the community can definitively assess the value of particular standards, and that experimentation, given the nature of research, will take time. Part of our collective responsibility, then, must relate to protecting the ability to try different approaches, while simultaneously working to assure that useful standards are imposed and enforced as needed. This is not easy or straightforward. But particularly in emerging areas of biotechnology research, where concerns about a particular approach's usefulness, safety, or societal desirability are already key parts of policymaker discussions, this last piece of supporting some ambiguity around standardization, followed by robust adoption, should contribute to improved governance and societal outcomes.

References

BioRoboost. (2019). Bioroboost. Retrieved April 1, 2020, from http://standardsinsynbio.eu/?page_id=53

cOAlition S. (2019). About Plan S. Retrieved April 2, 2020, from https://www.coalition-s.org/

CoNOSC. (2020). Council for National Open Science Coordination. Retrieved April 2, 2020, from https://conosc.org/

European Commission. (2014). Responsible research and innovation: Europe’s ability to respond to societal challenges. European Union. Retrieved April 1, 2020, from https://ec.europa.eu/research/swafs/pdf/pub_rri/KI0214595ENC.pdf

European Commission. (2020). Open Science Policy Platform. Retrieved April 2, 2020, from https://ec.europa.eu/research/openscience/index.cfm?pg=open-science-policy-platform

Force11. (2017). The FAIR data principles. Retrieved March 31, 2020, from https://www.force11.org/group/fairgroup/fairprinciples

Garfinkel, M. (2012). ESF strategic workshop on biological containment of synthetic microorganisms: Science and policy. European Science Foundation Exploratory Grant. Retrieved April 2, 2020, from https://www.embo.org/documents/science_policy/biocontainment_ESF_EMBO_2012_workshop_report.pdf

Hof, R. D. (2011, July 25). Lessons from Sematech. MIT Technology Review. Retrieved March 31, 2020, from https://www.technologyreview.com/s/424786/lessons-from-sematech/

Owen, R., Macnaghten, P., & Stilgoe, J. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751–760. http://doi.org/10.1093/scipol/scs093

Pulverer, B. (2015). When things go wrong: Correcting the scientific record. EMBO Journal, 34, 2483–2485. http://doi.org/10.15252/embj.201570080

© Mètode 2020 - 105. Standards - Volume 2 (2020)

Michele Garfinkel is head of the Science Policy Programme at EMBO (Heidelberg, Germany). Her major areas of policy research are biotechnology, responsible conduct of research, and scientific publishing. Previously she was a policy analyst at the J. Craig Venter Institute, where her research focused on identifying emerging societal concerns associated with new discoveries in genomics, particularly synthetic biology. She was a research fellow at the Center for Science, Policy and Outcomes at Columbia University, and earlier a research associate at the American Association for the Advancement of Science (AAAS). She holds a PhD in Microbiology from the University of Washington, an A.B. from the University of California, Berkeley, and an M.A. in Science, Technology, and Public Policy from the George Washington University. She is an elected Fellow of the AAAS.