Open science practices in our group

Since the beginning of 2018, we have adopted Open Science practices in our group. Why? For reasons related to ethics, quality, and impact.

We are mostly funded by taxpayers; in the European context, for example, by the ERC. We are, morally and increasingly also legally, obliged to make our work as widely and completely available to the public as possible.

Presently, only a tiny part of the research published in HCI is openly available. This is due to paywalling by the main publishers, including the ACM. Authors are not to blame: at the moment, the publication forums in HCI do not offer any genuinely good OA options. Price tags for OA are unacceptably high, even though authors do almost all the work, from research to reviewing.

Open Science is more than just Open Access (https://en.wikipedia.org/wiki/Open_science). It also includes open research, open education, open notebooks, etc. Releasing datasets and code allows other groups to replicate and build on our work. Open Science practices also promote scrutiny by peers and can thereby increase the quality of research: they support quality monitoring against HARKing (hypothesizing after results are known) and p-hacking.

Characteristics of HCI and Lack of Top-Down Regulation

Human-computer interaction is, as a field, somewhat special. For one, we do not just write code or run experiments with human participants, the usual suspects of Open Science. We also design artefacts, study vulnerable populations, and analyze privacy-sensitive datasets. Some HCI groups are funded by, or are directly part of, corporate research. HCI is also a multi-polar field with very different views on what constitutes good research.

Different views and interests have stalled progress on Open Science. Important discussions are going on: at CHI this year, for example, there was discussion on pre-registration of studies and on the Center for Open Science. However, first attempts, such as RepliCHI, have already failed. Presently, there are hardly any top-down mechanisms in place, such as regulation or incentives.

In the present situation, research groups must assume a proactive role and start doing Open Science bottom-up. What does this mean?

Our take on Open Science

In December 2017, we went for a retreat to the beautiful Töölönlahti to define our stance on Open Science. We went through the main arguments and practices in other fields. Our stance, in the end, was that Open Science should be the default option for everything we do. But we understand that there are occasionally good reasons to deviate, for example to protect the privacy of our participants. By making Open Science the default modus operandi, we send the signal that it is not a strategic choice but a moral obligation toward our stakeholders.

Every group discussing Open Science will ask: Is it worth it? It implies a lot more work. Our approach is to turn Open Science from an obligation (boring; more work) into a tool that promotes quality and impact in our field.

What does this mean concretely? Attached is a template that we use for planning our projects. The project leader fills it in with the co-authors, and it is then jointly agreed upon as a plan that can later be updated if all parties agree. The template includes, among other things:

  • Summary of project goals and intended scientific contributions
  • Publication target and open access license
  • Definition of supplementary materials associated with the paper vs. those stored by other means
  • Repositories and licenses for publishing datasets, code, and software, including maintenance plans
  • Publicity plan for promoting visibility and sharing outside the research community
  • Internal quality monitoring plan for highest quality practices in experimental research, statistical inference etc.
  • Plans for publishing intermediate results in our blog and putting out notebooks and tutorials

Feel free to appropriate it as you see fit.
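As a rough illustration only, the template's fields could also be kept in a machine-readable form so that a plan can be checked or versioned alongside the project. The field names below are a hypothetical sketch, not our actual document:

```python
from dataclasses import dataclass

# Hypothetical sketch of the project-plan template described above.
# All field names are illustrative, not the group's actual template.
@dataclass
class OpenSciencePlan:
    goals: str                    # summary of goals and intended contributions
    publication_target: str       # venue and open access license
    supplementary_materials: list # materials shipped with the paper vs. stored elsewhere
    repositories: dict            # repository and license per artifact, with maintenance plan
    publicity_plan: str           # visibility and sharing outside the research community
    quality_monitoring: str       # internal checks on experiments and statistics
    intermediate_outputs: list    # blog posts, notebooks, tutorials

# Example plan; the concrete values are made up for illustration.
plan = OpenSciencePlan(
    goals="Study input technique X; contribute a public dataset",
    publication_target="CHI, CC BY 4.0",
    supplementary_materials=["appendix.pdf"],
    repositories={"dataset": "institutional repository, CC BY 4.0"},
    publicity_plan="Blog post and press release",
    quality_monitoring="Pre-registration; internal review of analysis code",
    intermediate_outputs=["analysis notebook", "tutorial"],
)
print(plan.publication_target)  # prints "CHI, CC BY 4.0"
```

Keeping the plan as structured data is just one option; a shared text document, as we use, works equally well.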

Sought Positive Effects

I believe that Open Science may promote better research in our group:

  • Because the overhead of an Open Science project is higher, we are forced to plan more carefully and set more ambitious objectives.
  • Because we explicitly aim to expose our work, Open Science promotes interactions with and accountability toward our stakeholders.
  • Replicability improves. Not only that: we explicitly aim to contribute higher-quality datasets to open research (e.g., https://userinterfaces.aalto.fi/136Mkeystrokes/).

Although we are just one group, positive examples may encourage other groups to consider Open Science. Wider adoption might have positive effects on HCI as a whole.

It is my personal opinion that HCI research is going through a crisis. On the one hand, it addresses highly visible, important topics; on the other, it fails to deliver. I have discussed earlier, at CHI 2016 and 2017, the erosion of our theoretical and methodological basis (https://dl.acm.org/citation.cfm?id=3025765) and how our problem-solving capacity should be increased (https://dl.acm.org/citation.cfm?id=2858283).

How could Open Science help? By remedying HCI's broken incentive and feedback system. The dynamics of unhealthy incentive systems in science are described well in this eye-opening paper: Natural Selection of Bad Science (http://rsos.royalsocietypublishing.org/content/3/9/160384).

Some indicators of this crisis in HCI include:

  • Low incentives for aiming high: close to 700 full papers are published at the flagship conference each year, which is out of sync with the quality of research outputs. Compare this to, e.g., the 100+ papers at SIGGRAPH. More worrying is that, among HCI researchers, there is more talk about the quantity of papers published than about their substantive contributions.
  • Recognized issue with replicability (remember RepliCHI?).
  • Recognized issues with statistical conclusion validity due to low sample sizes (see https://dl.acm.org/citation.cfm?id=2858498) and misuse of inferential statistics (a whole new book came out on this topic recently).
  • Issues with scientific identity, which at the extreme are reflected in anti-science views. It is not uncommon to meet researchers at CHI who do not consider HCI to be science, or even research.
  • Recognized issues with progress (http://interactions.acm.org/archive/view/march-april-2015/the-big-hole-in-hci-research-insights) and accumulation of knowledge. Shortcomings of the prevalent “point design, point study” approach have been raised many times. We also lack shared objectives against which progress could be demonstrated.

We are working hard to put our money where our mouth is. Here’s a list of Open Science contributions to the CHI 2018 conference that just finished:

Happy First of May!