This is a guest post by Mark Coulson, Professor of Psychology at the School of Human and Social Sciences.
I’ve had a longstanding love affair with open science, but it took time for us to properly consummate our relationship. My first pair of open science ‘badges’, from an American Psychological Association journal, was exciting, proclaiming both Open Data and Open Materials. Neither my data nor my materials seemed to generate much interest, and the paper’s citation count has made no discernible impact on my h-index. I still like the badges, though.

One nice thing about open science is that you can dip a toe in it. Open Methods are easy – set up an online project on a repository (I use osf.io) and attach your method (I exported an online Qualtrics survey). As anyone can access and download this, you have effectively gifted the community your methodology.
Open data are a little trickier, principally because of the importance of fully anonymizing participants, but also because complex data files can end up impenetrable even to the people who created them (note: this might just be me). Still, going for open data builds confidence. It tells the research community you’ve done a good job and that you are prepared to show your workings to the world, and who knows – someone might find something interesting in there that you didn’t, want to collaborate, publish more, and do more open science.
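If it helps, the mechanics need not be elaborate. Here is a minimal sketch in Python, with entirely hypothetical file and column names, of stripping direct identifiers and replacing participant codes with one-way hashes before sharing:

```python
# Minimal anonymization sketch (pandas). The file and column names
# ('raw_responses.csv', 'email', 'participant_code', ...) are hypothetical.
import hashlib

import pandas as pd

df = pd.read_csv("raw_responses.csv")

# Drop direct identifiers outright rather than trying to mask them.
df = df.drop(columns=["email", "ip_address", "qualtrics_response_id"])

# Replace participant codes with salted one-way hashes, so rows can still
# be linked across files without exposing the original codes.
SALT = "keep-this-out-of-the-shared-repository"
df["participant"] = [
    hashlib.sha256((SALT + str(code)).encode()).hexdigest()[:12]
    for code in df.pop("participant_code")
]

df.to_csv("open_data.csv", index=False)
```

A short data dictionary explaining what each remaining column means does the rest, and goes a long way towards curing the impenetrable-even-to-its-creator problem.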
But then there’s the inescapable feeling that this is all a bit narcissistic – that open science is rather ‘look at me!’ A sort of Instagram-with-data. Or the even more grandiose version of narcissism which holds that your ideas are so brilliant that others will immediately steal them. (Don’t worry, they won’t. You’re not that good. And in any case, if you’re really worried you can always embargo your projects).
And more pessimistically, the nihilist in me knows that in fact the universe doesn’t care, nobody cares, no one looks at these things, and even blatantly flippant examples fail to leave a ripple on the surface of scientific discourse.
Still, the hardest part for me, and what really interfered with a full, committed relationship, was pre-registration – the specification, before the data are collected, of exactly how you will analyse them.
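In practice the commitment can be as concrete as a short analysis script, written and time-stamped (on osf.io, say) before a single participant is recruited. A minimal sketch, with hypothetical variable names and thresholds rather than anything from my actual study:

```python
# A hypothetical pre-registered analysis plan, frozen before any data
# are collected. All names and thresholds here are illustrative.
from scipy import stats

ALPHA = 0.05  # declared in advance, not chosen after peeking at the data

def exclusion_rule(row) -> bool:
    """Pre-specified exclusions: incomplete responses and implausibly
    fast completion times (under 120 seconds)."""
    return bool(row["complete"]) and row["duration_s"] >= 120

def primary_analysis(group_a, group_b):
    """The single confirmatory test (Welch's t-test). Anything else run
    afterwards gets labelled exploratory, not confirmatory."""
    return stats.ttest_ind(group_a, group_b, equal_var=False)
```

The point is not the code but the timestamp: once the plan exists, ‘deciding which test is best’ after the fact stops being an option.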
I freely admit that I was educated in a pre-open science environment. Statistics classes then (and I fear they haven’t changed much) involved learning straightforward rules for traversing the epistemological tundra between hypothesis and statistical test. It was often suggested that we decide which tests to use before running them, but statistical packages present so many options that the temptation is to try all of them. Just to see. And then decide which one is ‘best’.
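It is worth seeing what ‘just to see’ actually costs. A quick simulation, a sketch rather than anything from my own work: two groups drawn from the same distribution, so any significant result is a false positive, analysed once with a single pre-specified test and once by keeping the smallest p value from three tests.

```python
# Two groups from the SAME distribution: every 'significant' result is a
# false positive. Picking the smallest p value from three tests inflates
# the nominal 5% error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
one_test, best_of_three = 0, 0
n_sims = 5_000

for _ in range(n_sims):
    a, b = rng.normal(size=30), rng.normal(size=30)
    p1 = stats.ttest_ind(a, b).pvalue                              # Student's t
    p2 = stats.ttest_ind(a, b, equal_var=False).pvalue             # Welch's t
    p3 = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue  # Mann-Whitney U
    one_test += p1 < 0.05
    best_of_three += min(p1, p2, p3) < 0.05

print(f"one pre-specified test: {one_test / n_sims:.3f}")      # about 0.05
print(f"best of three tests:    {best_of_three / n_sims:.3f}")  # above 0.05
```

Three closely related tests inflate the error rate only modestly; add a few transformations and outlier rules, as one does, and it climbs much faster.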
So this love affair threatened to be over before it had properly begun. I’d had a taste, and it was delightful, but the fear of rejection, or even worse of not being noticed, weighed heavily, and the lifelong habits of freewheeling through the contents of Tabachnick and Fidell until I found just the right tool to get just the right results were hard to shake off.
And then I read a few things which stuck in my mind. I read Jacob Cohen’s seminal work on power analysis, John Ioannidis’ explanation of Why Most Published Research Findings Are False, and the magnificent statement from the American Statistical Association (who definitely know what they are talking about) on why unplanned and non-transparent reporting of statistical tests (and in particular p values) makes many findings ‘uninterpretable.’
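Cohen’s point lands harder if you compute it yourself. A one-liner with statsmodels, assuming a standard two-group comparison (my numbers, not Cohen’s):

```python
# How many participants per group does an independent-samples t-test need
# to detect a 'medium' effect (Cohen's d = 0.5) at alpha = .05 with 80% power?
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n))  # 64 per group
```

Sixty-four per group to reliably detect a medium effect; it is sobering to compare that with the sample sizes in one’s own back catalogue.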
Which is when love reared its head again. Well, sort of.
Once upon a time I published a paper about people having emotional attachments to digital characters in a video game. It was a small study, fun to carry out, and was cited by more people than I generally get cited by, which is a Good Thing. That was in 2012. Skip forward a dozen years, and the sweet innocent relationships of video games past have developed into complex, branching, polyamorous, non-binary and quite magnificent side events embedded in the normal video game tropes of killing things, flying things, amassing loot, saving universes and fulfilling prophecies. If you’re interested in the kind of things digital characters get up to in the 2020s, there’s a compilation of encounters from one game on YouTube that is over 2 hours long and very much NSFW.
So, I developed my survey (with a big thank you to a generous internship paid for by the university) and lodged it on OSF. I obtained ethical approval. With a finger hovering over the button which would launch my survey out into an expectant world, I felt the excitement of data collection, analysis, discovery, publication.
And then the whisper. Pre-registration. It reminded me I preach but don’t practice. I’ve got the other badges, but I’m missing the big one. Can I actually make these decisions before rather than after I collect the data? There are some truly intimidating and brilliant examples of pre-registered studies out there. But then there are plenty which are not [link deleted]. And, as the preaching typically goes, you do eventually have to make these decisions, so why not prior to the event? And finally, listening to the whisper, remembering the wise words of others, and perhaps deciding to rid myself of my own hypocrisy, I went all in. Pre-registration, consummation.
Okay then, go judge for yourself, because in the final analysis (sic) that’s what it’s all about. The data are in, I am about to start my pre-registered analysis, and am both excited and scared about where things will go. If you do take a look at my efforts, and spot an error, please let me know, but preferably before it gets published.
Professor Mark Coulson, School of Human and Social Sciences.