Recently read: The rise and fall of peer review
Hi Get Goalside supporter, happy new year.
There was an article I read recently, 'The rise and fall of peer review', that I found interesting and tangentially related to football analytics, and this seemed a better place to talk/think aloud about it than Twitter.
The crux of the piece is that peer review of scientific work has essentially been one big experiment, one that only really took off after the Second World War, and the experiment doesn't appear to catch many errors or help good science get done. Some of you with scientific backgrounds might have opinions about this.
Football analytics has always sat far enough outside academia that nobody really expects peer-reviewed journals, although the filtering process of conference committees does act as a bit of a stamp of approval. (To be fair, one advantage sports analytics has over Actual Science is that bad ideas don't really harm anyone, and avoiding harm is, I guess, one of the aims of peer review.)
But this feels related to the moans about 'the state of public analytics' that occasionally surface on Twitter. Public analytics wasn't just a place for new ideas, but a place where ideas evolved. Even if, being cynical for a moment, that evolution included in-club people pilfering the public ideas they liked and ignoring the ones they didn't. Science where everyone can have a go and publish out in the open is itself a form of peer review, the oldest and least formalised kind, but it needs work to review and peers to review it.
Reading the article also made me feel bad for people working in clubs (or other analytics-adjacent areas) who are just a one- or two-person band. At work I've always been in a team of at least three, with people in other teams to bounce ideas off too, whether that be analytics-related discussion or simple code reviews. If you have no one, or just one other person, to check your work with, though... well, no wonder people would like a thriving set of public ideas again.
If you're interested, there's a follow-up post from the same author here, based on reactions that the initial post generated. As you may imagine, there were many. It also turns out that there's a journal for creationists (the most recent volume includes parts 2 and 3 of a paper 'What’s wrong with being wrong: a closer look at evolutionary ethics', which sounds annoyingly intriguing).
I'll end this with a passage that I enjoyed from the follow-up:
The word boring really stuck with me. Reviewing should be interesting. It should matter whether the paper’s claims turn out to be true or not, and the only reason to review it is that you care about those claims. The fact that we find it boring suggests that part of us, deep down, believes that the paper in front of us doesn’t actually deserve our attention.
PS: I'm finally getting the last of some data together for an upcoming newsletter. I had one of those annoying moments where, only after I'd scraped everything and done some analysis, I realised I needed to change what I was scraping to make sure it was a nice dataset. And then, the second time around, I forgot to actually save the data as a CSV (covid takes some blame for this). Third time's the charm.
Thanks again for your support,
Mark