Eradicating gender bias in science requires evidence-based action — which in turn requires open data on the inner workings of science.
— by Richard Walker
Despite every effort to achieve gender equality, and despite improvements in many countries and disciplines, science is still a “man’s world”:
- 40% of European science and engineering doctorates go to women.
- 33% of junior faculty are female.
- 11% of senior faculty are female.
- In the US:
  - the average grant size is US$507,000 for men and US$421,000 for women;
  - the average salary is 18% lower for women (1).
- Only 30% of paper authors are women, and the share of papers with a woman as first author is lower still (2).
- Papers with women as first authors, last authors or sole authors are less cited than papers with men in these positions.
All this raises the question – why? To tease apart the reasons, we need to understand the internal workings of faculty selection committees, funding agencies and publishers. For that we need data, which committees, agencies and publishers rarely make available. However, some organizations take a more open approach. One example is Frontiers, which not only publishes the names of editors and reviewers on the papers it endorses, but also allows outside researchers to mine its data through the Frontiers Open Science platform. In 2015, a group of colleagues and I used these data to look for gender bias in the Frontiers review process. We found none (3). But everything depends on where you look and what you are looking for. A new study by Helmer and colleagues (4), also analysing Frontiers data, finds a subtle form of bias that we were not looking for.
In our study, we showed that the outcomes of Frontiers peer reviews are independent of the gender of authors and reviewers. Women generally get worse results than men, but this has nothing to do with the gender of the reviewer. Female reviewers also tend to be more severe than male reviewers, but they are equally severe to all authors, men and women alike. And this sounds right: it is hard to imagine a 21st-century reviewer marking down a paper just because the author is a woman (or a man).
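Claims of independence like this can be checked with a standard chi-square test on a contingency table of gender versus review outcome. The sketch below shows the arithmetic; the counts are invented for illustration and are not the actual Frontiers data.

```python
# Chi-square test of independence between author gender and review outcome.
# The counts below are made up for illustration; they are NOT Frontiers data.
observed = {
    ("male", "accepted"): 480, ("male", "rejected"): 120,
    ("female", "accepted"): 230, ("female", "rejected"): 70,
}

genders = ["male", "female"]
outcomes = ["accepted", "rejected"]
total = sum(observed.values())

# Marginal totals for each gender and each outcome.
row = {g: sum(observed[(g, o)] for o in outcomes) for g in genders}
col = {o: sum(observed[(g, o)] for g in genders) for o in outcomes}

# Chi-square statistic: sum over cells of (O - E)^2 / E, where E is the
# count expected if gender and outcome were independent.
chi2 = sum(
    (observed[(g, o)] - row[g] * col[o] / total) ** 2
    / (row[g] * col[o] / total)
    for g in genders for o in outcomes
)
print(round(chi2, 3))  # → 1.334, well below the 3.84 critical value at p = 0.05
```

With one degree of freedom, a statistic this small would not reject independence – the kind of null result our 2015 study reported, though on the real data the analysis was of course more involved.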
But Helmer and colleagues didn’t look at review results. What interested them was the way editors choose reviewers for a paper – a more personal decision. They found that male editors have a general preference for male reviewers and that female editors prefer female reviewers (though not as strongly). This also sounds right. When we review papers from authors we don’t know, gender is probably not the first issue on our mind and we can focus on merit. But when we choose people to do a job – when we appoint a reviewer, or hire a new faculty member, or approve a grant – we look for people we can trust. Social scientists tell us that we tend to choose people like ourselves, often people of the same gender. They call this homophily – the attraction of like for like. Studies show that it is a completely general phenomenon that goes far beyond Frontiers. Assuming they are right, we have cause to worry. If the majority of senior scientists are men, and if these men give reviews, jobs and grants to other men, gender equality will take a long time to achieve.
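A minimal way to quantify homophily of this kind is to compare how often each editor group picks same-gender reviewers against what random draws from the reviewer pool would produce. The numbers below are invented for illustration, not the paper's estimates:

```python
# Homophily sketch: do editors pick same-gender reviewers more often than
# chance would predict? All rates below are invented for illustration.

reviewer_pool_female = 0.30  # assumed share of women in the reviewer pool

# Hypothetical fraction of female reviewers chosen by each editor group.
female_picked_by_male_editors = 0.25
female_picked_by_female_editors = 0.33

def homophily(same_gender_rate, base_rate):
    """Observed same-gender rate minus the rate expected under random choice."""
    return same_gender_rate - base_rate

male_homophily = homophily(1 - female_picked_by_male_editors,
                           1 - reviewer_pool_female)
female_homophily = homophily(female_picked_by_female_editors,
                             reviewer_pool_female)

# Both excesses are positive: each group over-picks its own gender,
# with the male excess larger, mirroring the asymmetry described above.
print(male_homophily, female_homophily)
```

A positive excess for both groups, with a larger male excess, is the pattern the study describes; the actual paper uses a more careful statistical model than this back-of-the-envelope difference.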
In fact, the Helmer study gives a rough estimate of how long it could take. Like many other studies, they found that women’s position is improving – but very slowly. Based on the trends they report (4), it will be 2027 before half of Frontiers authors are women, 2034 before women achieve parity as reviewers, and 2042 before they contribute half of our editors.
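The arithmetic behind such projections is simple linear extrapolation: given a current female share and an annual rate of change, solve for the year the share reaches 50%. The starting shares and growth rates below are invented for illustration, not the fitted trends from the paper:

```python
# Linear extrapolation to gender parity. The inputs are hypothetical,
# NOT the fitted values from Helmer et al. (2017).

def parity_year(start_year, start_share, annual_gain, target=0.5):
    """Year when a linearly growing female share reaches the target share."""
    return start_year + (target - start_share) / annual_gain

# E.g. a hypothetical 40% female share in 2017, growing one point per year:
print(round(parity_year(2017, 0.40, 0.010)))  # → 2027

# A slower-moving role starts lower and gains less per year, so parity
# arrives later – which is why editor parity lags author parity.
print(round(parity_year(2017, 0.37, 0.008)))  # → 2033
```

The fragility of such forecasts is also clear from the formula: halve the assumed annual gain and the wait to parity doubles.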
Are these predictions correct? How important is homophily in the everyday business of science? How important are other factors working to bring gender equality closer, or to delay progress? Above all, what can we do to speed things up?
Of course, there have been many suggestions: changing the wording of job adverts; special training and mentoring for women scientists; adopting double-blind peer review, where reviewers do not know the identities of authors; training reviewers to recognize their own implicit biases; using automated review-management systems to monitor potential gender bias; reprogramming these systems to include more female reviewers in the lists they present to editors. But do these measures work? Eradicating bias requires action. But if we can’t measure gender biases and the outcomes of corrective measures, we have no rational way of deciding what to do.
Helmer’s study demonstrates that data can throw new light on the mechanisms driving gender bias. Once we recognize these mechanisms we can act on them. But to progress we need more data. Far more. From publishers, hiring committees, funding agencies – everyone involved in the business of science. Data on hiring, salaries, grants, publications and citations; data on how the role of women is changing – or not changing; data for different disciplines and different countries; and data on the impact of different policy measures and review mechanisms.
Will the data come? Who knows? Will gender equality come without it? Maybe – if we wait long enough. But rapid progress requires rapid evidence-based action. And for evidence-based action we need more data on the inner workings of science. Open data can’t just be something we demand from authors. To move in the right direction, it has to become part of the vision and practices of all participants.
- H. Shen, Mind the gender gap. Nature. 495, 22 (2013).
- C. R. Sugimoto, V. Larivière, C. Ni, Y. Gingras, B. Cronin, Global gender disparities in science. Nature. 504, 211–213 (2013).
- R. Walker, B. Barros, R. Conejo, K. Neumann, M. Telefont, Personal attributes of authors and reviewers, social bias and the outcomes of peer review: a case study [version 2; referees: 2 approved]. F1000Research. 4 (2015); http://f1000r.es/5gj.
- M. Helmer, M. Schottdorf, A. Neef, D. Battaglia, Gender bias in scholarly peer review. eLife. 6, e21718 (2017).