https://debruine.github.io/code-review/
Lisa DeBruine
Research transparency and integrity benefit greatly from computationally reproducible code, and there is an increasing emphasis on learning the skills to code. However, there hasn't been as much emphasis on learning the skills to check code. People cite a lack of time, expertise, and incentives as reasons for not asking others to review their research code, but the most commonly cited reason is embarrassment about others seeing their code.
In this introductory session, I will cover the goals of code review, some key concepts, and present checklists for preparing your code to be reviewed and for reviewing others’ code. We will also do a mock review of my own code. The hope is that code check sessions can become a regular part of the M&Ms seminar series.
The process of methodically and systematically checking over code (your own or someone else's) after it has been written.
An analysis by Nuijten et al. (2016) of over 250,000 p-values reported in eight major psychology journals from 1985 to 2013 found that roughly half of the articles contained at least one p-value inconsistent with its reported test statistic and degrees of freedom, and about one in eight contained a gross inconsistency that could change the statistical conclusion.
The specific goals of any code review will depend on the stage in the research process at which it is being done, the expertise of the coder and reviewer, and the amount of time available.
In this talk, we’ll focus on pre-submission code review by colleagues.
All file references should use relative paths, not absolute paths.
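As a minimal sketch in R (the file name `data/raw.csv` is a hypothetical example), a relative path is resolved against the project's working directory, so the same script runs wherever the project folder is copied:

```r
# Absolute paths break as soon as the project moves to another machine:
# data <- read.csv("C:/Users/lisa/project/data/raw.csv")

# A relative path, built with file.path(), works from the project root
# on any machine and any operating system:
data_path <- file.path("data", "raw.csv")
print(data_path)
```

Tools such as the {here} package extend this idea by resolving paths from the project root rather than the current working directory.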
Name files and code objects so both people and computers can easily find things.
DeBruine_code-review_2022-10-19.Rmd
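One payoff of naming conventions like the one above: ISO-8601 dates sort chronologically under plain string sorting, and regular name patterns can be matched programmatically. A small illustration (the second file name is a hypothetical example):

```r
files <- c("DeBruine_code-review_2022-10-19.Rmd",
           "DeBruine_code-review_2022-09-01.Rmd")

# String sorting puts the 2022-09-01 version first, because the
# ISO date embedded in the name sorts in chronological order
sort(files)

# Consistent names are also machine-findable with a regular expression
grepl("^[A-Za-z]+_[a-z-]+_\\d{4}-\\d{2}-\\d{2}\\.Rmd$", files)
```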
An approach to programming that focuses on the creation of a document containing a mix of human-readable narrative text and machine-readable computer code.
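In R this approach is typically realised with R Markdown. A minimal sketch (all content here is hypothetical) showing narrative text interleaved with executable code chunks and inline code:

````markdown
---
title: "Trust ratings analysis"
---

Participants rated trustworthiness on a 1-7 scale.

```{r}
ratings <- read.csv("data/ratings.csv")
mean_trust <- mean(ratings$trust)
```

The mean trust rating was `r mean_trust`.
````

Knitting this document runs the code and weaves its results into the rendered report, so the narrative can never drift out of sync with the analysis.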
Huge thanks to the Code Review Guide Team (especially Hao Ye, Kaija Gahm, Andrew Stewart, Elaine Kearney, Ekaterina Pronizius, Saeed Shafiei Sabet, Clare Conry Murray)
Anyone is welcome to get involved in the project.
DeBruine LM (2002). Facial resemblance enhances trust. Proceedings of the Royal Society of London B, 269(1498): 1307-1312. doi: 10.1098/rspb.2002.2034