Lisa DeBruine

Talks

See Talks for a list of slides.


Data Simulation Keynote

simulation
Lisa DeBruine will give a keynote at the 10th annual Summer School on Statistical Methods for Linguistics and Psychology in Potsdam, Germany, on the topic of data simulation.
2026-08-25
Lisa DeBruine

Teaching Reproducible Research

teaching
Lisa DeBruine spoke about PsyTeachR at the Maths and Stats seminar at the University of Glasgow in Scotland.
2025-11-25
Lisa DeBruine

Using AI Ethically in Research: Festival of Data Science and AI

panel
ai
Lisa DeBruine spoke about Papercheck in a panel discussion about Ethics in AI at the Festival of Data Science and AI at the University of Glasgow in Scotland.
2025-10-28
Lisa DeBruine

Research Literature Demonstration: Festival of Data Science and AI

workshop
metacheck
Lisa DeBruine gave a Papercheck workshop at the Festival of Data Science and AI at the University of Glasgow in Scotland.
2025-10-28
Lisa DeBruine

Glasgow Seminar on Papercheck

metacheck
Lisa DeBruine presented Papercheck in a talk for the University of Glasgow School of Psychology & Neuroscience summer seminar series.
2025-07-25
Lisa DeBruine

MetaScience Unconference

workshop
metacheck
Lisa DeBruine gave an impromptu Papercheck workshop in the unconference section of the MetaScience conference at UCL in London.
2025-07-02
Lisa DeBruine

Data simulation with {faux} and data simulation for mixed designs

R
simulation
teaching

Being able to simulate data allows you to prep analysis scripts for pre-registration, calculate power and sensitivity for analyses that don’t have empirical methods, create reproducible examples when your data are too big or confidential to share, enhance your understanding of statistical concepts, and create demo data for teaching and tutorials. This workshop will cover the basics of simulation using the R package {faux}. We will simulate data with factorial designs by specifying the within and between-subjects factor structure, each cell mean and standard deviation, and correlations between cells where appropriate. This can be used to create simulated data sets to be used in preparing the analysis code for pre-registrations or registered reports. We will also create data sets for simulation-based power analyses.
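The workflow described above can be sketched with faux's sim_design() function. This is a minimal illustration only: the factor names, cell means, and correlation below are invented for the example, and the cell ordering is assumed.

```r
# Minimal sketch of simulating a 2x2 mixed design with {faux}
# (factor names, means, and correlation are invented for illustration)
library(faux)

set.seed(8675309)

dat <- sim_design(
  within  = list(time = c("pre", "post")),        # within-subjects factor
  between = list(group = c("control", "treat")),  # between-subjects factor
  n  = 50,                # participants per between-subjects cell
  mu = c(0, 0, 0, 0.5),   # cell means (assumed order: group x time)
  sd = 1,                 # cell standard deviation
  r  = 0.5,               # correlation between within-subject cells
  long = TRUE,            # return data in long format
  plot = FALSE
)

head(dat)  # one row per participant per time point
```

A data frame like this can then be fed through the planned analysis script before any real data exist.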

2024-02-01

Research software engineering with Julia: basics, visualization, and statistics

simulation
julia

The development of research software has become an important part of research projects in many areas of science and engineering. At the same time, increasing computational power in the area of high performance computing has made computationally challenging statistical tasks feasible and highly desirable in many application fields.

In this week-long summer school, we will therefore address these different aspects and familiarize you with the most essential paradigms of software development, which support the design of efficient, user-friendly, and sustainable software. In particular, we will focus on the scientific programming language Julia.

The summer school is organized around keynote presentations by invited Julia experts and many hands-on tutorials. First, a gentle introduction including packaging, testing, virtualization, interaction, and visualization will supply you with the essential skills you need to use Julia in your research. Afterwards, we build on these skills to implement computationally expensive statistical methods. In particular, we will focus on methods for regression and resampling using bootstrap and permutations. That is, methods addressing two of the most common challenges in statistics: estimation of the relationship between variables of interest and the quantification of uncertainty. You are invited to bring your own problem to apply the skills you learn in this summer school.

2023-10-09

Ethics and Open Research

open research
ethics
2023-10-06

Replicability and Generalisability in Face Research

faces
coding
replication
methods

In this talk, I will discuss several initiatives to increase the replicability and generalisability of research on faces, with a special focus on big team science efforts, such as the Psychological Science Accelerator and ManyFaces. I will also make an argument for reproducible stimulus construction and introduce webmorphR, an R package for reproducibly scripting face stimulus creation. Additionally, I will explain how a common methodology in face research, the composite method, produces very high false positive rates, and explain alternatives to this, including the use of mixed effects models for analysing individual face ratings.

2023-09-21

Plenary: Everything is cool when you’re part of a team

PSA
team science

The “replication crisis” has led to a call for initiatives to increase the replicability of psychological science, such as data and code sharing, pre-registration, registered reports, and reproducible workflows. Similarly, researchers have questioned the extent to which studies of WEIRD populations (Western, Educated, Industrialised, Rich, and Democratic) generalise to the majority of people in the rest of the world. Here, I will discuss how large-scale collaborations can improve both replicability and generalisability, with a focus on the Psychological Science Accelerator, a globally distributed network of more than 1300 researchers from more than 70 countries across all six populated continents.

2023-07-23

Creating Open Source Textbooks

R
quarto
teaching

One great way to contribute to open research is by creating your own open-source textbook. In this workshop, we’ll introduce you to quarto, an open-source scientific and technical publishing system that you can use to create dynamic online textbooks. While quarto integrates very easily with R and python to create code-focused textbooks, you can also use it to publish any type of text.

This workshop will use an in-development R package https://github.com/debruine/booktem to make setting up a textbook very straightforward. You will just need to install R, RStudio, and quarto on your machine (or use the free service posit.cloud) to participate; no other experience with R is needed. By the end of the workshop, you will have the bones of a textbook, with chapters, appendices, and linked references. If you have a GitHub or GitLab account, you will also be able to put this online. Additionally, we will discuss getting credit for your textbook by registering a DOI with Zenodo and how to set up a textbook in languages other than English.

2023-06-24

Fake It Until You Make It: How and why to simulate research data

R
simulation
teaching

Being able to simulate data allows you to prep analysis scripts for pre-registration, calculate power and sensitivity for analyses that don’t have empirical methods, create reproducible examples when your data are too big or confidential to share, enhance your understanding of statistical concepts, and create demo data for teaching and tutorials. This workshop will cover the basics of simulation using the R package {faux}. We will simulate data with factorial designs by specifying the within and between-subjects factor structure, each cell mean and standard deviation, and correlations between cells where appropriate. This can be used to create simulated data sets to be used in preparing the analysis code for pre-registrations or registered reports. We will also create data sets for simulation-based power analyses.

2023-06-01

STORK: Why Code?

R
simulation
teaching

Research transparency and integrity benefit greatly from computationally reproducible code, but the barriers to learning the required skills can seem overwhelming. In this talk, I will summarise the benefits of using code to process and analyse data, give some practical tips for developing your skills, demonstrate how data simulation can improve your research, and discuss the benefits of code review.

2023-04-26

Bath R Launch: Why Code?

R
simulation
teaching

Research transparency and integrity benefit greatly from computationally reproducible code, but the barriers to learning the required skills can seem overwhelming. In this talk, I will summarise the benefits of using code to process and analyse data, give some practical tips for developing your skills, demonstrate how data simulation can improve your research, and discuss the benefits of code review.

2023-03-30

Code Review

R
simulation
teaching
code check

Sharing analyses using computationally reproducible code is increasingly common, with more journals encouraging or even requiring this level of transparency. However, this increased focus on writing code hasn’t been matched by an increased focus on assessing code, either informally within research teams or formally during the review process. A recent survey suggests that people don’t arrange for their code to be reviewed because there is a lack of time, it’s not seen as a priority, there are not enough colleagues with the appropriate expertise, or they are embarrassed for others to see their code. In this talk, I’ll summarise the work of a group of researchers developing a code review guide for beginners. I’ll cover the goals of code check, explain some key concepts, and present checklists for both code reviewers and code writers to get their code in shape for review.

2022-11-21

ManyFaces: A big team science approach to face perception and recognition

team science
faces

Here we introduce ManyFaces, a recently formed big team science group for face perception and face recognition research. This symposium will introduce the scope and aims of ManyFaces and highlight the work of two of the working groups within ManyFaces: stimulus meta-database and stimulus collection.

Broadly, the aim of ManyFaces is to improve, diversify, and crowdsource key aspects of face research, including perception and recognition. This involves, for example, the collection and use of face stimuli; sharing existing stimulus sets; standardising stimulus collection procedures; and organising stimulus collection across multiple labs to obtain larger and more diverse face stimulus sets. ManyFaces also aims to crowdsource data collection across our members’ labs to test key research questions in face perception and recognition, enabling larger-scale designs and more diverse participant samples and generalisable findings. Finally, we aim to organise training workshops for key methods (e.g., morphing) and analyses (e.g., mixed effects models) used in face research.

The stimulus meta-database working group has compiled a guide to face stimulus meta-databases and resource lists. Various researchers have created lists or meta-databases documenting the broad variety of face stimulus sets that are available for research use. However, these lists vary in how comprehensive they are and in the type of information they provide about each stimulus set. Our guide therefore provides an overview of the most useful of these lists, noting key information such as the kinds of stimuli included in each list, the information provided about each stimulus set, the user friendliness of the list, and the degree of overlap among lists. This guide should aid researchers in finding the most appropriate stimuli for their research and is now publicly available on the Open Science Framework (https://osf.io/mbqt3/). This working group is also currently surveying ManyFaces members about any face stimulus sets they have and are willing to share directly with other researchers, with the aim of compiling a guide to stimulus sets that cannot be found via existing lists and databases.

The stimulus collection working group is working toward the goals of understanding what kinds of stimuli face researchers need to address their research questions and how to acquire these stimuli in a standardised and reproducible manner. This will involve surveying members about their stimulus needs and the stimulus collection equipment they have, developing guides to stimulus collection (e.g., photography/video setup, ethics templates/consent forms), and eventually organising the collection, storage, and distribution of stimuli.

Altogether, ManyFaces is in its beginning stages but is taking steps toward more reproducible, generalisable, and inclusive face research practices.

2022-10-27

Why Code?

R
simulation
teaching

Research transparency and integrity benefit greatly from computationally reproducible code, but the barriers to learning the required skills can seem overwhelming. In this talk, I will summarise the benefits of using code to process and analyse data, give some practical tips for developing your skills, demonstrate how data simulation can improve your research, and discuss the benefits of code review.

2022-09-28

How to set up a big team science project

team science

Starting a big team science project requires a number of tools and resources for organising communication and information. Your project is likely to need a website, social media presence, a way for members to sign up, a discussion forum, email lists, file collaboration, and file storage. It will also need a core team to work on the organisational infrastructure and agree on processes for conduct and decision making. In this workshop, we will discuss the needs that a new project has and how to meet those needs with little or no budget and limited volunteer time.

2022-09-23

Data Simulation Workshops

simulation
R

This workshop will cover power analyses across a wide range of statistical tests, including linear mixed models.

2022-04-27

Everything is cool when you’re part of a team

PSA
team science

The “replication crisis” has led to a call for initiatives to increase the replicability of psychological science, such as data and code sharing, pre-registration, registered reports, and reproducible workflows. Similarly, researchers have questioned the extent to which studies of WEIRD populations (Western, Educated, Industrialised, Rich, and Democratic) generalise to the majority of people in the rest of the world. Here, I will discuss how large-scale collaborations can improve both replicability and generalisability, with a focus on the Psychological Science Accelerator, a globally distributed network of more than 1300 researchers from more than 70 countries across all six populated continents.

2021-10-19


Open Science Practices in the School of Psychology and Neuroscience

open science

This session will outline some open science practices in the University of Glasgow’s School of Psychology and Neuroscience. For example, our staff and students stay on top of new developments through regular seminars and workshops sponsored by the Methods and MetaScience network. Our undergraduate and postgraduate methods curricula were overhauled to embed open science practices and the skills needed to achieve them. After a brief outline, we will open the floor for discussion around the benefits and challenges of increasing open science practices.

2021-10-19

An ounce of prevention is worth a pound of cure

research
teaching
project planning

This seminar will highlight a few things you can do during the planning stage of your project to help you develop healthy research habits, such as thinking about formulating suitable research questions (qual and quant), including positive and negative controls to test whether the collected data make sense, justifying sample size and other design decisions, or simulating data to create a concrete data cleaning and analysis plan. The talk is aimed at undergraduate and MSc students just starting their dissertations, but is applicable to anyone starting a qualitative or quantitative research project.

2021-10-13

Mixed effects models for designs with randomly sampled stimuli

mixed effects
simulation
R

Experimental designs that sample both subjects and stimuli from a larger population need to account for random effects of both subjects and stimuli using mixed-effects models. However, much of this research is analyzed using analysis of variance on aggregated responses because researchers are not confident specifying and interpreting mixed-effects models. I will explain how to simulate data with random-effects structure (using the faux R package) and analyze the data using linear mixed-effects regression (with the lme4 R package), with a focus on interpreting the output in light of the simulated parameters.
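The simulate-then-model workflow described above can be sketched in base R with lme4. This is a minimal illustration with invented parameter values (not faux's own helpers): crossed random intercepts for subjects and stimuli, with stimuli nested in condition.

```r
# Minimal base-R sketch: simulate crossed random effects of subjects
# and stimuli, then fit with lme4 (all parameter values invented)
library(lme4)

set.seed(90210)
n_subj <- 50
n_stim <- 20
subj_int <- rnorm(n_subj, 0, 1.0)   # random intercepts for subjects
stim_int <- rnorm(n_stim, 0, 0.8)   # random intercepts for stimuli

dat <- expand.grid(subj = 1:n_subj, stim = 1:n_stim)
dat$cond <- ifelse(dat$stim <= n_stim / 2, -0.5, 0.5)  # deviation-coded
dat$y <- 2 +                        # grand mean
  0.5 * dat$cond +                  # fixed effect of condition
  subj_int[dat$subj] +
  stim_int[dat$stim] +
  rnorm(nrow(dat), 0, 1)            # trial-level error

# stimuli are nested in condition, so no by-stimulus slope for cond
mod <- lmer(y ~ cond + (1 | subj) + (1 | stim), data = dat)
summary(mod)
```

Comparing the fitted fixed effect and variance components against the values used to generate the data is the core of interpreting the output in light of the simulated parameters.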

2021-10-05

Large-scale collaborative science (with Lisa DeBruine)

team science
psyTeachR
R

In this episode, we chat with Lisa DeBruine (University of Glasgow) about her experience with large-scale collaborative science and how her psychology department made the switch from SPSS to R.

2019-02-17