Emma James, University of York (@emljames)
Research Fellow (Postdoc) at SLAM Lab
Topic: A (half) marathon not a sprint: Preparing to conduct multi-session memory studies online
Abstract: Memory researchers often want to assess how new information is consolidated or forgotten over prolonged periods of time. Running multi-session studies online requires significant investment from participants and researchers alike, and can be a costly endeavour if things go wrong. This talk will provide tips for assessing and maintaining data quality during learning and retrieval tasks. We will consider the administrative load of running multi-session studies, and ways to ease the stress of tracking eligible participants. While online studies can be an extraordinarily rapid form of data collection, appropriate pacing during multi-session experiments is important for both minimising data loss and maintaining sanity.
Gwen Brekelmans, UCL (@GwenBrekel)
Teaching fellow/RA/Lab manager at the Language Learning Lab (PI: Dr Elizabeth Wonnacott)
Topic: Running language learning experiments with adults and children
Abstract: Learning to speak a second or foreign language is often difficult and requires a lot of practice. Research in second language learning has long used laboratory-based experiments to investigate how best to get people to learn various aspects of the second language, such as its vocabulary, speech sounds, and grammatical structure. More recently, online methods have become more prevalent in this research. In this talk, I will discuss my work on language learning with adult and child participants, focussing on experiments in which participants learnt to perceive novel sounds from a second language. I will discuss two different ways of using online methods, i.e. fully online methods for adult participants versus online methods with offline supervision for primary school children, and will talk about the different considerations and challenges that come with these approaches. I will also go into the challenges of running multi-session experiments.
Miles Tufft, UCL
PhD Student at the Eyethink Lab (PI: Prof Daniel Richardson)
Topic: All about generating social context online
Abstract: In our lab we study how social contexts structure cognition. Behaviours and their associated cognitive mechanisms do not exist in a vacuum but in a world that is naturally social and rich in context. As such we are always situated within an ongoing and dynamic interplay between our actions and the contexts in which they play out. We aim to recreate social contexts in the lab while maintaining an acceptable degree of control, allowing us to systematically investigate the relationship between higher order social factors such as social hierarchies or group membership and lower level cognitive phenomena such as interference effects. In order to do this we often manipulate participants’ expectations, deceiving them into believing they are engaged in social interactions when in reality they are not. Situational believability is therefore crucial to the validity of our manipulations. Translating this into an online environment presents its own set of unique challenges that reflect the decontextualised nature of meeting online, while at the same time presenting a number of benefits in terms of efficiency and scalability. Using examples from our current experiments, I will share our learnings and the specific methods we have adopted to turn up the social dial online. In particular, I will discuss the balance that we’ve had to strike between experimental complexity and situational believability that will lead to an ongoing sense of connection while maintaining focus and engagement.
Gabe Chierchia, UCL
Topic: All about cognitive training and working with teenagers
Abstract: Our research focuses on social, cognitive and affective development during adolescence. For example, we investigate how age modulates susceptibility to social and peer influence, the trainability of different cognitive abilities, and mental health.
This typically involves comparing large groups of people of different ages, such as adolescents and young adults. In addition, these groups must also be evaluated for basic cognitive abilities such as non-verbal reasoning, which are also known to change with age and thus need to be controlled for independently of the task of interest.
This research faces several challenges: 1) copyright issues frequently prevent well-validated non-verbal reasoning tasks from being computerized, which limits their usefulness in a number of settings; 2) recruiting and testing large samples of adolescents in their schools is demanding; and 3) experiments involving social interaction must remain plausible to participants.
I will present the Matrix Reasoning Item Bank (or MaRs-IB). This is a novel item bank we developed that is now online (Gorilla open materials) and free to use for non-commercial purposes. I’ll show how the MaRs-IB is sensitive to age differences in non-verbal reasoning accuracy and how, relative to its pen-and-paper analogues, it further allows us to capture potential speed-accuracy trade-offs in reasoning. I will also discuss how online/on-site testing has greatly helped make recruitment and testing more efficient, and social experiments more plausible to participants.
Max Rollwage, UCL (@M_Rollwage)
PhD Student at the Max Planck UCL Centre for Computational Psychiatry and Ageing Research
Topic: All about metacognition
Abstract: My research focuses on metacognition, which can be described as the monitoring and controlling of our own cognitive processes, or “thinking about thinking”. In the lab, this can be measured by asking participants to make simple decisions and then rate their confidence. In this talk I will discuss how we adapted to studying metacognition online, showing you the task we used and the measures we put in place to obtain high-quality data.
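As a purely illustrative sketch of how decision-plus-confidence data can be turned into a metacognitive measure (this is not the analysis from the talk, and the function name and data are hypothetical), one simple rank-based index is the probability that a randomly chosen correct trial carries higher confidence than a randomly chosen incorrect one:

```python
# Illustrative sketch (hypothetical data): a simple rank-based index
# of metacognitive sensitivity -- the probability that a randomly
# chosen correct trial received higher confidence than a randomly
# chosen incorrect one (ties count as 0.5).
def metacognitive_auc(correct, confidence):
    """correct: list of bools; confidence: list of ratings (same length)."""
    hits = [c for ok, c in zip(correct, confidence) if ok]
    misses = [c for ok, c in zip(correct, confidence) if not ok]
    pairs = [(h, m) for h in hits for m in misses]
    if not pairs:
        return None  # need at least one correct and one incorrect trial
    score = sum(1.0 if h > m else 0.5 if h == m else 0.0 for h, m in pairs)
    return score / len(pairs)

# Confidence that perfectly tracks accuracy gives an index of 1.0;
# confidence unrelated to accuracy hovers around 0.5.
correct = [True, True, False, True, False]
confidence = [4, 3, 2, 4, 1]
auc = metacognitive_auc(correct, confidence)  # -> 1.0 for these data
```

A chance-level value of 0.5 means confidence ratings carry no information about trial accuracy, which is one reason online metacognition studies need careful data-quality checks.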
Alex Irvine, MRC Cognition and Brain Sciences Unit, Cambridge (@alexanderirvine)
Topic: All about timing accuracy
Abstract: My talk at BeOnline 2020 will be on the accuracy and consistency of visual presentation and recording of keyboard responses online. So if you are running research that involves presenting images for short durations, or looks at small differences (less than a few hundred milliseconds) in reaction times, then this talk will be of interest to you.
The preprint paper I am presenting examines four different platforms (Gorilla, jsPsych, PsychoPy/Pavlovia and lab.js) on four different web browsers and four different device categories. It is split into two parts.
In the first, we used a photodiode to detect the onset of stimuli on the screen, and a robotic actuator to press the keyboard at specific times. This allowed us to look at two key metrics: accuracy (the difference between expected and actual timing) and precision (the variability of these differences). The philosophy behind how we set up these testing rigs was to represent as closely as possible the devices that participants would have in their own homes, so we created the tasks on each platform as a normal user would, and used standard keyboards.
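To make the two metrics concrete (this is an illustration, not the paper's analysis code, and the onset values are hypothetical): accuracy is the mean of the expected-vs-measured differences and precision is their variability, e.g. their standard deviation.

```python
# Illustrative sketch: computing timing accuracy and precision from
# paired expected vs. measured stimulus-onset times (hypothetical data).
import statistics

def timing_metrics(expected_ms, measured_ms):
    """Return (accuracy, precision) for paired onset times in ms.

    accuracy  = mean difference between measured and expected onsets
    precision = standard deviation of those differences
    """
    diffs = [m - e for e, m in zip(expected_ms, measured_ms)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical photodiode measurements: stimuli requested at 0, 500,
# 1000 and 1500 ms, each detected slightly late.
expected = [0, 500, 1000, 1500]
measured = [8, 510, 1006, 1512]
acc, prec = timing_metrics(expected, measured)
# acc = 9.0 (a consistent ~9 ms lag), prec ~ 2.6 ms (low variability)
```

A large but consistent lag (high accuracy error, good precision) mostly shifts all reaction times equally, whereas poor precision adds noise to within-participant comparisons.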
In the second part of the paper, we look at the demographics of a large sample of Gorilla participants (200,000 people): where they are, what platforms were used to recruit them, and what their devices looked like. This allows us to put the timing results from the first part into context.
The results show that timing was very good for most platforms and devices, and that the most common browser-and-device combination, Chrome on a PC, showed sufficiently good timing.
If you want to see the results in full, then you should virtually attend my talk, and if you are really keen, go and find the preprint on PsyArXiv beforehand. Thanks, and hope to see you there.
Emily Breese, Open University (@Emily_Louise548)
PhD Student at the Heath Lab (PI: Dr Chris Heath)
Topic: Piloting, pitfalls and progress: developing an online cognitive assessment battery
Abstract: Careful piloting of cognitive assessment tasks before using them with study participants is an essential element of any project workflow. Piloting serves many functions, including confirming that instructions are clear, ensuring that participants' understanding of the interface and the data output are consistent, and providing important checks on a range of technical factors. Here I will discuss my experiences of piloting cognitive assessment tasks implemented in Gorilla, including the additional challenges associated with working with remote, online participants and how I have modified my approach to piloting to address these issues.
Jenni Rodd, UCL (@JenniRodd)
PI at the Word Lab
Topic: Ensuring data quality when you can’t see your participants
Abstract: Researchers are often nervous about collecting experimental data online because, unlike in lab-based experiments, they cannot meet their participants or directly observe their behaviour. I will summarise some of the issues that arise from ‘invisible’ participants, and suggest a framework that helps researchers to (i) maintain data quality and (ii) make principled decisions about when online data collection is (and is not) appropriate. I will also suggest that this principled approach to data quality control is often missing in lab-based settings, and that the lessons we have learned from running experiments online can be broadly applied to a range of experimental approaches.
Daniel Richardson, UCL (@eyethinkdcr)
Topic: Teaching Research Methods - UCL uses Gorilla to Transform Psychology Syllabus
Abstract: First-year psychology students at UCL typically took part in experiments as subjects. But serving as guinea pigs meant less time scrutinising data, and more time poring over mechanics.
“We needed to flip the teaching model on its head,” Dan says. “Rather than connecting everything in the final year, we needed to build in research right from the start.”
Shane Timmons, Economic and Social Research Institute (@_shanetimmons)
Research Officer at the Behavioural Research Unit (PI: Dr Pete Lunn)
Topic: Behavioural Public Policy Using Online Experiments
Abstract: Applying psychology and behavioural economics to policy often involves running experiments, either in the lab, online or in the field. In this talk, I'll outline how my team decides which method to use and I'll then focus on the benefits of online experiments for behavioural public policy. I'll use examples from studies we've done on consumer choice and the environment and conclude with an overview of some recent experiments we've conducted to inform the Irish government's response to the COVID-19 pandemic.
Dorothy Bishop, University of Oxford (@deevybee)
Topic: Beyond Assessment to Training
Abstract: Computerised methods have potential for training children's language skills by presenting a large number of trials with feedback in a game-like format, and adapting to the level of performance. In a previous study, we trained comprehension of a) new vocabulary and b) reversible sentences to show that children with developmental language disorder (DLD) learned at a similar rate to younger language-matched children. We are now extending this approach to online administration via the Gorilla platform. I will demonstrate our first foray into using this method for a student project conducted by Nicole Tan Li Ning, co-supervised by Adam Parker. We were able to demonstrate learning, but found no difference between blocked and interleaved presentation of vocabulary and sentence items for typically-developing children. Children enjoy the tasks, so it is possible to test large numbers to tune test presentation to optimise learning. We are currently exploring an extension to clinical groups.
Nick Hodges, Cauldron Science (@Nikoraisu)
Founder and CTO at Cauldron Science
Topic: All about tools
Abstract: By going beyond traditional screen-by-screen, stimulus-and-response, accuracy-and-reaction-time tasks, there is a whole world of interesting ways of researching human behaviour to explore. Here at Gorilla, we've been working on some new tools to explore this space, and I'm going to showcase them for the first time.
Gaia Scerif, University of Oxford (@GaiaScerif)
PI at Attention, Brain and Cognitive Development (ABCD)
Topic: Eye-tracking online - baby steps!
Abstract: In this talk, I will tell you about our journey toward webcam-based eye-tracking over the last three years, with pros, cons, and excitement ahead.
Adjmal Sarwary, MindTrace (@AdjmalSarwary)
CEO at MindTrace
Topic: All about using MindTrace with Gorilla
Abstract: Online experiments are more important than ever and researchers want to make the most of them. Online eye tracking has been "around" for a while, but everyone who has tried to use it so far has run into multiple problems. This is where MindTrace comes in. We have developed proper online eye tracking and emotion tracking. With our latest iteration, we are now also accessible on Gorilla and can be easily used by everyone. We are researchers ourselves and know the pain of designing experiments and then also having to deal with hardware. We've set things up for ease of use. When all is done, you get your data in a simple format that any software can import.
Maarten Speekenbrink, UCL (@MaartenSpeek)
Topic: All about multiplayer experiments
Abstract: Running studies online provides us access to a much wider variety of participants than running studies in the lab. It also allows us to let participants interact with people they would not normally interact with, in a convenient and completely anonymous environment. In this talk, I will describe how we conducted multiplayer experiments in our lab. I will briefly describe the technology used, as well as practical pitfalls we encountered and our solutions. I will also showcase some of the multiplayer experiments we were able to conduct, with topics ranging from climate change to social learning in economic games.
Andrew Gordon, Prolific (@Prolific)
Academic Lead at Prolific
Topic: Conducting and Optimizing Online Research with Prolific
Abstract: Recent years have seen a surge in the amount of research being run online. Some estimates suggest that up to half of all cognitive science research will soon involve online samples (Stewart, Chandler, & Paolacci, 2017). For many researchers, the current COVID-19 pandemic has hastened the need to transition to online methods of data collection, and while online sampling has clear benefits relative to in-person data collection (e.g., faster data collection; larger, more representative samples), many remain unfamiliar with this approach. Moreover, concerns persist regarding the quality of data that can be gleaned from online research methods. At Prolific our goal is to aid researchers in the collection of high-quality, trustworthy data by building the most advanced and trusted infrastructure for research on the internet. With a network of 100,000+ pre-screened survey participants across North America and Europe, and a commitment to establishing an open, thoughtful and trustworthy research space, our aim is to connect people around the world in order to facilitate robust and reliable research. This talk will provide an overview of how to get started with online research, with a specific focus on what you can do to ensure high-quality data, and how we at Prolific ensure data quality from a participant perspective. Following the talk, members of our team will be available to help answer any questions or concerns you may have about transitioning to online research.
Justin Fidler, Sona Systems (@SonaSystems)
President of Sona Systems
Topic: Running Your Own Participant Pool
Abstract: Running your own participant pool gives you the benefit of recruiting participants who fit your needs, and of having them participate in many studies over time. We'll briefly discuss the aspects to consider with this approach, including offering course credit or payment for participation, and offering studies in either an online or a lab setting.
Golnessa Masson, University of Cambridge (@JulietU_S)
Topic: Providing personalised cancer risk information online: an online randomised trial
Abstract: We will present our study, which assessed the short-term effects of providing personalised cancer risk information in an online format on cancer risk beliefs and self-reported behaviour. We randomised 1018 participants, recruited through the online platform Prolific, to either a control group receiving cancer-specific lifestyle advice or one of three intervention groups receiving their computed 10-year risk of developing one of the five most common preventable cancers, presented as a bar chart, a pictograph or a qualitative scale, alongside the same lifestyle advice. The software package Gorilla was used for randomisation and delivery of risk information as well as the pre- and post-intervention questionnaires. The primary outcome was change from baseline in computed risk relative to an individual with a recommended lifestyle (RRI) at three months. Secondary outcomes included health-related behaviours, risk perception, anxiety, worry, intention to change behaviour, and a newly defined concept, risk conviction. We will briefly present the trial findings as well as discuss the benefits and challenges of our chosen online methods.
Sandy Gould, University of Birmingham (@sjjgo)
PI in Human-Computer Interaction
Topic: Inside the empirical sausage factory (or: who your online participants are and how they work)
Abstract: It's easy to think of online studies as a magic box, with data appearing from the aether. In our research we've been trying to understand the people who make online studies possible and how they go about their work. In this talk I will reflect on what we've learnt about our participants' working contexts and the constraints placed on them by the tasks we set and the nature of the platforms we use. These shape how people behave and, in turn, the results that we obtain.
Colleen Mills-Finnerty, Veterans Administration Palo Alto & Stanford University, Psychiatry & Behavioral Science (@CMillsFinn)
PI of the Behaviour is Biology lab
Topic: Challenges and opportunities studying older adults online
Abstract: Our population is facing an inversion in which older people will soon outnumber younger ones. Understanding predictors of late-life wellbeing is especially pertinent in a time of social isolation due to COVID-19. In this talk I will discuss the unique opportunities as well as the challenges and limitations of studying older adults online. I will outline options for collecting survey, behavioral, and neuropsychological data using browser-based methods. I will also present results from several large online samples examining links between older age and better mental health, as well as preliminary results from follow-up studies. Finally, I will cover future directions for the role of online interventions in personalized medicine and improving wellbeing in later life.
Hélio Clemente Cuve, University of Oxford (@HCuve)
PhD Student at the Clinical Social Cognition lab (PI: Prof. Geoff Bird)
Topic: Studying emotion processing in autism in the lab and online
Abstract: In this talk I will share a combination of lab-based and online studies of emotion processing and autism. Using timecourse modelling and information theory, we show how the temporal characteristics of eye gaze during facial emotion processing are influenced by co-occurring alexithymia in autism. We also leverage dimension reduction and network approaches to psychopathology to elucidate the specificity and overlap of clinical symptoms relevant to emotion processing in autism.
Natasha Kirkham, Birkbeck, University of London (@natashakirkham)
Topic: Noise and development: Studying the effect of noise and chaos on home learning
Abstract: Noise is a part of all of our lives and all of our environments. In a series of experiments we have been looking at how noise impacts attention and cognitive development in infancy. We are currently re-focusing this work to investigate how learning at home in potentially high-noise, high-chaos atmospheres can affect attention in children. I will talk about some past work and the current move to online testing.
Courtenay Norbury, Sarah Griffiths, and Laura Lucas, UCL (@lilacCourt)
Topic: SCALES - benefits and challenges of moving a longitudinal study of language online
Abstract: The Surrey Communication and Language in Education Study (SCALES) has been tracking the language and cognitive skills of children from Reception to Year 8. We've moved from pen and paper, to in-person testing using an online testing suite, to (trying to) test children fully online in the wake of COVID-19. We will share how we've combined standardised tests with more experimental measures, the benefits, and some of the challenges we face in maintaining a representative sample.
Lorijn Zaadnoordijk, Trinity College Dublin (@LorijnSZ)
Postdoc at the Cusack Lab (PI: Prof. Rhodri Cusack)
Topic: Conducting preferential looking studies with infants online: Unique challenges and solutions
Abstract: Infant research frequently relies on looking behavior. In many cognitive domains, looking time paradigms are used, for example, to assess what infants anticipate, whether they can discriminate between stimuli, or whether they have learned something during the experiment. In addition, there has recently been renewed attention to how infants’ looking behavior in their environment shapes what and how they learn. Infants are not passively receiving information; they actively seek it out by attending to certain types of organisms or objects over others at different stages in development. As such, developmental scientists would ideally run large-scale, cross-sectional looking behavior and learning studies. Such studies, unfortunately, are time-consuming and costly. There has been increasing attention to reducing the costs and improving the typically small sample sizes of experimental infant studies (Bergmann et al., 2018). The logistical and practical challenges of coming to the lab, as well as the narrow age ranges tested, are partially responsible. Online data collection offers the possibility of acquiring larger samples, as participants can stay at home and participate at a convenient time. This makes large-scale studies more feasible in terms of both recruitment time and reaching a more diverse population.
Being able to conduct looking behavior studies online would therefore open exciting novel research possibilities. However, unlike in the lab, where state-of-the-art eye-trackers may be available, looking data acquired online come from a webcam, which contains more noise and does not provide gaze direction or a reference axis for removing the effects of head motion. Manual coding of these data is labor-intensive and prone to day-to-day and interrater variability. An automated approach would thus be desirable and would improve reproducibility. However, although algorithms such as Amazon Rekognition can relatively reliably indicate whether the infant is looking at the screen, detecting whether the infant is fixating on the left or right side of the screen is still problematic (Chouinard, Scott & Cusack, 2019). In addition to testing infants online, we have been exploring methods to improve the accuracy of gaze estimation algorithms in online infant experiments.
In this presentation, I will describe the unique challenges that developmental scientists face when transitioning to online testing. I will touch upon a range of topics (from ethics to analyses) while describing our experiences with online looking behavior studies and solutions we and others have explored. Finally, I will present various state-of-the-art resources and initiatives that play a key role in online testing in the developmental science community.
Shannon Ross-Sheehy, University of Tennessee (@ross_sheehy)
PI of the Infant Visual Cognition lab
Topic: Downloadable apps versus browser-based testing: Pros and cons of each approach for developmental visual cognition
Abstract: Although our visual cognition research is focused primarily on infant and adult eye tracking, our online presence for the last few years has included behavioral attention and working memory tasks for preschoolers. Our initial foray into the “at home” testing space was driven by a need to continue longitudinal work we had begun in-lab when the infants were 5 months of age. Our first approach was to painstakingly program our full psychometric tasks into a downloadable game that parents would engage in with their 4-year-olds. Although this was largely successful, parents were at times reluctant to download our apps. In addition, ever-changing security warnings meant parents were often unable to run the apps without circumventing security settings. Thus, although these “at home” tasks were on par with lab-based tasks in terms of visual precision, these benefits went unrealized if parents were too nervous to download the apps, didn’t have a suitable computer, or lacked the skills needed to circumvent security nag screens. Once it became clear that online data collection might be a reality for the foreseeable future, we decided to reprogram our tasks into Gorilla. I will briefly discuss the issues I faced during the transition (e.g., screen calibration, controlling flow from Qualtrics to Gorilla, consent, etc.), as well as what is gained by using browser-based techniques.
Nadine Lavan, UCL (@nadinelavan)
Postdoc at the Vocal Communication Lab (PI: Carolyn McGettigan)
Topic: Studying voice perception via online experiments
Abstract: Voices are a central part of everyday social interactions. Research questions about how we perceive other people’s identities and traits from their voices do not always require specialist audio equipment, and can therefore be readily addressed using online experiments. However, when using auditory stimuli, additional considerations are required to ensure good-quality data from online participants. In this talk, I will discuss a voice identity learning study I have run online to illustrate my approach to implementing audio playback, establishing participant inclusion and exclusion criteria, and other quality assurance checks (e.g. headphone checks).
Kyle Jasmin, Birkbeck, University of London
Topic: Are online auditory data robust?
Abstract: Online experiments provide a convenient way to collect responses to auditory stimuli, but are the data robust? In my talk I will discuss why I think they are, based on the similarity of online and in-lab data, reliability analysis, and correlations between online behavior and fMRI data.
Rachel Theodore, University of Connecticut (@UConnNBL)
Topic: Conducting speech perception experiments online: Some tools, successes, and challenges
Abstract: In this talk, I'll report on our experiences moving speech perception experiments from the laboratory to the online testing environment, highlighting our success in replicating many standard speech perception paradigms (e.g., categorical perception, lexically guided perceptual learning, distributional learning, talker normalization, perceptual learning of noise-vocoded speech) using online protocols. Along the way, I'll describe some challenges we've encountered (e.g., headphone compliance, low-effort participants, variability in response times) and the ways in which we've attempted to mitigate these challenges.