Host: UCL Division of Psychology and Language Sciences
Venue: Lower Ground Lecture Theatre, 26 Bedford Way, WC1H 0AP
Date: Wednesday 13th June
Time: 10am - 5pm followed by drinks, nibbles and chat
Tickets: SOLD OUT
News: Follow @JenniRodd and @EvershedJo on Twitter for updates and for news of #BeOnline2019
Programme: Click Here
Videos: Click Here
The exact programme for the day is being created to fit with the speakers' schedules.
Cauldron Science's remit is to support behavioural researchers by providing online software that liberates behavioural science from the lab so that it can be done in front-line settings such as schools, hospitals and homes. To accelerate online behavioural research we created Gorilla, a flexible and powerful platform for creating behavioural experiments and deploying them online. In this talk, I will share how online research and the tools to support it fit into the wider dialogue of the replication crisis, Open Science and impact on society.
We are at the cusp of some far-reaching technological advances that will be of tremendous benefit to research. Within a few short years we will be able to test thousands of people from any demographic with 'connected' technology every bit as good as the technology we use in our labs today — indeed better. Here I discuss on-web versus in-lab testing, predicted technological advances and issues with online research.
Researchers are often nervous about collecting experimental data online because, unlike in lab-based experiments, they cannot meet their participants and cannot directly watch their behaviour. I will summarise some of the issues that arise from 'invisible' participants, and suggest possible solutions that (i) maintain data quality and (ii) allow principled decisions about when online data collection is (and is not) appropriate. I will also suggest that this principled approach to data quality control is often missing in lab-based settings, and that the lessons we have learned from running experiments online can be broadly applied to a range of experimental approaches.
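One common way to handle 'invisible' participants is to pre-register simple exclusion rules and apply them mechanically to each session. The sketch below is illustrative only (not from the talk): the function name, trial shape, and thresholds are assumptions, chosen to show the general pattern of flagging implausibly fast responding and at-chance accuracy.

```javascript
// Hypothetical sketch of pre-registered exclusion rules for online data.
// `trials` is assumed to be an array of { rt: milliseconds, correct: boolean }.
// Thresholds are illustrative, not recommendations.
function flagParticipant(trials, { minMedianRt = 200, minAccuracy = 0.6 } = {}) {
  const rts = trials.map(t => t.rt).sort((a, b) => a - b);
  const medianRt = rts[Math.floor(rts.length / 2)];
  const accuracy = trials.filter(t => t.correct).length / trials.length;

  const flags = [];
  if (medianRt < minMedianRt) flags.push('implausibly-fast'); // likely not reading stimuli
  if (accuracy < minAccuracy) flags.push('at-chance');        // likely random responding

  return { medianRt, accuracy, exclude: flags.length > 0, flags };
}
```

The same rules can, of course, be applied to lab data, which is part of the talk's point: data quality checks should be principled everywhere, not just online.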
If you were starting an online research project today, what could you build and how would you build it? In this talk, I’ll guide you through the different components that go into running an online study and how they fit together. We’ll explore the wider landscape of browsers, apps, servers and the cloud, and cover the details of screen refresh rates, peripheral polling rates, frame synchronisation, browser timing precision, autoplay and more.
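As a concrete illustration of one of the timing details mentioned above, frame timing can be probed in any browser by collecting `requestAnimationFrame` timestamps and looking at the inter-frame intervals. The helper below is an illustrative sketch, not code from the talk; the function name and the idea of using the median interval are my assumptions.

```javascript
// Hypothetical sketch: estimating the display refresh rate from frame timestamps.
// In a browser, timestamps would be collected roughly like this:
//   const stamps = [];
//   function tick(t) { stamps.push(t); if (stamps.length < 120) requestAnimationFrame(tick); }
//   requestAnimationFrame(tick);

// Given an array of frame timestamps in milliseconds, return the median
// inter-frame interval and the implied refresh rate in Hz. The median is
// used because dropped frames produce occasional large outlier intervals.
function estimateRefreshRate(timestamps) {
  const deltas = [];
  for (let i = 1; i < timestamps.length; i++) {
    deltas.push(timestamps[i] - timestamps[i - 1]);
  }
  deltas.sort((a, b) => a - b);
  const medianIntervalMs = deltas[Math.floor(deltas.length / 2)];
  return { medianIntervalMs, hz: 1000 / medianIntervalMs };
}
```

On a typical 60 Hz display the intervals cluster near 16.7 ms, so a study that assumes a fixed frame duration should verify this per participant rather than hard-coding it.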
Prolific helps researchers around the world find participants for online studies. It is tailored for scientific research, highly scalable, and allows researchers to flexibly recruit any target demographic. In this talk, cofounder Katia will discuss the varied challenges of recruiting participants online, and introduce Prolific, which offers solutions to many of these challenges. She will also discuss some differences between running lab and online studies, and how to design online studies wisely in order to collect high-quality data and keep participants happy. Finally, she will reflect on how the above considerations inform Prolific's mission to make online data collection trustworthy.
Sea Hero Quest is an app designed to be a potential diagnostic for the early detection of Alzheimer's dementia. The app tests the user's ability to navigate a virtual boat through different environments in search of sea creatures. It records navigation performance and sends this data to scientists for analysis. The tasks were developed by scientists, the work was funded by T-Mobile, and the app was built by the game developers Glitchers Ltd. Initial analysis of the results has revealed the changing pattern of navigation skill over the lifespan, between genders, and between countries of the world. Future research with the data will seek to apply machine learning to identify patterns of navigation performance and relate these to demographics.
Over 10 years, the UCL Deafness, Cognition and Language Research Centre (DCAL) established many standardised assessments of cognition and language that were designed specifically for people who were deaf. To ensure that these assessments were easily available to practitioners working with deaf children and adults, we developed an online ‘portal’ to host the tests. We have been through, and are still going through, many iterations of the DCAL Portal to make it as user friendly as possible. In this talk, I will share with you what we have learnt along the way.
Oral language skills are a critical foundation for the development of literacy skills and for formal education more broadly. Our group is engaged in developing language interventions and language assessment tools. Online app-based assessments hold huge promise for allowing class teachers to identify children with poor language skills who would benefit from targeted language interventions. Such tools can also potentially help teachers to understand more about the nature and prevalence of oral language difficulties. I will describe a system that we are currently developing (ATLAS: Automated Testing of LAnguage Skills). Children with poor language skills require suitable educational interventions. To date there is limited evidence that computerised remediation programmes work (I will describe one that doesn't). An important issue is the extent to which effective, child-friendly tools can be developed to improve the delivery of language and other educational interventions.
The Unity game engine is a cross-platform development environment. It has great potential as a tool for collecting large amounts of behavioural data in the lab, in the classroom and remotely - e.g. over the internet and on users' own devices. I will present an early version of an iPad app which uses a series of measures to track cognitive, social and emotional development in primary school children, describing the process of developing this software in-house and the strengths of the Unity environment for creating scalable and highly usable experimental applications.
This talk will discuss some of the key insights I have gained using online methods to collect data in the area of health psychology. I will take you on a personal tour of the successes and failures of online research over the past 7 years, and some key things I wish I had known before I started. This will touch on using online testing for large sample sizes, using market research agencies to get epidemiologically valid samples, how to design an online task to ensure your data are as good as possible, and the approach I now advocate based on my experience.
Most social cognition research never tests people in actual social situations. We conduct research where people interact with each other in controlled experiments, meaning online platforms that coordinate interaction are essential for our studies. This allows us to do large-scale, replicable, open science compatible studies. However, we have encountered several challenges: development and hosting costs for custom software, shortcomings of established platforms in terms of interaction and open science compatibility, and a strong prejudice in psychiatry journals against online studies. We’ll discuss some potential solutions and some remaining frustrations.
Children vary enormously in their cognitive and linguistic abilities. It is important that our experiments capture this variability and that our theories try to explain it. This requires (a) large samples and (b) reliable metrics. Time and resources for data collection in schools are precious and limited. An ideal solution, therefore, is to develop online experiments that can be completed by children at school or in the home. In this talk I will consider our initial forays into the world of online testing with primary school children, focusing on some of the particular issues that arise when working with children, schools and families.
Our auditory environment is full of complex communicative signals, alerts, and behaviorally relevant acoustic backgrounds, all of which need to be learned de novo by each person over development and into adulthood. What is more, the auditory system has to be flexible enough to learn to represent and parse entirely new sound environments - for instance, distinguishing the myriad sources of traffic sounds that are important for one's survival while cycling through central London. The duration of such learning - over days, weeks, and months - makes it awkward, expensive, and often entirely impractical to study in traditional lab settings, so the possibility of doing extended, larger N, and more dynamic auditory learning studies online is very exciting.
We have been working on a number of such experiments (as have several other auditory groups) - I'll discuss some of our initial paradigms and findings, as well as touching on some of the ongoing challenges that arise in designing and carrying out online auditory learning and training studies.