Mind Meets Machine series

When: 14:00–14:45 (first Wednesday of the month)
Where: T5, CS Building, Konemiehentie 2, 02150 Espoo

About the Colloquium
The Mind Meets Machine series is a new monthly forum at Aalto CS that brings together researchers and students across disciplines. We focus on the intersection of mind and technology, from cognitive mechanisms to applied AI, aiming to share cutting-edge research, spark dialogue, and foster collaborations across schools and departments.



November 5, 2025 — Roland Fleming (University of Giessen, Germany)

Learning to See Stuff

Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting, and shape. We find that the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. We argue this has important implications for thinking about vision more broadly.


Bio:
Roland Fleming is Kurt Koffka Professor of Experimental Psychology at Giessen University, Executive Director of the Center for Mind, Brain and Behavior, and head of the Excellence Cluster “The Adaptive Mind”. He studied at Oxford and MIT, and did a postdoc at the Max Planck Institute for Biological Cybernetics. He has been awarded the Vision Sciences Society Young Investigator Award as well as two ERC Grants (Consolidator and Advanced). In 2022 he was elected Fellow of the Royal Society of Biology.


January 7, 2026 — Heiko Hecht & Christoph von Castell (JGU Mainz, Germany)

The best way to measure cybersickness

The Fast Motion Sickness Scale (FMS) has become a valuable tool for assessing motion sickness whenever quick, unobtrusive measures are needed. It is well anchored and provides scalable, quantitative data; however, it is indifferent to individual symptoms. If anything, it focuses on nausea and neglects many other symptoms of motion sickness, such as eye strain or dizziness. We seek to exploit the advantages of traditional symptom-based questionnaires without subscribing to their disadvantages. To do so, we propose to diversify the FMS to include the most important symptom groups. In particular, we propose the use of three separate FMS variants, each focusing separately on nausea, oculomotor discomfort, and dizziness. We report first findings with these variants of the FMS.

Visual acceleration signals for pedestrians’ time-to-collision estimation

Pedestrians estimating the time-to-collision (TTC) of approaching vehicles mainly rely on distance and speed, often neglecting acceleration. This can lead to TTC overestimation and unsafe crossing decisions when vehicles accelerate. Previous work showed that a light band around a vehicle’s windshield indicating acceleration can reduce this bias, but it remains unclear whether such signals help pedestrians distinguish between different acceleration rates. In a VR traffic scenario using the prediction-motion paradigm, thirty participants judged TTC for vehicles moving at constant speed, low (1.5 m/s²), or high acceleration (3.0 m/s²) under three conditions: no signal, a binary signal (light band on/off), and an informative signal conveying acceleration magnitude (flashing frequency). Both signals reduced TTC overestimation, but only the informative signal led to differentiated TTC estimates for low versus high acceleration. We discuss these findings in the context of traffic safety and the design of intuitive vehicle-to-pedestrian communication.


PAST EVENTS

October 1, 2025 — Antti Oulasvirta (Aalto)
Simulators Propel Computational Cognitive Models
Emerging software simulators now provide richly detailed replicas of real-world environments. Training and testing computational cognitive models within these simulators makes it possible to account for a wider range of factors, thereby enabling more relevant predictions, inferences, and interventions—while remaining grounded in theory. This capability is opening a new frontier for the application of computational cognitive models across engineering, computing, and design.


Bio: Antti Oulasvirta leads the Computational Behavior Lab (cbl.aalto.fi) at Aalto University. Prior to joining Aalto, he was a Senior Researcher at the Max Planck Institute for Informatics. He received his doctorate in Cognitive Science from the University of Helsinki in 2006. He was awarded an ERC Starting Grant (2015–2020) for research on the computational design of user interfaces and an ERC Advanced Grant (2024–2029) for studying computational models of human behavior. He is a Fellow of ELLIS (European Laboratory for Learning and Intelligent Systems). In 2025, he was invited to the SIGCHI Academy.