In the first week of February we have two distinguished speakers visiting the University of Helsinki and Aalto University: Rosalind Picard from the MIT Media Lab and Hans Gellersen from Lancaster University.
Rosalind Picard, MIT Media Lab
3rd of February, 15:15 at Pieni Juhlasali, Main Building, Fabianinkatu 33
(part of the Helsinki Distinguished Lecture Series on Future Information Technology)
Register here (free of charge).
Years ago, I set out to create technology with emotional intelligence, demonstrating the ability to sense, recognize, and respond intelligently to human emotion. At MIT, we designed studies and developed signal processing and machine learning techniques to see what affective insights could be reliably obtained. In this talk I will highlight the most surprising findings during this adventure. These include new insights about the “true smile of happiness,” discovering new ways cameras (and your smartphone, even in your handbag) can compute your bio-signals, finding electrical signals on the wrist that reveal insight into deep brain activity, and learning surprising implications of wearable sensing for autism, anxiety, sleep, memory, epilepsy, and more. What is the grand challenge we aim to solve next?
Bio: Rosalind Picard, ScD, FIEEE is founder and director of the Affective Computing Research Group at the MIT Media Laboratory, co-founder of Affectiva, providing emotional intelligence technology used by one third of the Global Fortune 100, and co-founder and Chief Scientist of Empatica, improving lives with clinical-quality wearable sensors and analytics. Picard is the author of over 250 articles in computer vision, pattern recognition, machine learning, signal processing, affective computing, and human-computer interaction. She is known internationally for her book, Affective Computing, which helped launch the field by that name. Picard holds a Bachelor's degree in Electrical Engineering (EE) from Georgia Tech and Master's and Doctorate degrees in EE and CS from MIT. Picard's inventions have twice been named to "top ten" lists, including the New York Times Magazine's Best Ideas of 2006 for the Social Cue Reader, and 2011's Popular Science Top Ten Inventions for a Mirror that Monitors Vital Signs.
Hans Gellersen, Lancaster University
February 2, 2017, at 11:00, TUAS building, room AS2, Otaniemi
Rethinking eye gaze for human-computer interaction
Eye movements are central to most of our interactions. We use our eyes to see and guide our actions and they are a natural interface that is reflective of our goals and interests. At the same time, our eyes afford fast and accurate control for directing our attention, selecting targets for interaction, and expressing intent. Even though our eyes play such a central part to interaction, we rarely think about the movement of our eyes and have limited awareness of the diverse ways in which we use our eyes — for instance, to examine visual scenes, follow movement, guide our hands, communicate non-verbally, and establish shared attention.
Bio: Hans Gellersen is Professor of Interactive Systems at Lancaster University. Hans' research interest is in sensors and devices for ubiquitous computing and human-computer interaction. He has worked on systems that blend physical and digital interaction, methods that infer context and human activity, and techniques that facilitate spontaneous interaction across devices. His recent work focuses on eye movement as a source of context information and a modality for interaction.