Research

We study computational principles of human-computer interaction [OUP book 2018]: models, simulations, and algorithms for identifying better user interfaces, and methods for interactive AI that understands users. Our research combines theories from the cognitive sciences with methods from optimization, machine learning, and electrical engineering.


Computational Design with Combinatorial Optimization

Combinatorial optimization offers a rigorous and powerful formalism for the computational design of user interfaces [IEEE Computer 2017]. Large numbers of candidate designs can be searched systematically while considering effects on users. It is a transparent ("white box") approach and ideal for design, where control and explainability are critical. However, only ten years ago, applications in user interfaces were limited to keyboards due to a lack of appropriate formulations and tools. Our research addresses three bottlenecks:

Evaluation functions are mathematical models used in optimization to assess candidate designs, for example in terms of usability, aesthetics, or learnability. The challenge is to express human factors in a computationally efficient way without compromising predictive validity. We have contributed evaluation functions for fundamental sensorimotor processes, with applications in visualizations [IEEE TVCG 2017], gesture input [CHI’15], two-thumb input [CHI’13], skim reading [CHI’16], skill transfer between input devices [DIS’14], and web layouts [DIS’16].
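
To make this concrete, below is a minimal sketch of such an evaluation function: it scores a letter-to-key assignment by expected movement time under Fitts' law. The key positions, bigram frequencies, and Fitts' law coefficients are illustrative placeholders, not values from the cited papers.

```python
# A minimal evaluation function: expected time per keystroke transition for a
# letter-to-key assignment, scored with Fitts' law over bigram frequencies.
# All constants below are assumed, illustrative values.
import math

FITTS_A, FITTS_B = 0.08, 0.12   # assumed intercept/slope (seconds)
KEY_WIDTH = 1.0                  # assumed uniform key width

def fitts_time(d, w, a=FITTS_A, b=FITTS_B):
    """Movement time by Fitts' law (Shannon formulation)."""
    return a + b * math.log2(d / w + 1)

def evaluate(assignment, positions, bigram_freq):
    """Expected transition time under a letter -> slot assignment."""
    total = 0.0
    for (c1, c2), freq in bigram_freq.items():
        x1, y1 = positions[assignment[c1]]
        x2, y2 = positions[assignment[c2]]
        d = math.hypot(x2 - x1, y2 - y1)
        total += freq * fitts_time(d, KEY_WIDTH)
    return total

# Toy example: three letters on a row of three slots.
positions = {0: (0, 0), 1: (1, 0), 2: (2, 0)}
bigram_freq = {("a", "b"): 0.5, ("b", "c"): 0.3, ("a", "c"): 0.2}
print(evaluate({"a": 0, "b": 1, "c": 2}, positions, bigram_freq))
```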

Interactive optimization refers to techniques for steering optimizers under uncertain and changing objectives in design. Present results include: Pareto front steering [UIST’13], real-time landscape visualizations, methods for quick adaptation under changing tasks [DIS’16], and explorative optimization for design under uncertainty [TOCHI 2017].
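
As a toy illustration of one ingredient of these techniques, the sketch below extracts the Pareto front from a set of candidate designs scored on two objectives; this front is the structure a designer steers along in Pareto front steering. The candidate names and objective values are invented.

```python
# A minimal sketch of Pareto front extraction: keep the candidate designs
# that are not dominated by any other candidate (both objectives minimized).
# Candidates and objective values are illustrative.

def pareto_front(candidates):
    """Return names of non-dominated candidates (minimize both objectives)."""
    front = []
    for name, (f1, f2) in candidates.items():
        dominated = any(
            (g1 <= f1 and g2 <= f2) and (g1 < f1 or g2 < f2)
            for other, (g1, g2) in candidates.items() if other != name
        )
        if not dominated:
            front.append(name)
    return front

designs = {  # (error rate, selection time in s) -- toy numbers
    "A": (0.10, 1.2),
    "B": (0.05, 1.5),
    "C": (0.12, 1.1),
    "D": (0.08, 1.6),  # dominated by B
}
print(pareto_front(designs))  # -> ['A', 'B', 'C']
```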

Exact methods follow a structured (non-random) search approach that guarantees finding the optimal solution and can bound the quality of intermediate solutions. The challenge is to formulate design problems as integer programming (IP) tasks such that efficient solutions with known properties can be exploited. So far, we have defined graphical user interface design tasks as assignment, packing, and covering problems. We have improved the efficiency of keyboard optimization [UIST’14] and expanded the applications of IP to menu systems [submitted], distributed UIs [CHI’18], interactive layouts [in progress], and functionality selection [TOCHI 2017].
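
The sketch below conveys the flavor of an exact method on a deliberately simplified case: assigning commands to menu slots as a linear assignment problem, solved to optimality with the Hungarian algorithm. Keyboard optimization with bigram interactions is a quadratic assignment problem and requires a full IP solver; the frequencies and slot costs here are invented.

```python
# A minimal exact assignment: place commands into menu slots so that
# frequency-weighted access time is minimized. linear_sum_assignment solves
# this to guaranteed optimality. All numbers are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

commands = ["open", "save", "close", "print"]
freq = np.array([0.4, 0.3, 0.2, 0.1])        # assumed usage frequencies
slot_cost = np.array([1.0, 1.2, 1.4, 1.6])   # assumed access time per slot (s)

# cost[i, j] = expected time contributed by placing command i in slot j
cost = np.outer(freq, slot_cost)
rows, cols = linear_sum_assignment(cost)      # exact optimum
for i, j in zip(rows, cols):
    print(f"{commands[i]} -> slot {j}")
print("expected cost:", cost[rows, cols].sum())
```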


Behavioral Modeling with Computational Rationality

Computational rationality refers to the study of computational principles of intelligent behavior in complex, real-world problems where organisms can do calculations only approximately. It views interactive behavior as utility maximization under the constraints posed by the task, the user's own capabilities, and the UI. Unlike good old-fashioned cognitive models, computationally rational agents can adapt their behavior to situational contingencies, which permits the study of how task-adaptive behavior emerges. The challenge is to identify and model users’ goals (rewards), limitations (bounds), task environments, and adaptive capabilities.

Reinforcement learning (RL) is the study of how organisms learn from experience. We have studied RL as a model of human-like behavior in information search [PhD Thesis 2014], typing [CHI’17], graphical UIs, and menu interaction [CHI’15]. Inverse modeling refers to principled methods for inferring parameter values of models from data. We have studied approximate Bayesian computation for inferring human visual system (HVS) parameters in RL models from log data [CHI’17].
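
The sketch below illustrates the inverse-modeling idea with rejection ABC on a toy model: a serial menu-search simulator with one latent parameter (per-item inspection time) is fitted to observed data using forward simulations only. The simulator and all numbers are illustrative stand-ins for the richer RL models used in our work.

```python
# A minimal sketch of rejection ABC: sample parameters from a prior, simulate
# behavior, and keep samples whose summary statistic matches the observations.
import numpy as np

rng = np.random.default_rng(0)

def simulate_search_times(inspect_time, n_trials, menu_len=8):
    """Toy serial search: time = items inspected * inspect_time + motor noise."""
    positions = rng.integers(1, menu_len + 1, size=n_trials)  # target rank
    noise = rng.normal(0.0, 0.05, size=n_trials)
    return positions * inspect_time + noise

# "Observed" data generated with a ground truth we pretend not to know.
observed = simulate_search_times(0.30, n_trials=200)

def abc_rejection(observed, n_samples=20000, eps=0.02):
    accepted = []
    obs_mean = observed.mean()
    for _ in range(n_samples):
        theta = rng.uniform(0.1, 0.6)            # prior over inspect_time
        sim = simulate_search_times(theta, len(observed))
        if abs(sim.mean() - obs_mean) < eps:      # summary-statistic match
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection(observed)
print(f"posterior mean ~= {posterior.mean():.3f} (ground truth 0.30)")
```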

Ability-based optimization refers to the computational identification of the best possible design for an individual using computationally rational models. The user model can predict an upper bound on usability given a person’s abilities and adaptive capabilities. Using such a model, an optimizer can make more accurate predictions of best-effort interaction. The approach has been used for adapting graphical layouts to the visual history of a user [IUI’18] and improving text entry methods for users with impairments [IEEE Pervasive Comp 2018].
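
A minimal sketch of the idea, with an invented user model (Fitts' law plus Gaussian endpoint noise): the optimizer picks the key width that minimizes predicted cost given an individual's motor noise, so users with different abilities receive different designs.

```python
# A minimal sketch of ability-based optimization: search over a design
# parameter (key width) using a per-individual user model. The model and
# all constants are illustrative, not those of the cited papers.
import math

def predicted_time(width, distance, fitts_b, motor_sd, penalty=1.0):
    """Expected cost: Fitts' law pointing time + expected miss-retry penalty."""
    move = 0.1 + fitts_b * math.log2(distance / width + 1)
    # P(miss): endpoint ~ N(0, motor_sd); miss if |endpoint| > width / 2
    z = (width / 2) / motor_sd
    p_miss = 1 - math.erf(z / math.sqrt(2))
    return move + p_miss * penalty

def optimize_width(fitts_b, motor_sd, distance=10.0):
    widths = [w / 10 for w in range(5, 41)]  # candidate widths 0.5 .. 4.0
    return min(widths, key=lambda w: predicted_time(w, distance, fitts_b, motor_sd))

# Two hypothetical users: low vs. high motor noise yield different optima.
print("steady user:", optimize_width(fitts_b=0.12, motor_sd=0.3))
print("tremor user:", optimize_width(fitts_b=0.12, motor_sd=0.9))
```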

The Bayesian brain hypothesis views interaction as a problem where the brain must learn to predict the consequences of the actions it takes. A new neuromechanic model of button-pressing [CHI’18] explains how the central nervous system can achieve predictive, self-corrective control of buttons based on limited feedback. Effects of button design on kinematics and task accuracy could be accurately simulated. The theory led to significantly improved button designs, including a technique called impact activation that improves users’ temporal accuracy [CHI’18].
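
The sketch below conveys the gist of impact activation as described above: rather than firing at a fixed displacement threshold, the button registers the press at the moment of maximum impact. The signal is synthetic and the peak detection is simplified for illustration.

```python
# A minimal, simplified sketch contrasting threshold activation with
# impact activation on a synthetic button-displacement signal.
import numpy as np

def threshold_activation(displacement, threshold=0.5):
    """Classic activation: first sample crossing a fixed depth threshold."""
    return int(np.argmax(displacement >= threshold))

def impact_activation(displacement):
    """Fire at maximum compression, where the finger's impact peaks."""
    return int(np.argmax(displacement))

t = np.linspace(0, 0.1, 100)                      # 100 ms press at 1 kHz
displacement = np.sin(np.pi * t / 0.1)            # press-release arc, peak = 1
print("threshold fires at sample", threshold_activation(displacement))
print("impact fires at sample   ", impact_activation(displacement))
```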


Mathematical Models of Human Performance

Mathematical models capture essential factors in human-computer interaction in a transparent and computationally efficient manner. They advance theory formation and offer a powerful tool for design, because they predict outcomes of interaction as a function of design, user, and task. There’s more to mathematical modeling in HCI than Fitts’ law!

Nonlinear regression models predict some variable of interest as a function of external factors describing the design, user, or task. We have contributed models of temporal pointing [CHI’16], moving-target interception [CHI’18], gesture input [CHI’15], the functional area of thumb-based input [CHI’14], menu interaction [CHI’14], just-noticeable differences in input [CHI’14], and tactile guidance in visual tasks [UIST’12], as well as automated modeling from data [CHI’14].
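
As a small worked example of this kind of modeling, the sketch below fits Fitts' law to synthetic pointing data with nonlinear least squares; the cited models swap in temporal, gestural, or layout-specific predictors in place of the Fitts term.

```python
# A minimal nonlinear regression: recover Fitts' law coefficients from
# (synthetic) movement-time data with least squares.
import numpy as np
from scipy.optimize import curve_fit

def fitts(X, a, b):
    d, w = X
    return a + b * np.log2(d / w + 1)

rng = np.random.default_rng(1)
d = rng.uniform(2, 30, 200)                   # target distances
w = rng.uniform(0.5, 4, 200)                  # target widths
mt = fitts((d, w), 0.1, 0.15) + rng.normal(0, 0.02, 200)  # synthetic trials

(a_hat, b_hat), _ = curve_fit(fitts, (d, w), mt)
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")    # recovers ~0.10, ~0.15
```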


Biomechanical Simulation

Biomechanics is the study of mechanics in human movement, including velocities and angles of limb segments, forces and moments at joints, and muscle activations. Recently, biomechanical simulation software with detailed human body models has become available. In motion-capture-based biomechanical simulation, observed human motion is explained in terms of the motion of an anatomically correct but generalized full-body model. This method is unobtrusive and offers a valuable source of information for interactions where fatigue, stress, and injuries are possible. Our research has contributed to the validation of biomechanical simulations [CHI’14], comparison of common input methods [CHI’15], and computational methods for summarizing biomechanical data [TOCHI 2015] and solving design problems interactively using visualizations [VIS’14].
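
For a flavor of what summarizing biomechanical data can mean in code, the sketch below reduces a joint-angle time series to a few indicator statistics that could be compared across input methods; the signal and the choice of statistics are illustrative, not the method of [TOCHI 2015].

```python
# A minimal sketch of summarizing a biomechanical time series: reduce a
# joint-angle trace to range of motion, peak angular velocity, and mean angle.
import numpy as np

def summarize_joint(angle_deg, hz):
    """Summary statistics for one joint-angle trace sampled at `hz` Hz."""
    velocity = np.gradient(angle_deg) * hz        # deg/s
    return {
        "range_of_motion_deg": float(angle_deg.max() - angle_deg.min()),
        "peak_velocity_deg_s": float(np.abs(velocity).max()),
        "mean_angle_deg": float(angle_deg.mean()),
    }

t = np.linspace(0, 5, 500)                        # 5 s at 100 Hz
elbow = 30 + 20 * np.sin(2 * np.pi * 0.8 * t)     # synthetic elbow flexion
print(summarize_joint(elbow, hz=100))
```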


Control Theory and Dynamics

Control theory offers a rich formalism for understanding and simulating interaction techniques via inputs, outputs, feedback, and system states. Interaction is modeled as a system that aims to drive a control signal to a desired level (the reference) and updates its behavior according to feedback. Control theory thereby captures the closed-loop case in HCI, where the system state is fed back to the input. See [TOCHI 2017] for a comparison of control models of pointing.
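
The sketch below simulates the simplest such closed-loop model, a discrete-time first-order lag in the spirit of the models compared in [TOCHI 2017]: on each step the controller corrects a fixed fraction of the remaining error between state and reference. The gain and noise values are illustrative.

```python
# A minimal closed-loop pointing simulation: the error between reference
# (target) and state (cursor) is fed back and partially corrected each step.
import numpy as np

def simulate_pointing(target, gain=0.3, noise_sd=0.0, steps=60):
    rng = np.random.default_rng(2)
    x = 0.0
    trajectory = [x]
    for _ in range(steps):
        error = target - x                 # feedback of the system state
        x += gain * error + rng.normal(0, noise_sd)
        trajectory.append(x)
    return np.array(trajectory)

traj = simulate_pointing(target=10.0)
# Exponential approach to the reference: the remaining distance shrinks by
# a factor of (1 - gain) on every step.
print(traj[:5], "...", traj[-1])
```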


Information Theory

Information theory offers a principled way to understand and measure human action as the transmission of messages. Transmission occurs when a message is selected from a set of possible messages and transferred to the receiver over a noisy channel. User interfaces can be compared according to throughput, the rate at which the user can send messages. We have expanded this concept to full-body movements [CHI’13], following the Gaussian channel interpretation of Paul Fitts but applying it to the mutual information I(x; y) between a movement sequence x and its repetition y. This allows estimating the maximum information available to an external observer in any repeated human movement. Try it out at infocapacity.hiit.fi!
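
The sketch below shows the Gaussian-channel estimate underlying this idea on a one-dimensional toy signal: mutual information per sample is estimated as -0.5 * log2(1 - rho^2) from the correlation rho between a movement and its repetition. The full method handles multivariate trajectories; the data here are synthetic.

```python
# A minimal Gaussian-channel estimate of mutual information between a
# movement sequence and its repetition, in bits per sample.
import numpy as np

def gaussian_mi_per_sample(x, y):
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log2(1 - rho**2)

rng = np.random.default_rng(3)
movement = np.cumsum(rng.normal(size=500))             # synthetic trajectory
repetition = movement + rng.normal(0, 2.0, size=500)   # imperfect repetition
print(f"{gaussian_mi_per_sample(movement, repetition):.2f} bits/sample")
```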


Datasets

We collect breakthrough datasets that permit deeper analysis, larger coverage, or higher fidelity and can thereby push the envelope in modeling, machine learning, and simulation in this area.