Videos

Interactive Layout Transfer

During the design of graphical user interfaces (GUIs), one typical objective is to ensure compliance with pertinent style guides, ongoing design practices, and design systems. However, designing compliant layouts is challenging, time-consuming, and can distract from creative thinking in design. This paper presents a method for interactive layout transfer, where a source draft layout is automatically transferred using a selected reference layout while complying with relevant guidelines.

Project paper
Project website


C-RWD: A Computational Responsive Web Design Service

This thesis proposes C-RWD, a computational responsive web design service built on the LaaS platform. Its contribution is a novel framework that automates the generation of responsive interfaces while optimizing and personalizing them for individual users. In practical applications, it can reduce the time and cost of RWD production and deliver a personalized, optimized interface to each user.

Project paper
Project website


GRIDS: Interactive Layout Design with Integer Programming

This paper proposes a novel optimisation approach for the generation of diverse grid-based layouts. Further, we present techniques for interactive diversification, enhancement, and completion of grid layouts. These capabilities are demonstrated using GRIDS, a wireframing tool that provides designers with real-time layout suggestions.

Project paper
Project website


Layout as a Service

Here we present LaaS, a service platform for self-optimizing web layouts that improves their usability at individual, group, and population levels. No hand-coded rules or templates are needed, as LaaS uses combinatorial optimization to generate web layouts for a stated design objective.

Project paper
Project website


Understanding Visual Saliency in Mobile User Interfaces

Here we present findings from a controlled study with 30 participants and 193 mobile UIs. The results speak to the role of expectations in guiding where users look. We also release the first annotated dataset for investigating visual saliency in mobile UIs.

Project paper
Project website


SemanticCollage: Enriching Digital Mood Board Design with Semantic Labels

SemanticCollage is a digital mood board tool that attaches semantic labels to images by applying a state-of-the-art semantic labeling algorithm. A structured observation with 12 professional designers demonstrated how semantic labels help designers successfully guide image search and find relevant words that articulate their abstract, visual ideas.

Project paper
Project website


Human Strategic Steering Improves Performance of Interactive Optimization

A central concern in an interactive intelligent system is optimizing its actions to be maximally helpful to its human user. In recommender systems, the action is choosing what to recommend, and the optimization task is to recommend items the user prefers. Our study shows that users who understand how the optimization works strategically provide biased answers, which results in the algorithm finding the optimum significantly faster. Our work highlights that next-generation intelligent systems will need user models capable of supporting users who strategically steer systems to pursue their goals.

Project paper
Project website


Optimal Sensor Position for a Computer Mouse

This paper first discusses the mechanisms via which sensor position affects mouse movement and reports results from a pointing-task study in which the sensor position was systematically varied. Based on these findings, variable-sensor-position mice are presented, with a demonstration that high accuracy can be achieved with two static optical sensors. A virtual sensor model is described that allows software-side repositioning of the sensor.
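
As a small illustration of the virtual sensor idea, note that rigid-body motion produces an affine displacement field, so the displacement at any point on the line between two physical sensors is a linear combination of their readings. The sketch below uses hypothetical sensor readings and is not the paper's model.

```python
# A minimal sketch (not the paper's implementation): interpolating two physical
# sensors' displacement readings to obtain the reading of a "virtual" sensor
# placed anywhere along the axis between them. The readings are hypothetical.
import numpy as np

d_front = np.array([3.0, 1.0])    # displacement counted by the front sensor (counts)
d_rear  = np.array([3.0, -0.4])   # displacement counted by the rear sensor

def virtual_sensor(fraction_from_rear: float) -> np.ndarray:
    """Displacement reported by a virtual sensor along the front-rear axis."""
    return d_rear + fraction_from_rear * (d_front - d_rear)

print(virtual_sensor(0.7))        # a software-repositioned sensor 70% toward the front
```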

Project paper
Project website


Swap: A Replacement-based Text Revision Technique for Mobile Devices

Here we present Swap, a novel replacement-based technique that facilitates text revision on mobile devices. Study results showed that Swap reduced effort in caret control and repetitive backspace pressing during text revision. Most participants preferred the replacement-based technique over backspace and caret control, and commented that it is easy to learn and makes text revision rapid and intuitive.

Project paper


Button Simulation and Design via FDVV Models

In this paper, we extend force-displacement (FD) modeling of buttons to include vibration (V) and velocity-dependence (V) characteristics. The resulting FDVV models better capture the tactile characteristics of buttons, increasing the range of buttons that can be simulated and the perceived realism relative to FD models. This end-to-end approach enables the analysis, prototyping, and optimization of buttons, and supports exploring designs that would be hard to implement mechanically.

Project paper
Project website


How We Type: Eye and Finger Movement Strategies in Mobile Typing

This paper presents new findings from a transcription task on mobile touchscreen devices. Movement strategies were found to emerge in response to the sharing of visual attention: attention is needed both for guiding finger movements and for detecting typing errors. When typing with two fingers, users make more errors but manage to detect and correct them more quickly. We release an extensive dataset on everyday typing on smartphones.

Project paper
Project website


Self-Adapting Web Menus

This paper presents SAM, a modular and extensible JavaScript framework for self-adapting menus on webpages. SAM gives control over two elementary aspects of adapting web menus: (1) the target policy, which assigns adaptation scores to menu items, and (2) the adaptation style, which specifies how the scored items are adapted on display. Researchers can use SAM to experiment with adaptation policies and styles and to benchmark techniques in an ecological setting with real webpages. Practitioners can make websites self-adapting, and end-users can dynamically personalise typically static web menus.
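
To make the two aspects concrete, here is a conceptual sketch in Python (SAM itself is a JavaScript framework, and this is not its actual API). The menu items, click counts, policy, and style below are hypothetical.

```python
# A conceptual sketch of a target policy and an adaptation style, not SAM's API.
menu = ["Home", "Products", "Pricing", "Blog", "Contact"]
clicks = {"Home": 120, "Products": 310, "Pricing": 95, "Blog": 40, "Contact": 22}

# (1) Target policy: assign an adaptation score to every menu item.
def frequency_policy(items):
    return {item: clicks.get(item, 0) for item in items}

# (2) Adaptation style: specify how the scored items are shown on display.
def highlight_top(items, scores, k=2):
    top = set(sorted(items, key=scores.get, reverse=True)[:k])
    return [f"*{item}*" if item in top else item for item in items]

scores = frequency_policy(menu)
print(highlight_top(menu, scores))
# -> ['*Home*', '*Products*', 'Pricing', 'Blog', 'Contact']
```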

Project paper
Project website


May AI?: Design Ideation with Cooperative Contextual Bandits

This paper presents cooperative contextual bandits (CCB) as a machine-learning method for interactive ideation support in design. We developed a CCB for an interactive design ideation tool that suggests inspirational and situationally relevant materials (“may AI?”), explores and exploits inspirational materials with the designer, and explains its suggestions to aid reflection.
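
For intuition about the explore-exploit loop behind such suggestions, here is a heavily simplified sketch using a plain epsilon-greedy bandit rather than the cooperative contextual bandit of the paper; the inspiration categories and the simulated designer feedback are hypothetical.

```python
# A toy epsilon-greedy bandit over inspiration categories (not the paper's CCB).
import random

categories = ["architecture", "nature", "typography", "product shots"]
value = {c: 0.0 for c in categories}   # running estimate of each category's usefulness
count = {c: 0 for c in categories}
epsilon = 0.2                          # probability of exploring a random category

def simulated_feedback(category):
    # Stand-in for the designer accepting or rejecting a suggestion.
    return random.random() + (0.5 if category == "typography" else 0.0)

for _ in range(200):
    if random.random() < epsilon:
        choice = random.choice(categories)            # explore
    else:
        choice = max(categories, key=value.get)       # exploit the current best
    reward = simulated_feedback(choice)
    count[choice] += 1
    value[choice] += (reward - value[choice]) / count[choice]   # incremental mean

print("most useful category so far:", max(categories, key=value.get))
```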

Project paper
Project website


Forgetting of passwords: ecological theory and data

This paper contributes new data and a set of analyses for password recall, building on the ecological theory of memory and forgetting. The theory and data shed new light on password management, account usage, password security and memorability.

Project paper
Project website


One Button

This paper presents Button Simulator, a low-cost 3D-printed physical button capable of displaying any force-displacement curve. By reading the force-displacement curves of existing push-buttons, we can easily replicate the force characteristics of any button on the Button Simulator. One can even go beyond existing buttons and design nonexistent ones in the form of arbitrary force-displacement curves, then use Button Simulator to render the sensation. The project will be open-sourced and the implementation details released.
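
The core rendering idea can be sketched in a few lines: interpolate a recorded force-displacement curve at the currently sensed displacement to obtain the force to render. The curve data below are hypothetical, and this is an illustration rather than the Button Simulator firmware.

```python
# A minimal sketch of force-displacement curve playback (hypothetical curve data).
import numpy as np

# Recorded FD curve of some push-button: displacement (mm) -> force (N).
displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
force_n         = np.array([0.0, 0.8, 1.6, 1.1, 1.4, 2.5])   # note the tactile "drop"

def force_to_render(sensed_displacement_mm):
    """Force the actuator should output at the sensed plunger displacement."""
    return float(np.interp(sensed_displacement_mm, displacement_mm, force_n))

print(force_to_render(1.2))   # force command just past the tactile bump
```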

Project paper
Project website


Selection-based Text Entry in Virtual Reality

In this paper, we study text entry in VR by selecting characters on a virtual keyboard. We discuss the design space for assessing selection-based text entry in VR. Then, we implement six methods that span different parts of the design space and evaluate their performance and user preferences. Our results show that pointing using tracked hand-held controllers outperforms all other methods.

Project paper
Project website


Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands

We developed an apparatus that tracks the user's hands and a physical keyboard and visualizes them in VR. In a text input study, we investigated the achievable text entry speed and the effect of hand representations and transparency on typing performance, workload, and presence. With our apparatus, experienced typists benefited from seeing their hands and reached almost their outside-VR performance. We conclude that optimizing the visualization of hands in VR is important, especially for inexperienced typists.

Project paper


Impact Activation Improves Rapid Button Pressing

This paper presents empirical evidence for an activation technique called Impact Activation, in which a button is activated at its point of maximal impact. We argue that this technique is advantageous particularly in rapid, repetitive button pressing, which is common in gaming and music applications. We report on a study of rapid button pressing wherein users' timing accuracy improved significantly with the use of Impact Activation.
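
As a rough illustration of the technique, the sketch below activates a press at the peak of a measured force signal; the signal values and threshold are hypothetical, and this is not the paper's implementation.

```python
# A minimal sketch of Impact Activation: activate at the force peak of a press.
import numpy as np

def impact_activation_index(force, press_threshold=0.5):
    """Return the sample index at which the button should activate, or None."""
    pressed = force > press_threshold            # samples belonging to the press
    if not pressed.any():
        return None
    press_samples = np.flatnonzero(pressed)
    return press_samples[np.argmax(force[press_samples])]   # maximal impact point

# Example: a synthetic force trace of one rapid press (arbitrary units).
trace = np.array([0.0, 0.1, 0.6, 1.4, 2.3, 1.8, 0.9, 0.2, 0.0])
print(impact_activation_index(trace))   # -> 4, the peak of the press
```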

Project paper
Project website


AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time

In this paper, we cast the problem of UI distribution as an assignment problem and propose to solve it using combinatorial optimization. We present a mixed integer programming formulation which allows real-time applications in dynamically changing collaborative settings.
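
To give a flavour of casting UI distribution as an assignment problem, the sketch below solves a simplified one-to-one variant with the Hungarian algorithm; the element names, device names, and quality scores are hypothetical, and the paper's mixed integer programming formulation is richer than this.

```python
# A simplified assignment-problem sketch (not AdaM's MIP formulation).
import numpy as np
from scipy.optimize import linear_sum_assignment

elements = ["map", "playlist", "volume", "chat"]
devices  = ["phone", "tablet", "tv", "watch"]

# quality[i, j]: how well element i suits device j (hypothetical scores).
quality = np.array([
    [0.2, 0.7, 0.9, 0.1],
    [0.6, 0.8, 0.4, 0.2],
    [0.7, 0.5, 0.3, 0.9],
    [0.9, 0.6, 0.2, 0.4],
])

# linear_sum_assignment minimizes cost, so negate the quality matrix.
rows, cols = linear_sum_assignment(-quality)
for i, j in zip(rows, cols):
    print(f"{elements[i]} -> {devices[j]} (quality {quality[i, j]:.1f})")
```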

Project paper
Project website


Modelling Learning of New Keyboard Layouts

We designed a model that predicts how users learn to locate keys on a keyboard, with the goal of explaining variance in novices' typing performance through visual search. The model allows predicting search times and visual search patterns for completely and partially new layouts. It complements models of motor performance and learning in text entry by predicting how visual search patterns change over time.

Project paper
Project website


Control Theoretic Models of Pointing

In this paper we compare four manual control theory models on their ability to model targeting behaviour of human users using a mouse. We describe an experimental framework for acquiring pointing actions and automatically fitting the parameters of mathematical models to the empirical data. We find that the identified control models can generate a range of dynamic behaviours that capture aspects of human pointing to varying degrees. We conclude that control theory offers a promising complement to Fitts'-law-based approaches in HCI.
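
For readers unfamiliar with control-theoretic accounts of pointing, the sketch below simulates one of the simplest possible controllers, a first-order lag that moves the cursor in proportion to the remaining error; the gain and time step are assumed values, and the paper compares four richer manual control models fitted to real mouse data.

```python
# A minimal first-order lag (proportional control) model of a pointing movement.
target = 100.0                 # target position (hypothetical units)
position = 0.0
gain, dt = 4.0, 0.01           # controller gain and time step (assumed values)

trajectory = []
for _ in range(200):           # simulate 2 s of movement
    velocity = gain * (target - position)   # move proportionally to remaining error
    position += velocity * dt
    trajectory.append(position)

print(f"position after 2 s: {trajectory[-1]:.1f}")   # converges toward the target
```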

Project paper
Project website


WatchSense: On-and Above-Skin Input Sensing through a Wearable Depth Sensor

WatchSense uses a depth sensor embedded in a wearable device to expand the input space of a smartwatch to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both.

Project paper
Project website


What is Interaction?

This essay discusses what interaction is. We first argue that only a few attempts to directly define interaction exist. Nevertheless, we extract from the literature distinct and highly developed concepts that can be associated with different scopes and ways of construing the causal relationships between the human and the computer. Based on this discussion, we list desiderata for future work on interaction, emphasizing the need to improve scope and specificity, to better account for the effects and agency that computers have in interaction, and to generate strong propositions about interaction.

Project paper


Towards Perceptual Optimization of the Visual Design of Scatterplots

This paper contributes to research exploring the use of perceptual models and quality metrics to set scatterplot parameters, such as marker size and opacity, aspect ratio, color, and rendering order, automatically for enhanced visual quality. A key consideration is the construction of a cost function that captures several relevant aspects of the human visual system. We show how the cost function can be used in an optimizer to search for the optimal visual design for a user's dataset and task objective.
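
As a toy illustration of optimizing a visual design against a cost function, the sketch below grid-searches marker size and opacity against two stand-in cost terms; the terms, weights, and dataset size are hypothetical and much simpler than the perceptual models used in the paper.

```python
# A toy cost-function optimization over scatterplot design parameters.
import itertools
import numpy as np

n_points = 500                                 # size of the user's dataset (hypothetical)

def cost(marker_size, opacity):
    # Term 1: expected overdraw grows with marker area, opacity, and point count.
    overdraw = n_points * marker_size**2 * opacity / 100.0
    # Term 2: tiny or nearly transparent markers are hard to see.
    visibility_penalty = 1.0 / (marker_size * opacity)
    return overdraw + visibility_penalty

sizes = np.linspace(0.5, 5.0, 10)
opacities = np.linspace(0.1, 1.0, 10)
best = min(itertools.product(sizes, opacities), key=lambda p: cost(*p))
print(f"chosen marker size {best[0]:.2f}, opacity {best[1]:.2f}")
```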

Project paper
Project website


How We Type: Movement Strategies and Performance in Everyday Typing

This paper revisits the present understanding of typing, which originates mostly from studies of trained typists using the ten-finger touch typing system. Our goal is to characterise the majority of present-day users who are untrained and employ diverse, self-taught techniques. We report several differences between the self-taught and trained typists. The most surprising finding is that self-taught typists can achieve performance levels comparable with touch typists, even when using fewer fingers. Motion capture data exposes the predictors of high performance. We release an extensive dataset on everyday typing behavior.

Project paper
Project website


HCI Research as Problem Solving

This essay contributes a meta-scientific account of human–computer interaction (HCI) research as problem solving. We build on the philosophy of Larry Laudan, who has developed problem and solution as the foundational concepts of science. We elaborate upon Laudan’s concept of problem-solving capacity as a universal criterion for determining the progress of solutions, offering a rich, generative, and ‘discipline-free’ view of HCI, and resolving some existing debates about what HCI is or should be.

Project paper


Sketchplore

This paper studies a novel concept for integrating real-time design optimisation into a sketching tool. Sketchplorer is a multi-touch sketching tool that uses a real-time layout optimiser, automatically inferring the designer's task to search for both local improvements to the current design and global alternatives. Using predictive models of sensorimotor performance and perception, these suggestions steer the designer toward more usable and aesthetic layouts without overriding the designer or demanding extensive input.

Project paper
Project website


Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input

In this paper, we propose a real-time solution that uses a single commodity RGB-D camera to track both an object and a hand manipulating the object simultaneously. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. We conducted extensive experiments on several existing datasets and introduce a new, annotated hand-object dataset.

Project paper
Project website


Informing the Design of Novel Input Methods with Muscle Coactivation Clustering

This article presents a novel summarization of biomechanical and performance data for user interface designers. Previously, there was no simple way for designers to predict how the location, direction, velocity, precision, or amplitude of users’ movement affects performance and fatigue. We cluster muscle coactivation data from a 3D pointing task covering the whole reachable space of the arm. We identify 11 clusters of pointing movements with distinct muscular, spatio-temporal, and performance properties.

Project paper
Project website


The Emergence of Interactive Behavior: A Model of Rational Menu Search

In this article we test the hypothesis that menu search is rationally adapted to the ecological structure of interaction, cognitive and perceptual limits, and the goal of maximising the trade-off between speed and accuracy. Unlike in previous models, no assumptions are made about the strategies available to or adopted by users; rather, the menu search problem is specified as a reinforcement learning problem and behaviour emerges by finding the optimal policy.

Project paper
Project website


Investigating the Dexterity of Multi-Finger Input for Mid-Air Text Entry

This paper investigates input by free motion of fingers. Our goal is to inform the design of high-performance input methods by providing detailed analysis of the performance and anatomical characteristics of finger motion. The method is expressive, potentially fast, and usable across many settings. We apply our findings to text entry by computational optimization of multi-finger gestures in mid-air. To this end, we define a novel objective function that considers performance, anatomical factors, and learnability.

Project paper
Project website


Performance and Ergonomics of Touch Surfaces

This study is the first work to compare different types of touch surfaces for two critical factors: performance and ergonomics. Our data come from a pointing task carried out on five common touch surface types: public display, tabletop, laptop, tablet, and smartphone. Ergonomics indices were calculated from biomechanical simulations of motion capture data combined with recordings of external forces. We provide an extensive dataset for researchers.

Project paper
Project website


Fast and Robust Hand Tracking Using Detection-Guided Optimization

In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detection-guided optimization strategy that increases the robustness and speed of pose estimation. Our approach needs comparatively few computational resources, which makes it extremely fast. It also supports varying camera-to-scene arrangements, whether static or moving.

Project paper
Project website


iSkin: Flexible, Stretchable and Visually Customizable On-Body Touch Sensors for Mobile Computing

Here we introduce iSkin, a novel class of skin-worn sensors for touch input on the body. iSkin is a very thin, flexible, and stretchable sensor overlay made of biocompatible materials. Integrating capacitive and resistive touch sensing, the sensor can detect touch input with two levels of pressure. iSkin supports single or multiple touch areas of custom shape and arrangement, as well as more complex widgets. These contributions enable new types of on-body devices, all fostering direct, quick, and discreet input for mobile computing.

Project paper
Project website


Spotlights: Attention-optimized Highlights for Skim Reading

This paper contributes a novel technique that can improve user performance in skim reading. Spotlights complements the regular continuous scrolling technique at high speeds (2–20 pages/s). We present a novel design rule, informed by theories of the human visual system, for dynamically selecting objects and placing them on transparent overlays on top of the viewer. This improves the quality of visual processing at high scrolling rates.

Project paper
Project website


An Edge-Bundling Layout for Interactive Parallel Coordinates

In this paper we present an edge-bundling method that uses density-based clustering for each dimension to overcome visual clutter and overplotting in parallel-coordinates figures. Moreover, it allows rendering the clustered lines as polygons, decreasing rendering time considerably. In addition, we design interactions to support multidimensional clustering with this method. A user study shows improvements over the classic parallel coordinates plot.

Project paper


Text Entry Method Affects Password Security

This paper aims to answer a foundational question for usable security: whether text entry methods affect password generation and password security. In the study, participants generated passwords for multiple virtual accounts using different text entry methods. They were also asked to recall their passwords afterwards. We conclude that text entry methods do affect password security; however, the effect is subtler than expected.

Project paper


User Generated Free-Form Gestures for Authentication

In this study we collected a dataset with a generate-test-retest paradigm, where participants generated free-form gestures, repeated them, and were later retested for memory. We modify a recently proposed metric for the information capacity of continuous full-body movements to analyze the security of both template and free-form gestures. We conclude with strategies for generating secure and memorable free-form gestures.

Project paper
Project website


Is Motion-Capture-Based Biomechanical Simulation Valid for HCI Studies?

Motion-capture-based biomechanical simulation bears great potential for studies in HCI where the physical ergonomics of a design is important. To make the method more broadly accessible, we study its predictive validity for movements and users typical of HCI studies. We discuss the sources of error in biomechanical simulation and present results from two validation studies conducted with a state-of-the-art system.

Project paper
Project website


Modeling the Functional Area of the Thumb on Mobile Touchscreen Surfaces

We present a predictive model for the functional area of the thumb on a touchscreen surface: the area of the interface reachable by the thumb of the hand that is holding the device. We derive a quadratic formula by analyzing the kinematics of the gripping hand. The model predicts the functional area for a given surface size, hand size, and position of the index finger on the back of the device. Designers can use the model to ensure that a user interface is suitable for interaction with the thumb. The model can also be used inversely, that is, to infer the grips users assume for a given user interface layout.

Project paper
Project website


Model of Visual Search and Selection Time in Linear Menus

This paper presents a novel mathematical model for visual search and selection time in linear menus. We present novel data that replicates and extends the Nielsen menu selection paradigm and uses eye-tracking and mouse tracking to confirm model predictions. The same parametrization yielded a high fit to both menu selection time and gaze distributions. The model has the potential to improve menu designs by helping designers identify more effective solutions without conducting empirical studies.

Project paper
Project website


Modeling the Perception of User Performance

In this paper we use user study data to derive a model that predicts the amount of change required in an interface for users to reliably detect a difference. We extend methodology from psychophysics to the study of interactive performance and conduct two experiments to create a model of users' perception of their own performance. The model is useful as a heuristic for predicting whether a new interface design is sufficiently better for users to reliably appreciate the gain in performance.

Project paper


Automated Nonlinear Regression Modeling for HCI

Predictive models in HCI, such as models of user performance, are often expressed as multivariate nonlinear regressions. However, existing modeling tools in HCI, along with the common statistical packages, are limited to predefined nonlinear models or support linear models only. To assist researchers in identifying novel nonlinear models, we propose a stochastic local search method that constructs equations iteratively.
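
The sketch below gives a minimal flavour of stochastic local search over candidate regression models: propose a neighbouring model form, fit it by least squares, and keep it if it reduces the error. The model library, synthetic data, and move scheme are hypothetical and far simpler than the proposed method.

```python
# A minimal stochastic local search over candidate nonlinear regression models.
import random
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 50)
y = 2.0 * np.log(x) + 0.5 + rng.normal(scale=0.05, size=x.size)   # synthetic data

# A small library of parametric model forms the search can move between.
MODELS = {
    "linear":    lambda x, a, b: a * x + b,
    "log":       lambda x, a, b: a * np.log(x) + b,
    "power":     lambda x, a, b: a * np.power(x, b),
    "quadratic": lambda x, a, b: a * x**2 + b,
}

def fit_error(name):
    params, _ = curve_fit(MODELS[name], x, y, p0=[1.0, 1.0], maxfev=5000)
    return float(np.mean((MODELS[name](x, *params) - y) ** 2))

current = random.choice(list(MODELS))
for _ in range(20):                      # propose a random neighbour, keep if better
    candidate = random.choice(list(MODELS))
    if fit_error(candidate) < fit_error(current):
        current = candidate
print("selected model form:", current)
```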

Project paper


Real-time Hand Tracking Using a Sum of Anisotropic Gaussians Model

In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real time. The main contributions include a new generative tracking method that employs an implicit hand shape representation based on a Sum of Anisotropic Gaussians (SAG), and a pose-fitting energy that is smooth and analytically differentiable, making fast gradient-based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from the literature.

Project paper
Project website


Multi-Touch Rotation Gestures: Performance and Ergonomics

Rotations performed with the index finger and thumb involve some of the most complex motor actions among common multi-touch gestures, yet little is known about the factors affecting performance and ergonomics. This note presents results from a study in which the angle, direction, diameter, and position of rotations were systematically manipulated. The data show surprising interaction effects among the variables and help us identify whole categories of rotations that are slow and cumbersome for users.

Project paper


Improving Two-thumb Text Entry

We study the design of split keyboards for fast text entry with two thumbs on mobile touchscreen devices. The layout of KALQ was determined by first studying how users should grip a device with two hands. We then assigned letters to keys computationally, using a model of two-thumb tapping: KALQ minimizes thumb travel distance and maximizes alternation between thumbs. An error-correction algorithm was added to help address linguistic and motor errors. Users reached a rate of 37 words per minute (with a 5% error rate) after a training program.
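
A toy version of such computational letter-to-key assignment is sketched below: random-swap hill climbing that minimizes bigram-weighted travel distance on a tiny keypad. The geometry, letter set, bigram counts, and cost model are hypothetical and not the actual KALQ optimization.

```python
# A toy letter-to-key assignment by random-swap hill climbing (not KALQ itself).
import random

keys = [(c, r) for r in range(2) for c in range(4)]          # a tiny 4x2 keypad
letters = list("ETAOINSH")                                   # eight frequent letters
bigrams = {("T", "H"): 30, ("H", "E"): 28, ("A", "N"): 18,
           ("I", "N"): 17, ("E", "S"): 14, ("O", "N"): 13}   # hypothetical counts

def travel(layout):
    """Bigram-weighted Euclidean travel distance for a given letter assignment."""
    pos = {letter: keys[i] for i, letter in enumerate(layout)}
    total = 0.0
    for (a, b), count in bigrams.items():
        (x1, y1), (x2, y2) = pos[a], pos[b]
        total += count * ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

layout = letters[:]                         # start from an arbitrary assignment
for _ in range(2000):                       # swap two letters, keep if travel shrinks
    i, j = random.sample(range(len(layout)), 2)
    candidate = layout[:]
    candidate[i], candidate[j] = candidate[j], candidate[i]
    if travel(candidate) < travel(layout):
        layout = candidate
print("optimised layout:", layout, "cost:", round(travel(layout), 1))
```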

Project website


Information Capacity of Full-body Movements

We present a novel metric for information capacity of full-body movements. It accommodates HCI scenarios involving continuous movement of multiple limbs. Throughput is calculated as mutual information in repeated motor sequences. It is affected by the complexity of movements and the precision with which an actor reproduces them. Computation requires decorrelating co-dependencies of movement features and temporal alignment of sequences. HCI researchers can use the metric as an analysis tool when designing and studying user interfaces.
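
To convey the flavour of measuring shared information between repeated movements, the sketch below correlates two noisy reproductions of one scalar movement feature and converts the correlation to bits per sample under a Gaussian assumption. It is a heavy simplification of the metric, and the synthetic data are illustrative only.

```python
# A heavily simplified sketch of shared information between repeated movements.
import numpy as np

rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, 4 * np.pi, 200))            # an intended movement
repeat_1 = template + rng.normal(scale=0.2, size=200)        # two noisy reproductions
repeat_2 = template + rng.normal(scale=0.2, size=200)

# Correlation between the reproductions reflects how precisely the actor
# reproduces the movement; under a Gaussian assumption, I = -0.5 * log2(1 - r^2).
r = np.corrcoef(repeat_1, repeat_2)[0, 1]
bits_per_sample = -0.5 * np.log2(1 - r ** 2)
print(f"r = {r:.2f}, shared information of about {bits_per_sample:.2f} bits/sample")
```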

Project paper
Project website


MenuOptimizer: Interactive Optimization of Menu Systems

This paper contributes to the design of menus with the goal of interactively assisting designers with an optimizer in the loop. MenuOptimizer supports designers’ abilities to cope with uncertainty and recognize good solutions. It allows designers to delegate combinatorial problems to the optimizer, which should solve them quickly enough without disrupting the design process.

Project website


Interactive Markerless Articulated Hand Motion Tracking Using RGB and Depth Data

We present a novel method that can capture a broad range of articulated hand motions at interactive rates. Our hybrid approach combines, in a voting scheme, a discriminative part-based pose retrieval method with a generative pose estimation method based on local optimization. The part-based strategy drastically reduces the search space compared to a global pose retrieval strategy. Quantitative results show that our method achieves state-of-the-art accuracy on challenging sequences and near-real-time performance of 10 fps on a desktop computer.

Project paper
Project website