Planning is a powerful tool for making a series of decisions that can lead to success. Recent work in Artificial Intelligence (AI) has shown that remarkable capabilities can be achieved using Reinforcement Learning. AlphaGo is one such example, where an AI-powered agent mastered the incredibly challenging game of Go and could consistently outperform expert human players.
We have now developed similar methods through which user interfaces (UIs) can adapt themselves automatically to improve usability. The key to making good adaptations is to plan every change to the UI by fully considering its impact on usability, both the benefits and the costs to the user.
We formulate this problem of adapting an interface as a stochastic sequential decision-making problem, where the adaptive system must choose changes under uncertainty about their effects on the user.
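As a rough illustration of this framing (a minimal sketch under assumed names, not the paper's released code), the state can couple the current design with an estimate of the user's behaviour, an action is a candidate adaptation, and the reward trades predicted benefit against the cost a change imposes on the user. The names State, Adaptation, and reward below are illustrative.

from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class State:
    """One way to describe the decision problem's state: the current UI
    design plus what the system currently believes about the user."""
    design: Tuple[str, ...]        # e.g. current ordering of menu items
    user_model: Dict[str, float]   # e.g. estimated selection frequencies

@dataclass(frozen=True)
class Adaptation:
    """A candidate change to the design (or 'no-op' to leave it as is)."""
    kind: str                      # "swap", "move", "group", or "no-op"
    args: Tuple[int, ...]          # item positions the operation applies to

def reward(predicted_benefit: float, predicted_cost: float) -> float:
    """Utility of one adaptation: predicted usability gain (e.g. time saved)
    minus the predicted cost to the user (e.g. surprise, relearning)."""
    return predicted_benefit - predicted_cost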
Our work develops model-based reinforcement learning methods for solving such problems.
We showcase our approach with an application to adaptive menus that reorganise themselves by swapping, moving, and grouping items to reduce a user's average selection time.
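To make the menu case concrete, here is a small sketch of the three adaptation operations together with a crude frequency-weighted linear-scan estimate of average selection time. The helper names are assumptions for illustration, and the linear-scan model merely stands in for the richer predictive HCI models the actual system consults.

from typing import Dict, List, Sequence

def swap(menu: List[str], i: int, j: int) -> List[str]:
    """Exchange the items at positions i and j."""
    out = list(menu)
    out[i], out[j] = out[j], out[i]
    return out

def move(menu: List[str], i: int, j: int) -> List[str]:
    """Remove the item at position i and reinsert it at position j."""
    out = list(menu)
    out.insert(j, out.pop(i))
    return out

def group(menu: List[str], items: Sequence[str], start: int) -> List[str]:
    """Place a set of related items next to each other, starting at `start`."""
    rest = [m for m in menu if m not in set(items)]
    return rest[:start] + list(items) + rest[start:]

def average_selection_time(menu: Sequence[str], freq: Dict[str, float],
                           time_per_position: float = 0.1) -> float:
    """Frequency-weighted selection time under a simple linear-scan model:
    items further down the menu take proportionally longer to reach."""
    total = sum(freq.values()) or 1.0
    return sum(freq.get(item, 0.0) * (pos + 1) * time_per_position
               for pos, item in enumerate(menu)) / total

# Moving a frequently used item towards the top lowers the estimate:
menu = ["open", "close", "save", "export", "print"]
freq = {"save": 30, "open": 20, "print": 10, "close": 5, "export": 2}
print(average_selection_time(menu, freq))               # current design
print(average_selection_time(move(menu, 2, 0), freq))   # "save" moved to top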
Adapting an interface requires taking into account both the positive and negative effects that changes may have on the user. A carelessly picked adaptation may impose high costs on the user, for example due to surprise or relearning effort, or prematurely trap the process in a suboptimal design. However, effects on users are hard to predict, as they depend on factors that are latent and evolve over the course of interaction. We propose a novel approach for adaptive user interfaces that yields a conservative adaptation policy: it finds beneficial changes when they exist and avoids changes when there are none. Our model-based reinforcement learning method plans sequences of adaptations and consults predictive HCI models to estimate their effects. We present empirical and simulation results from the case of adaptive menus, showing that the method outperforms both a non-adaptive and a frequency-based policy.
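The planning step can be pictured with a simplified sketch: the planner below runs an exhaustive depth-limited search over sequences of single-item swaps, scores each step with a stand-in linear-scan model, charges a fixed switch cost for every change, and keeps the current menu whenever no sequence has positive value. The paper's method instead uses Monte Carlo Tree Search with predictive HCI models; all names here (scan_time, plan, switch_cost) are assumptions made for illustration.

import itertools
from typing import Dict, List, Tuple

Menu = Tuple[str, ...]

def scan_time(menu: Menu, freq: Dict[str, float]) -> float:
    """Stand-in predictive model: frequency-weighted linear-scan time."""
    total = sum(freq.values()) or 1.0
    return sum(freq.get(item, 0.0) * (pos + 1)
               for pos, item in enumerate(menu)) / total

def candidates(menu: Menu) -> List[Menu]:
    """The current menu ('do nothing') plus every single-swap variant."""
    out = [menu]
    for i, j in itertools.combinations(range(len(menu)), 2):
        m = list(menu)
        m[i], m[j] = m[j], m[i]
        out.append(tuple(m))
    return out

def plan(menu: Menu, freq: Dict[str, float], depth: int = 2,
         switch_cost: float = 0.5) -> Tuple[Menu, float]:
    """Depth-limited search over sequences of adaptations. Each step is valued
    as the predicted time saved minus a fixed cost charged for changing the
    design. If no sequence has positive value, the current menu is returned,
    i.e. the policy stays conservative."""
    if depth == 0:
        return menu, 0.0
    best_menu, best_value = menu, 0.0
    for nxt in candidates(menu):
        step = 0.0 if nxt == menu else \
            scan_time(menu, freq) - scan_time(nxt, freq) - switch_cost
        value = step + plan(nxt, freq, depth - 1, switch_cost)[1]
        if value > best_value:
            best_menu, best_value = nxt, value
    return best_menu, best_value

# Example: a frequency-skewed menu is reordered; a near-optimal one is left alone.
freq = {"paste": 40, "copy": 30, "cut": 10, "undo": 5}
print(plan(("cut", "copy", "paste", "undo"), freq))
print(plan(("paste", "copy", "cut", "undo"), freq))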
Watch the five-minute presentation video to learn more:
Our implementation (in Python) of the adaptive menus application is available on GitHub, along with examples and instructions.
PDF, 5.3 MB
Adapting User Interfaces with Model-based Reinforcement Learning.
In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21).
@inproceedings{todi21adaptive,
author = {Todi, Kashyap and Bailly, Gilles and Leiva, Luis A. and Oulasvirta, Antti},
title = {{Adapting User Interfaces with Model-based Reinforcement Learning}},
year = {2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3411764.3445497},
doi = {10.1145/3411764.3445497},
booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
keywords = {Adaptive User Interfaces, Reinforcement Learning, Predictive Models, Monte Carlo Tree Search},
series = {CHI '21}}
How can adaptive interfaces and #HCI benefit from #AI and reinforcement learning?
🧵 A thread on our #CHI2021 paper w. @gilles_bailly @luileito @oulasvirta
Project page: https://t.co/cIgcRGhDJB
🎥 Watch the video: https://t.co/0fl2brUy1N
@AaltoResearch @sig_chi @sigchi
— Kashyap Todi (@kashtodi) May 7, 2021
For questions and further information, please contact:
Kashyap Todi
kashyap.todi@gmail.com
Acknowledgements: This work has been funded by the Finnish Center for Artificial Intelligence (FCAI), Academy of Finland projects "Human Automata" and "BAD", Agence Nationale de la Recherche (grant number ANR-16-CE33-0023), and HumaneAI Net (H2020 ICT 48 Network of Centers of Excellence).