Umwelt: Accessible structured editing of multi-modal data representations
Authors: J. Zong; I. Pedraza Pineros; M. K. Chen; D. Hajas; A. Satyanarayan
Published: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 46:1–46:20). Association for Computing Machinery (2024)
DOI · Publisher’s page · PDF · Accessible HTML
Abstract
We present Umwelt, an authoring environment for interactive multimodal data representations. In contrast to prior approaches, which center the visual modality, Umwelt treats visualization, sonification, and textual description as coequal representations: they are all derived from a shared abstract data model, such that no modality is prioritized over the others. To simplify specification, Umwelt evaluates a set of heuristics to generate default multimodal representations that express a dataset’s functional relationships. To support smoothly moving between representations, Umwelt maintains a shared query predicate that is reified across all modalities — for instance, navigating the textual description also highlights the visualization and filters the sonification. In a study with five blind and low-vision expert users, we found that Umwelt’s multimodal representations afforded complementary overview and detailed perspectives on a dataset, allowing participants to fluidly shift between task- and representation-oriented ways of thinking.
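The shared query predicate described above can be sketched as a single selection that each modality reinterprets in its own terms. The sketch below is illustrative only, not Umwelt's actual API; all type and function names are hypothetical.

```typescript
// Hypothetical sketch of a shared query predicate reified across modalities.
// None of these names come from Umwelt itself.
type Datum = { year: number; value: number };

interface QueryPredicate {
  field: keyof Datum;          // which field the selection ranges over
  range: [number, number];     // inclusive bounds of the selection
}

const inRange = (d: Datum, q: QueryPredicate): boolean =>
  d[q.field] >= q.range[0] && d[q.field] <= q.range[1];

// Visualization: the predicate selects which marks to highlight.
function highlightVisualization(data: Datum[], q: QueryPredicate): number[] {
  return data
    .map((d, i) => (inRange(d, q) ? i : -1))
    .filter((i) => i >= 0);
}

// Sonification: the same predicate filters which data points are played.
function filterSonification(data: Datum[], q: QueryPredicate): Datum[] {
  return data.filter((d) => inRange(d, q));
}

// Textual description: the same predicate is summarized in words.
function describeSelection(data: Datum[], q: QueryPredicate): string {
  const selected = filterSonification(data, q);
  return `${selected.length} of ${data.length} points where ${String(
    q.field
  )} is between ${q.range[0]} and ${q.range[1]}`;
}

const data: Datum[] = [
  { year: 2020, value: 3 },
  { year: 2021, value: 7 },
  { year: 2022, value: 5 },
];
const query: QueryPredicate = { field: "year", range: [2021, 2022] };
```

Because all three functions consume the same `QueryPredicate`, updating the selection in any one modality (say, by navigating the textual description) can keep the others in sync, which is the cross-modal behavior the abstract describes.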
Media
- 📝 Umwelt: Experiencing the World Through Multiple Senses
  Much of accessibility research in data visualisation has focused on “making the visual accessible” — translating a chart into text or sound after the fact. With Umwelt, our new system presented at CHI 2024, we take a different approach: treating visualisation, sonification, and structured text as coequal ways of representing data.
- 🎧 No more tyranny of visual representations
  Tune in to learn about Umwelt, a groundbreaking authoring environment designed for blind and partially sighted individuals and their sighted collaborators to independently create and explore multimodal data representations. We discuss how it de-centers traditional visual-first approaches, treating visualization, sonification, and textual description as coequal ways to understand data, and how this enables complementary overview and detail perspectives for analysis.
- 🎬 Umwelt: Accessible Structured Editing of Multi-Modal Data Representations
  This pre-recorded conference talk introduces Umwelt, an open-source, screen-reader-accessible structured editor for multi-modal data representations that enables sighted and blind collaborators to work together through cross-modal context switching.