Much accessibility research in data visualisation has focused on “making the visual accessible” by translating a chart into text or sound after the fact. With Umwelt, our new system presented at CHI 2024, we take a different approach: treating visualisation, sonification, and structured text as coequal ways of representing data.

Rather than starting with a chart and converting it into other modalities, Umwelt builds all representations from a shared abstract data model. This means blind and low-vision users can independently author multimodal representations without needing to begin from a visual form. The structured editor generates smart defaults for each modality, which users can then iteratively refine and reconfigure. Because the representations are linked, navigating the textual description highlights the corresponding part of the visual chart and filters the sonification, keeping context consistent across senses.
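To make this concrete, here is a minimal sketch of the idea in TypeScript. It is not Umwelt's actual specification language or API; every type and function name below (FieldDef, defaultSpecs, applySelection, and so on) is hypothetical. The sketch only illustrates the architecture described above: a single shared data model drives default representations for each modality, and a shared selection is what keeps those representations linked.

```typescript
// Hypothetical sketch, not Umwelt's API. One shared data model drives
// defaults for visual, audio, and textual representations, and a shared
// selection keeps the three modalities linked.

type FieldType = "quantitative" | "nominal" | "temporal";

interface FieldDef {
  name: string;     // column in the underlying dataset
  type: FieldType;
}

interface DataModel {
  data: Record<string, unknown>[];
  fields: FieldDef[];
}

// Modality-specific specs, all derived from the same abstract model.
interface VisualSpec { mark: string; x?: string; y?: string; }
interface AudioSpec  { traversalField?: string; pitchField?: string; }
interface TextSpec   { groupBy?: string; describe: string[]; }

// Generate "smart defaults" for each modality from the shared model.
function defaultSpecs(model: DataModel): { visual: VisualSpec; audio: AudioSpec; text: TextSpec } {
  const quant = model.fields.find(f => f.type === "quantitative");
  const key   = model.fields.find(f => f.type !== "quantitative");

  return {
    // Visual: key field on x, measure on y.
    visual: { mark: quant && key ? "bar" : "point", x: key?.name, y: quant?.name },
    // Sonification: traverse the key field, map the measure to pitch.
    audio:  { traversalField: key?.name, pitchField: quant?.name },
    // Text: group by the key field and describe the measure per group.
    text:   { groupBy: key?.name, describe: quant ? [quant.name] : [] },
  };
}

// A shared selection links the modalities: moving through the textual
// structure updates the same selection that the chart highlights and
// the sonification filters on.
interface SharedSelection { field: string; value: unknown; }

function applySelection(model: DataModel, sel: SharedSelection): Record<string, unknown>[] {
  return model.data.filter(row => row[sel.field] === sel.value);
}

// Example usage with a tiny dataset.
const model: DataModel = {
  data: [
    { city: "Boston", temp: 6 },
    { city: "Boston", temp: 9 },
    { city: "Austin", temp: 21 },
  ],
  fields: [
    { name: "city", type: "nominal" },
    { name: "temp", type: "quantitative" },
  ],
};

const specs = defaultSpecs(model);
const boston = applySelection(model, { field: "city", value: "Boston" });
console.log(specs.visual, specs.audio.pitchField, boston.length); // bar spec, 'temp', 2
```

Umwelt's real specification is of course much richer than this, but the shape is the point: each modality is derived from a common abstract source rather than translated from a chart.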

In our study with blind and low-vision experts, participants valued this ability to fluidly move between modalities, using sonification for overviews, text for precise values, and visuals for mixed-ability collaboration. They also saw multimodality as a way to manage cognitive load and communicate more effectively with colleagues.

Our takeaway: accessible data analysis should not be about replacing visuals, but about empowering people to shift between modalities in ways that reinforce one another. This has two major implications: improving collaboration between blind and sighted colleagues, and enhancing UX by designing for cross-modal correspondences that strengthen the meaning carried by each individual representation.