We live in a data society in which data is generated at tremendous speed; individuals, companies, organizations, and governments risk being overwhelmed by a massive deluge of data. The great challenge is to extract the relevant information from these vast amounts of data and to communicate it effectively.
Typical scenarios include decision and policy making for urban and environmental planning, or understanding relationships and dependencies in complex networks, e.g., social networks or networks from the field of bioinformatics. These scenarios are not only of interest to specialized experts; in fact, there is a trend toward including the general public, which requires the information to be presented in a reliable, faithful, and easy-to-understand fashion.
Visual computing can play a key role in extracting and presenting the relevant information.
In visual computing research, however, the aspect of quantification is often neglected. The SFB-TRR 161 seeks to close this gap.
The long-term goal is to strengthen the research field by establishing the paradigm of quantitative science in visual computing.
Researchers from projects A08 and C06 win award at conference on physical, cognitive, and perceptual augmentation of humans through digital technologies.
Opportunity to research future interaction paradigms for Mixed Reality in Visual Computing.
New project focuses on methods that detect subtle quality differences in highly compressed images.
May 22nd, 2026, 11 am - 12 pm
University of Stuttgart
Held by:
Michaël Aupetit, QCRI, Qatar
Abstract:
Alignment is often framed as making machines conform to human values. But the challenge is not one-sided: humans and machines need mutual alignment, or at least mutual understanding, to work well together. In practice, this means building systems that generate language, images, or actions that align with human values and judgments, while also adapting their representations in response to human actions. Those values are rooted in human sense-making, including visual perception. In this talk, I argue that this view connects today's alignment problem to an older challenge in dimensionality reduction: the distortions that arise when machine-generated representations fail to preserve structures humans find meaningful.
From this perspective, the relationship between clusters and classes is more than a familiar problem in visual analytics; it is a useful test case for alignment itself. For domain experts, the task is not simply to translate machine-generated groupings into human categories, but to build a shared framework in which both representations can inform one another. If this remains difficult even in the constrained setting of data exploration, it may help explain the broader challenge of aligning AI systems with human concepts, values, and goals. By revisiting this problem through the lenses of dimensionality reduction and visual analytics, the talk offers both a concrete case and a broader reflection on what alignment should mean.
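The distortions discussed in the abstract can be made concrete with a simple quantity: how many of a point's nearest neighbors in the high-dimensional data remain its nearest neighbors in the low-dimensional embedding. As an illustration only (this sketch is not from the talk, and the function names are hypothetical), a minimal pure-Python version of such a neighborhood-preservation score:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def knn_indices(points, i, k):
    """Indices of the k nearest neighbors of point i (excluding i itself)."""
    order = sorted(range(len(points)), key=lambda j: dist(points[i], points[j]))
    return set(order[1:k + 1])  # order[0] is i itself (distance 0)

def neighborhood_preservation(high_d, low_d, k=2):
    """Mean fraction of k-nearest neighbors shared between the
    high-dimensional data and its low-dimensional embedding.
    1.0 = neighborhoods perfectly preserved, lower = more distorted."""
    n = len(high_d)
    overlap = 0.0
    for i in range(n):
        overlap += len(knn_indices(high_d, i, k) & knn_indices(low_d, i, k)) / k
    return overlap / n

# Toy example: three nearby 3-D points plus one outlier, embedded in 2-D
# in a way that keeps the same neighborhood structure.
high = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (5.0, 5.0, 5.0)]
faithful = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(neighborhood_preservation(high, faithful, k=2))  # → 1.0
```

Refined versions of this idea (e.g., trustworthiness and continuity measures) are what visual analytics tools use to flag embedding regions where machine-generated layouts and human-perceived structure diverge.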
Bio:
Dr. Michaël Aupetit is a Senior Scientist at the Qatar Center for Artificial Intelligence (QCAI), part of QCRI, where he works on AI foundation models and their applications in health and materials science. Prior to joining QCRI in 2014, he spent a decade at CEA LIST in Paris-Saclay as a research scientist and senior expert in data mining and visual analytics, developing decision-support systems for complex industrial challenges. He holds an HDR in Computer Science from Paris-Sud University and a PhD in Industrial Engineering from Grenoble INP.
Location:
University of Stuttgart, Visualization Research Center (VISUS), Room: 00.012
The talk will be available via WebEx.
Meeting-ID (access code): 2780 607 5384
Meeting password: B7i9dYZGMZ2
For participants via WebEx: the transmission will be managed by Patrick Gralka, who will be on site to monitor the WebEx session, including Q&A during and after the talk. Please don't hesitate to contact him with any questions or problems regarding the transmission or your online participation: Patrick.Gralka@visus.uni-stuttgart.de
Jun 1st, 2026, 10:30 - 11:30 am
ETVIS 2026
Held by:
Barbara Tversky, Stanford University and Columbia University
Abstract:
As many have noted, thought does not happen just between the ears. We use the space around us and our actions in it to think, communicate, and create. Artists, architects, mathematicians, in fact, all of us, put thought on a page because the mind cannot hold all our thoughts. When artists and architects sketch, words get in the way. When they reexamine their sketches, they get new ideas, ideas they had not intended. Sketches are messy, meant to be ambiguous and to stimulate new interpretations. Visualizations like maps and diagrams also use lines to convey information unambiguously. Hands draw lines, the eye is biased to see lines, lines link ideas in the mind and neurons in the brain. The thinking is in the loop of eye, hand, and mind, presenting challenges to GenAI.
More information about ETVIS 2026
Jun 16th - 17th, 2026, full days
The SFB-TRR 161 produces videos to give insights into the projects and the ongoing research. Please visit our YouTube Channel.
PhD students of the projects at the Universities of Stuttgart and Konstanz learn and do research together on their way to their doctoral degree in visual computing.
The scientists of the SFB-TRR 161 as well as guest authors blog about their activities in computer graphics, visualization, computer vision, augmented reality, human-computer interaction, and psychology.