We are living in a data society in which data is generated at astonishing speed; individuals, companies, organizations, and governments risk being overwhelmed by this deluge. The great challenge is to extract the relevant information from vast amounts of data and communicate it effectively.
Typical scenarios include decision and policy making for urban and environmental planning or understanding relationships and dependencies in complex networks, e.g., social networks or networks from the field of bioinformatics. These scenarios are not only of interest to specialized experts; in fact, there is a trend toward including the broad public, which requires the information to be presented in a reliable, faithful, and easy-to-understand fashion.
Visual computing can play a key role in extracting and presenting the relevant information.
In visual computing research the aspect of quantification is often neglected. The SFB-TRR 161 seeks to close this gap.
The long-term goal is to strengthen the research field by establishing the paradigm of quantitative science in visual computing.
SFB-TRR 161 publication wins award at IEEE Workshop on Uncertainty Visualization
From November 2 to 7, 2025, SFB-TRR 161 researchers present their recent findings at one of the largest conferences within the visualization community.
Stanford/Elsevier ranking identifies leading researchers based on standardized citation metrics.
Nov 11th, 2025, 4 pm - 6 pm
University of Konstanz
Held by:
Maxime Cordeil, The University of Queensland
Abstract:
Spatial computing introduces new ways to perceive, interact with, and derive insights from both 3D and multidimensional, abstract data. Advances in Extended Reality (XR), including Virtual Reality (VR) and Augmented Reality (AR), along with AI, are transforming data visualisation and analysis. On one hand, beyond the 2D screen, immersive environments offer more display space, direct interaction, and a greater sense of presence with data. On the other hand, AI provides capabilities such as automated exploratory data analysis, or classification and segmentation of data that deeply transform the way we handle data. This talk explores how immersive technologies and AI merge to enhance human-in-the-loop approaches for data-related tasks.
Bio:
Dr. Maxime Cordeil is a Senior Lecturer in the School of Electrical Engineering and Computer Science (EECS) at The University of Queensland (UQ), Brisbane, Australia. He is a recognised international leader in Immersive Analytics, a field at the intersection of Extended Reality, Human-Computer Interaction, Data Visualisation, and Visual Analytics research. Before joining UQ, he was a postdoc and then a Lecturer in the Immersive Analytics research group at Monash University. He obtained his PhD from the University of Toulouse and the Higher French Institute of Aeronautics and Space, France, in 2013. In 2021 and 2022, Dr Cordeil was recognised as Field Leader in Computer Graphics, for his work on Visualisation, by The Australian Research Magazine's yearly ranking. He has authored over 60 publications in major conferences and journals in the domains of data visualisation and analytics, human-computer interaction, and XR, including IEEE VIS/TVCG, ACM CHI/UIST, ISMAR, and IEEE VR.
Location:
University of Konstanz, Room: ZT1201
The lecture will be transmitted to the University of Stuttgart, Visualization Research Center (VISUS), Room: 00.012
The lecture will be available via Zoom.
Meeting-ID (access code): 687 1599 4772
Meeting password: 500259
For participants via Zoom: The transmission will kindly be managed by Dimitar Garkov, who will be on site to monitor and manage the Zoom session, including Q&A during and after the talk. Please don't hesitate to get in touch with Dimitar with any questions or problems regarding the transmission or your online participation: dimitar.garkov@uni-konstanz.de
Nov 17th, 2025, 4 pm - 6 pm
University of Konstanz
Held by:
Mike Preuss, Leiden University
Abstract:
Many now tend to believe that LLMs, and only LLMs, are AI. Far from it; meanwhile, doubts are on the rise about whether they will lead to much more progress, not to speak of AGI. However, many big AI successes of recent years happened in Game AI (AlphaGo, AlphaStar, OpenAI Five, etc.). One of the strengths of this field is that it provides methods to create immersive scenarios in a semi-automated way, and this is probably where it can assist Immersive Analytics (IA). Thinking the other way around, IA could also be very useful for better understanding the algorithms behind Game AI via immersive experiences. And this is not limited to Game AI, of course. Coming from the AI perspective, I intend to introduce recent developments and the current situation, and attempt to develop a vision in which AI and IA are better connected and mutually support each other.
Bio:
Mike Preuss is an associate professor at the Leiden Institute of Advanced Computer Science. He is most interested in using modern AI algorithms to solve practical problems, most notably in ChemAI (e.g., for retrosynthesis), but generally in contexts where human expertise and new AI methods meet. This encompasses LLMs and image/video generation tools, and how they can be meaningfully integrated into human workflows. Partly automated Procedural Content Generation (PCG) has long been a well-known concept in game AI and profits greatly from these new developments. Recently, Mike has also been involved with quantum games (quantum versions of board games such as Checkers) and drone research.
Location:
University of Konstanz, Room ZT 702.
University of Stuttgart: The lecture will be transmitted to VISUS, room 00.012
The talks are available via Zoom.
Meeting ID: tba
Passcode: tba
Nov 24th, 2025, 4 pm - 6 pm
University of Konstanz
Held by:
Tony Huang, University of Technology Sydney
Abstract:
Many real-world scenarios require a remote expert to guide a local user in performing physical tasks, such as remote machine maintenance. To support this type of collaboration, various theories and systems have been developed. A common approach involves providing visual communication cues—such as hand gestures—within a shared visual space. These gestures can be represented in different formats, including raw or projected hands, digital gesture representations, and sketches. However, the effects of combining these gestural formats have not been fully explored or understood.
Bio:
Dr. Tony Huang is an Associate Professor at the University of Technology Sydney, Australia. He holds a PhD in Computer Science from the University of Sydney. His primary research interests lie in Human-Computer Interaction and Visualization. His work has been supported by various funding bodies, including commercial partners and Australian federal and state governments. Dr. Huang has served as conference chair, PC chair, and organization chair for international conferences. He is an Associate Editor for Behaviour and Information Technology and a Co-chair of the IEEE SMC Technical Committee on Visual Analytics and Communication. He has also guest-edited several special issues for SCI-indexed journals.
Location:
University of Konstanz, Room ZT 702.
University of Stuttgart: The lecture will be transmitted to VISUS, room 00.012
The talks are available via Zoom.
Meeting ID: tba
Passcode: tba
Dec 1st, 2025, 4 pm - 6 pm
University of Stuttgart
Held by:
Heike Leitte, RPTU University Kaiserslautern-Landau
Abstract:
tba
Bio:
tba
Location:
tba
Dec 8th, 2025, 4 pm - 6 pm
University of Konstanz
Held by:
Michael Wybrow, Monash University
Abstract:
tba
Bio:
tba
Location:
tba
Dec 15th, 2025, 4 pm - 6 pm
Location tba
Held by:
tba
Abstract:
tba
Bio:
tba
Location:
tba
Jan 12th, 2026, 4 pm - 6 pm
Location tba
Held by:
tba
Abstract:
tba
Bio:
tba
Location:
tba
Jan 19th, 2026, 4 pm - 6 pm
Location tba
Held by:
tba
Abstract:
tba
Bio:
tba
Location:
tba
Jan 26th, 2026, 4 pm - 6 pm
LMU Munich
Held by:
Joanna Bergström, University of Copenhagen
Abstract:
tba
Bio:
tba
Location:
tba
Feb 2nd, 2026, 4 pm - 6 pm
University of Stuttgart
Held by:
Michael Doggett, Lund University
Abstract:
tba
Bio:
tba
Location:
tba
The SFB-TRR 161 produces videos to give insights into the projects and the ongoing research. Please visit our YouTube Channel.
PhD students of the projects at the Universities of Stuttgart and Konstanz learn and do research together on their way to their doctoral degree in visual computing.
The scientists of the SFB-TRR 161 as well as guest authors blog about their activities in computer graphics, visualization, computer vision, augmented reality, human-computer interaction, and psychology.