We are living in a data society in which data is generated at amazing speed; individuals, companies, organizations, and governments are on the brink of being drawn into a massive deluge of data. The great challenge is to extract the relevant information from vast amounts of data and communicate it effectively.
Typical scenarios include decision and policy making for urban and environmental planning or understanding relationships and dependencies in complex networks, e.g., social networks or networks from the field of bioinformatics. These scenarios are not only of interest to specialized experts; in fact, there is a trend toward including the general public, which requires the information to be presented in a reliable, faithful, and easy-to-understand fashion.
Visual computing can play a key role in extracting and presenting the relevant information.
In visual computing research, however, the aspect of quantification is often neglected. The SFB-TRR 161 seeks to close this gap.
The long-term goal is to strengthen the research field by establishing the paradigm of quantitative science in visual computing.
Published by the IEEE Computer Society, TVCG is a top-tier journal in the field of visualization.
A fellowship of the Alexander von Humboldt Foundation provides a unique funding opportunity at the postdoctoral level
Book introduces a new computational paradigm and outlines a roadmap for further research
Jan 24th, 2025, 2.15 pm - 3.15 pm
University of Stuttgart
Held by:
Meinard Müller, International Audio Laboratories Erlangen
Abstract:
Music information retrieval (MIR) is a dynamic research field at the intersection of engineering and the humanities, connecting disciplines such as signal processing, machine learning, musicology, and digital humanities. In this presentation, we explore learning in MIR from both technological and educational perspectives, using music as a tangible application domain. Our focus is on integrating deep learning with traditional engineering approaches to develop explainable hybrid models. By collaborating with domain experts and utilizing specialized music corpora, we demonstrate how computational tools can advance musicological research while uncovering data biases and confounding factors in modern technologies. Furthermore, we emphasize how music can facilitate interactive learning in technical disciplines, promoting innovation at the crossroads of technology and education.
Bio:
Meinard Müller received the Diploma degree (1997) in mathematics and the Ph.D. degree (2001) in computer science from the University of Bonn, Germany. After his postdoctoral studies (2001-2003) in Japan and his habilitation (2003-2007) in multimedia retrieval in Bonn, he worked as a senior researcher at Saarland University and the Max-Planck Institut für Informatik (2007-2012). Since 2012, he has held a professorship for Semantic Audio Signal Processing at the International Audio Laboratories Erlangen, a joint institute of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and the Fraunhofer Institute for Integrated Circuits IIS. His research interests include music processing, music information retrieval, audio signal processing, and motion processing. He wrote a monograph titled "Information Retrieval for Music and Motion" (Springer 2007) and a textbook titled "Fundamentals of Music Processing" (Springer 2015). In 2020, he was elevated to IEEE Fellow for contributions to music signal processing.
Location:
University of Stuttgart, VISUS, Room 00.012
The talk will be available via WebEx.
Meeting-ID (access code): 2792 424 0404
Meeting password: cJcKB7XGM63
For participants via WebEx: The transmission will kindly be managed by Patrick Gralka, who will be on site and will also moderate the WebEx session, including Q&A during and after the talk. Please don't hesitate to get in touch with Patrick in case of questions or problems regarding the transmission or your online participation: Patrick.Gralka@visus.uni-stuttgart.de
Jan 27th, 2025, 4 pm - 6 pm
LMU Munich
Held by:
Felix Putze, University of Bremen
Abstract:
Mixed Reality systems can benefit from cognitive models of their users to predict their attentional state or their mental workload level. This enables these systems to provide an individualized user experience by adjusting the content and style of the virtual presentation. I will show how we can use various biosignals, such as brain activity and eye gaze, to construct such models. However, in complex situations, we need additional context information to interpret the user's signals accurately. I will show how such context information can be integrated into the models in a principled way.
Bio:
Dr. Felix Putze is a senior researcher at the Cognitive Systems Lab of the University of Bremen. His research focuses on using biosignals for cognitive modeling, with the purpose of creating adaptive user interfaces or novel experiment environments for studying human cognition. In particular, he studies complex, naturalistic scenarios in Virtual Reality or the real world. He is currently a Principal Investigator in the research unit “Lifespan AI”, where he works on modeling multimodal, long-term sequences of health data, combining sensor information with context information and semantic knowledge.
Location:
LMU Munich, third floor, Room 357
The lecture will be transmitted to the University of Stuttgart, Visualization Research Center (VISUS), Room 00.012, and to the University of Konstanz, Room ZT 702.
All doctoral researchers are asked to take part in the events of the lecture series.
Jan 30th, 2025, 10.30 am - 12 pm
University of Stuttgart
Held by:
Michael Burch, FH Graubünden, CH
Abstract:
We live in a data-driven era. Data plays a crucial role in nearly every research field. The sheer amount of data requires a combination of data science, visualization, algorithmics, and human-computer interaction concepts to find patterns and anomalies in it. In this presentation, I will discuss a few visualization projects from our research group in Switzerland, spanning academia and industry. I will describe some real-world application domains that are particularly important for industry in Switzerland, while also looking at the hardware issues we were confronted with when tackling such data-related research problems.
Bio:
Michael Burch studied computer science and mathematics at Saarland University in Saarbrücken. He received his PhD from the University of Trier in 2010 in the fields of information visualization and visual analytics. After eight years as a postdoc at the Visualization Research Center (VISUS) in Stuttgart, he moved to Eindhoven University of Technology (TU/e) as an assistant professor for visual analytics. Since October 2020, he has worked as a lecturer in visualization at the University of Applied Sciences in Chur, Switzerland. Michael Burch serves on many international program committees and has published more than 200 conference papers and journal articles in the field of visualization. His main interests are information visualization, visual analytics, eye tracking, and data science.
Location:
University of Stuttgart, VISUS, Room 00.012
The talk will be available via WebEx.
Feb 3rd, 2025, 4 pm - 6 pm
University of Stuttgart
Held by:
Jürgen Bernard, Universität Zürich
Location:
University of Stuttgart, Visualization Research Center (VISUS), Room: 00.012
The lecture will be transmitted to the University of Konstanz, Room: ZT 702
All doctoral researchers are asked to take part in the events of the lecture series.
Feb 7th, 2025, 10.30 am - 12 pm
University of Stuttgart
Held by:
Elias Elmquist, Linköping University, Sweden
Abstract:
Compared to unimodal interfaces, audiovisual data interfaces have the potential to increase information retention and provide a more comprehensive understanding of a dataset or situation, while also involving more senses to further engage the user. Data variables can be distributed across the two senses to reduce the risk of cognitive overload, or specific variables can be highlighted by mapping them to both sensory modalities. However, the success of an audiovisual data interface depends on how the two senses are integrated and how this integration is utilized in the resulting interface.
In this talk, I will present the work included in my PhD thesis, which aims to provide a human-centered integration of sonification and visualization. Sonification is the use of sound to convey information, and the human-centered approach involves creating perceptually motivated designs by utilizing how the auditory and visual systems complement each other, and by working with domain experts and users during the design process of the sonification. I will also briefly mention my overall research interests and what I intend to work on in the near future at your university.
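As a rough, hypothetical illustration of the parameter-mapping idea behind sonification (not taken from the speaker's thesis), the following Python sketch maps one data variable linearly onto pitch while the remaining variables could stay on the visual channel; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def value_to_pitch(values, f_min=220.0, f_max=880.0):
    """Hypothetical sketch: map a data variable linearly onto pitch (Hz),
    the kind of mapping a parameter-mapping sonification often starts from."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.max() - v.min() + 1e-12)
    return f_min + norm * (f_max - f_min)

def render_tones(frequencies, duration=0.2, sr=44100):
    """Concatenate short sine tones, one per data point."""
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    tones = [np.sin(2 * np.pi * f * t) for f in frequencies]
    return np.concatenate(tones)

# Example: sonify one column while a scatterplot could show the others.
audio = render_tones(value_to_pitch([3.1, 4.7, 2.2, 9.0, 5.5]))
```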
Bio:
Elias Elmquist is a fourth-year PhD candidate in the Media and Information Technology division at Linköping University in Norrköping, Sweden. He studies the integration of visualization and its auditory counterpart sonification for analytical and communicative tasks. His general research interests include multi-sensory integration and its application to immersive environments.
Location:
University of Stuttgart, VISUS, Room 00.012
The talk will be available via WebEx.
Meeting-ID (access code): 2732 469 8153
Meeting password: uFABnDmF527
For participants via WebEx: The transmission will kindly be managed by Patrick Gralka, who will be on site and will also moderate the WebEx session, including Q&A during and after the talk. Please don't hesitate to get in touch with Patrick in case of questions or problems regarding the transmission or your online participation: Patrick.Gralka@visus.uni-stuttgart.de
Feb 10th, 2025, 4 pm - 6 pm
University of Stuttgart
Held by:
Dr. Vladimir Molchanov, Münster University
Abstract:
Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. Few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We propose approaches to visually convey the transformation that was applied to the scatterplot. We further propose a generalization of our regularization scheme using sector-based transformations with the aim of increasing sample uniformity of the resulting scatterplot. We document the improvement of our approach using various uniformity measures.
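The abstract describes a density-based regularization of the scatterplot's visual domain. As a rough, hypothetical sketch of that general idea (not the speaker's actual algorithm, which uses integral images of the rasterized 2D density), the Python snippet below remaps samples axis by axis through a smoothed cumulative density, so that the marginal distributions become nearly uniform while the ordering of samples, and thus their neighborhood relations along each axis, is preserved. All names, parameters, and defaults are illustrative assumptions.

```python
import numpy as np

def regularize_scatterplot(x, y, bins=256, sigma=3, iterations=3):
    """Hypothetical sketch: spread scatterplot samples toward a more uniform
    screen-space distribution by equalizing the marginal densities.
    This only illustrates the density-based remapping idea; it is not the
    integral-image-based 2D regularization described in the talk."""
    x = np.array(x, dtype=float)
    y = np.array(y, dtype=float)
    for _ in range(iterations):
        for coords in (x, y):
            # Rasterize the 1D density of this axis.
            hist, edges = np.histogram(coords, bins=bins, density=True)
            # Smooth the density slightly to avoid jagged mappings.
            kernel = np.exp(-0.5 * (np.arange(-3 * sigma, 3 * sigma + 1) / sigma) ** 2)
            hist = np.convolve(hist, kernel / kernel.sum(), mode="same")
            # The cumulative density is a monotone map toward a uniform distribution.
            cdf = np.cumsum(hist)
            cdf /= cdf[-1]
            centers = 0.5 * (edges[:-1] + edges[1:])
            lo, hi = coords.min(), coords.max()
            # Remap each coordinate through the CDF; monotonicity preserves
            # the sample ordering along this axis.
            coords[:] = lo + (hi - lo) * np.interp(coords, centers, cdf)
    return x, y
```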
Bio:
Vladimir Molchanov received a PhD in Mathematics in 2008 from Jacobs University Bremen. He is currently a postdoctoral researcher in the group of Prof. Linsen at the University of Münster. His research focuses on multidimensional data visualization, projection methods, and interactive data analysis.
Location:
University of Stuttgart, VISUS, Room 00.012
The lecture will be available via WebEx.
Meeting-ID (access code): tbd
Meeting password: tbd
For participants via WebEx: The transmission will kindly be managed by Patrick Gralka, who will be on site and will also moderate the WebEx session, including Q&A during and after the talk. Please don't hesitate to get in touch with Patrick in case of questions or problems regarding the transmission or your online participation: Patrick.Gralka@visus.uni-stuttgart.de
Apr 3rd, 2025, 9.00 am - 4.00 pm
University of Stuttgart
In cooperation with the Visualization Research Center of the University of Stuttgart (VISUS), the SFB-TRR 161 offers the following workshop for Girls' Day 2025:
High Tech Meets Creativity: Your Girls' Day in the Virtual World
Are you interested in state-of-the-art technology? With us, you can experience what it is like to work with the latest virtual reality technologies. We offer an insight into our exciting research and a playful introduction to programming.
From cool demos of current research projects to Q&A sessions with established female researchers, a full program awaits you. We look forward to seeing you!
Suitable for grades 9 - 10
May 26th - 28th, 2025, full days
Humboldt Haus, Aichberg bei Lindau am Bodensee
Oct 6th - 7th, 2025, full days
The SFB-TRR 161 produces videos to give insights into the projects and the ongoing research. Please visit our YouTube Channel.
PhD students from the projects at the Universities of Stuttgart and Konstanz learn and do research together on their way to a doctoral degree in visual computing.
The scientists of the SFB-TRR 161 as well as guest authors blog about their activities in computer graphics, visualization, computer vision, augmented reality, human-computer interaction, and psychology.