32nd International Conference on Multimedia Modeling

January 29-31, 2026
Prague, Czech Republic

Accepted Special Sessions

In total, four special sessions have been accepted for MMM 2026.

Extended Reality and Multimedia Modelling (XR-MM)

The Extended Reality and Multimedia Modelling (XR-MM) special session at the Multimedia Modelling 2026 conference invites researchers, industry experts, and enthusiasts to explore the latest advancements in extended reality (XR) and multimedia technologies. This session will focus on the development and integration of XR solutions with multimedia analysis, retrieval, and processing methods, emphasizing seamless and interactive experiences that transform the way we live, work, and interact with our surroundings.
Details and Call for papers

Human-Centric Multimodal Behavior Analysis

This special session delves into cutting-edge methods and applications in human-centric multimodal behavior analysis. By integrating vocal intonations, facial expressions, gestures, physiological signals, human actions, body poses, and person re-identification techniques, the session aims to advance understanding in areas such as affective state recognition, real-time behavior tracking, robust learning methods, human action recognition, and person re-identification. Synthesizing insights from computer vision, speech processing, wearable sensing, and machine learning, the session seeks to drive innovation in healthcare, human-computer interaction, social robotics, and security applications.
Details and Call for papers

Modelling Robustness and Security for Multimedia AI (MARS-26)

The MARS (Modelling Robustness and Security for Multimedia AI) Special Session investigates the critical intersection of Multimedia Modelling, Artificial Intelligence, and Security. As AI/ML models become integral to multimedia analysis, processing, and generation, understanding and mitigating their vulnerabilities is paramount. This session focuses on challenges posed by adversarial attacks, data poisoning, model theft, and other security threats targeting multimedia AI systems. Through keynote speeches, paper presentations, and expert discussions, MARS-26 aims to foster research into novel modelling techniques for attack characterization, robust model development, defense strategies, and security verification in multimedia AI applications.
Details and Call for papers

Multi Object Multi Sensor Tracking (MOMST)

This session aims to bring together researchers, practitioners, and industry experts working on the challenges of tracking multiple objects using data from multiple (visual) sensors in use cases such as traffic management.
Details and Call for papers

Extended Reality and Multimedia Modelling (XR-MM)

The Extended Reality and Multimedia Modelling (XR-MM) special session at the Multimedia Modelling 2026 conference invites researchers, industry experts, and enthusiasts to explore the latest advancements in extended reality (XR) and multimedia technologies. This session will focus on the development and integration of XR solutions with multimedia analysis, retrieval, and processing methods, emphasizing seamless and interactive experiences that transform the way we live, work, and interact with our surroundings.

Topics of Interest

The XR-MM 2026 special session will address the following key topics:
  • Technologies for next-generation XR applications across all domains: Exploring cutting-edge solutions in virtual reality (VR), augmented reality (AR), and mixed reality (MR) that push the boundaries of XR experiences.
    • Human factors in design of XR interfaces and interactions.
  • Real-time 3D Modeling and Rendering: Investigating innovative techniques for creating realistic and dynamic 3D models and environments, enabling high-quality visuals and interactions in XR applications.
    • 3D modeling and asset management for immersive XR experiences
    • Synthetic Dataset Generation and Benchmarking for XR Calibration and Reconstruction
    • Learning-Based Calibration and Pose Estimation for Multi-View XR Systems with Low Overlap and Sparse Data
  • AI for XR Content Creation: Utilizing artificial intelligence and machine learning for content analysis, understanding and retrieval to facilitate XR content generation.
    • Generative AI and foundation models for XR content creation and synthetic data generation
    • Multimedia analysis and AI-based approaches for media mining and adaptation in XR experiences
  • AI-Driven Multimedia and XR Integration: Utilizing artificial intelligence and machine learning to enhance recognition and manipulation in XR environments, leading to more intuitive and engaging experiences.
    • Active object detection and real-time scene understanding from first- and third-person perspectives
    • Processing of egocentric multimedia datasets and streams for immersive XR environments
  • Multisensory & multimodal Interfaces and Wearable Technologies: Investigating the latest advancements in haptic feedback, gesture recognition, and sensory input/output devices that facilitate natural and immersive interactions with XR and multimedia content.
  • Adaptive and Interactive Content Delivery: Developing methods for optimizing and personalizing multimedia content based on user preferences, context, and device capabilities, ensuring a seamless XR experience.
  • Security, privacy aspects, and mitigations for XR multimedia content

Organizers

  • Claudio Vairo, CNR-ISTI, IT
  • Imad H. Elhajj, AUB, LB
  • Leonel Toledo, i2CAT, ES
  • Dimitrios Zarpalas, CERTH, EL
  • Georg Thallinger (georg.thallinger@joanneum.at), JOANNEUM RESEARCH, AT

Human-Centric Multimodal Behavior Analysis

This special session delves into cutting-edge methods and applications in human-centric multimodal behavior analysis. By integrating vocal intonations, facial expressions, gestures, physiological signals, human actions, body poses, and person re-identification techniques, the session aims to advance understanding in areas such as affective state recognition, real-time behavior tracking, robust learning methods, human action recognition, and person re-identification. Synthesizing insights from computer vision, speech processing, wearable sensing, and machine learning, the session seeks to drive innovation in healthcare, human-computer interaction, social robotics, and security applications.

Call for Papers

The MMM 2026 Special Session on Human-Centric Multimodal Behavior Analysis invites original research contributions that delve into the complexities of human behavior through the integration of multiple data modalities. This session aims to foster discussions on advanced methodologies and applications that enhance the understanding and interpretation of human behaviors in real-world contexts.

Topics of Interest

We welcome submissions on topics including, but not limited to:
  • Multimodal Affective State Recognition
  • Multimodal Human Action Recognition
  • Multimodal Human Pose Estimation
  • Real-Time Multimodal Behavior Tracking
  • Multimodal Fusion Techniques for Behavior Analysis
  • Privacy-Preserving Multimodal Learning
  • Cross-Modality Person Re-Identification
  • Explainable AI in Multimodal Behavior Analysis
  • Multimodal Human-Computer Interaction
  • Multimodal Behavior Analysis in Healthcare Applications
  • Security Applications of Multimodal Behavior Analysis

Submission Types

  • Full research papers
  • Survey papers

Organizers


Modelling Robustness and Security for Multimedia AI (MARS-26)

As AI and Machine Learning models revolutionize multimedia modelling, they introduce significant security vulnerabilities that must be addressed. The MARS-26 special session invites original, unpublished research focused on the robustness and security aspects of AI/ML models applied to multimedia data. We welcome submissions presenting novel modelling techniques, theoretical analyses, empirical studies, and application-focused perspectives.

Topics of Interest

  • Modelling adversarial attacks specifically targeting multimedia models (e.g., image classifiers, object detectors, speech recognition, video analysis, generative models)
  • Developing robust multimedia models and training strategies resistant to adversarial perturbations
  • Techniques and models for detecting adversarial examples in multimedia data or streams
  • Leveraging adversarial example techniques for generating counterfactual explanations of multimedia AI model decisions
  • Model watermarking and fingerprinting techniques for tracing and protecting the intellectual property of multimedia AI models
  • Security and robustness analysis of multimedia compression models or feature extraction pipelines against attacks
  • Modelling, detecting, and defending against data poisoning and backdoor attacks in multimedia datasets and models
  • Explainable AI (XAI) methods applied to understand and improve the security and robustness of multimedia models
  • Certification and verification techniques for proving robustness properties of multimedia AI systems
  • Security and privacy implications of federated learning for multimedia applications
  • Modelling fairness and bias mitigation in the context of robust multimedia AI
  • Benchmark datasets and evaluation protocols for assessing multimedia AI security and robustness

Submission Types

  • Full research papers (12 pages)
  • Short papers (6 pages)
  • Demo papers (4 pages)
All submissions must follow the MMM 2026 formatting guidelines and be submitted through the conference submission system. Accepted papers will be published in the conference proceedings.

Special Session Format

  • Keynote address (45 minutes)
  • Technical paper presentations (20 minutes each)
  • Interactive panel discussion (60 minutes)

Organizers

  • Michael A. Riegler, Simula Research Laboratory (michael@simula.no)
  • Nandor Knust, UiT
For any questions regarding the special session, please contact Michael A. Riegler at michael@simula.no.

Multi Object Multi Sensor Tracking (MOMST)

Scope and Topics

We invite original contributions to the Special Session on Multi Object Multi Sensor Tracking (MOMST), held as part of the 32nd International Conference on Multimedia Modeling (MMM 2026). This session aims to bring together researchers, practitioners, and industry experts working on the challenges of tracking multiple objects using data from multiple sensors.

Topics of interest include, but are not limited to:
  • Multi-object multi-sensor tracking (MOMST) systems and architectures
  • Sensor fusion techniques for robust tracking
  • Data association and hypothesis tracking algorithms
  • Deep learning-based tracking methods (CNNs, RNNs, Transformers)
  • Multi-camera and multi-modal tracking
  • Real-time object tracking in autonomous vehicles, robotics, or surveillance
  • Evaluation frameworks and benchmarks (e.g., MOTChallenge, AI City Challenge)
  • Datasets and simulation environments for MOMST research
  • Applications in smart cities, security, transport, and Industry 4.0

Submission Guidelines

Submissions must follow the standard MMM 2026 guidelines and be submitted via the main MMM conference submission system. Please make sure to select the "Special Session: MOMST" track during submission. For paper templates and formatting instructions, visit: https://mmm2026.cz

Submitted papers are limited to 12 content pages, including all figures, tables, and appendices, in the Springer LNCS style. Up to 2 additional pages containing only cited references are allowed.

Organizers


Topics of Interest

The traditional topics of interest include:

Multimedia Content Analysis

  • Multimedia indexing
  • Multimedia mining
  • Multimedia abstraction and summarisation
  • Multimedia annotation, tagging and recommendation
  • Multimodal analysis for retrieval applications
  • Semantic analysis of multimedia and contextual data
  • Interactive learning
  • Multimedia knowledge acquisition and construction
  • Multimedia verification
  • Multimedia fusion methods
  • Multimedia content generation

Multimedia Signal Processing and Communications

  • Media representation and algorithms
  • Multimedia sensors and interaction modes
  • Multimedia privacy, security and content protection
  • Multimedia standards and related issues
  • Multimedia databases, query processing, and scalability
  • Multimedia content delivery, transport and streaming
  • Wireless and mobile multimedia networking
  • Sensor networks (video surveillance, distributed systems)
  • Audio, image, video processing, coding and compression
  • Multi-camera and multi-view systems

Multimedia Applications and Services

  • Media content retrieval, browsing and recommendation tools
  • Extended reality (AR/VR/MR) and virtual environments
  • Real-time and interactive multimedia applications
  • Multimedia analytics applications
  • Egocentric, wearable and personal multimedia
  • Urban and satellite multimedia
  • Mobile multimedia applications
  • Question answering, multimodal conversational AI and hybrid intelligence
  • Multimedia authoring and personalization
  • Cultural, educational and social multimedia applications
  • Multimedia for e-health and medical applications

Ethical, Legal and Societal Aspects of Multimedia

  • Fairness, accountability, transparency and ethics in multimedia modeling
  • Environmental footprint of multimedia modeling
  • Large multimedia models and LLMs
  • Multimodal pretraining and representation learning
  • Reproducibility, interpretability, explainability and robustness
  • Embodied multimodal applications and tasks
  • Responsible multimedia modeling and learning
  • Legal and ethical aspects of multimodal generative AI
  • Multimedia research valorisation
  • Digital transformation

Call for Special Session Proposals

Special sessions at MMM 2026 will focus on state-of-the-art research directions within the multimedia field. Special session papers, which can be invited or submitted, will complement the regular research papers and be included in the proceedings of the conference.

It is recommended to organise special sessions in a panel format, where the authors have reduced time to present their work, followed by an extensive Q&A session with the audience. This format has worked well in recent editions of MMM. However, if a proposal envisages a large number of submissions, or if particular value may be gained from the additional discussion that a longer session format allows, we are open to other proposals for session formats.

Special Session Proposal Submission Process

Normally, each special session will include four to five papers. In addition to invited papers, if any, the conference also welcomes open submissions to special sessions. To ensure the high quality of accepted papers, all papers submitted to special sessions, including invited papers, will be peer-reviewed through a strict review process. If a special session receives many high-quality submissions, some of them may be moved to regular sessions. The review process will be aligned with the main technical program review process of MMM, as coordinated by the PC chairs. The organizers of each special session must provide 1-2 reviews per submitted or invited paper, while the regular program committee will provide another two reviews. The final decision on acceptance or rejection will be made jointly by the special session organizers and the MMM 2026 PC chairs.

Special session papers must follow the same guidelines as regular research papers with respect to restrictions on formatting, length, and double-blind reviews.

Special Session Proposal Guidelines

Please include the following information in your proposal:
  • Title of the proposed special session.
  • Name, affiliation, brief biography and contact information for each of the organizers.
  • A session abstract including significance justification and a brief overview of the state-of-the-art of the proposed special session topic.
    • Note: The session abstract should be in a format that can be copied directly to the conference web page to advertise the session.
  • List of invited papers, if applicable, or tentative contributions, including for each paper: tentative title, author list, and preferably a short abstract.
  • Tentative list of reviewers for the special session.
  • Description of the session format (e.g. panel, technical talks, poster session).
  • Plans for advertising the special session (e.g. targeted distribution lists, projects, communities).
  • If applicable, plans for exploitation of the results of the special session (e.g. summary papers).

Submission Instructions

Please submit proposals containing the information above by email to the Special Session chairs at the address below, using the subject "MMM 2026 Special Session Proposal", by the submission deadline (see the Important Dates section):

special-sessions@mmm2026.cz

Receipt of a proposal will be confirmed by the chairs. Proposals will be evaluated based on the timeliness of the topic and its relevance to MMM, the qualifications of the organizers, and the quality of and community interest in the topic and the proposed invited papers.

Important Dates

  • Submission Deadline: 14 April 2025
  • Notification of Acceptance: 25 April 2025