Multimedia, Volume 1, Issue 1 (September 2025) – 4 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
29 pages, 1164 KB  
Article
Imagining Ecocentric Futures Through Media: Biocentric Evaluation Questionnaire for Degrowth and Non-Anthropocentric Societies
by Erik Geslin
Multimedia 2025, 1(1), 4; https://doi.org/10.3390/multimedia1010004 - 12 Sep 2025
Viewed by 798
Abstract
Media shape and reflect social imaginaries, influencing collective beliefs, norms, and aspirations. Video games and films frequently depict themes like urbanization, dystopian futures, and resource-driven expansion, often envisioning humanity colonizing new planets after depleting Earth's resources. Such narratives risk reinforcing exploitative attitudes toward the environment, extending them to new frontiers. Research has shown that media, especially video games, influence societal perceptions and shape future possibilities. While largely reflecting anthropocentric worldviews, these media also have the potential to promote ecocentric perspectives. In the context of biodiversity loss and planetary imbalance, media's role in fostering non-anthropocentric values is crucial. This study introduces the Non-Anthropocentric Media Evaluation Questionnaire (NAMEQ), a tool designed to help media producers assess whether their work aligns with ecocentric principles, and to support academic researchers and students in the study and analysis of media from a biocentric perspective. Applying this framework to 138 widely distributed video games and films reveals a strong dominance of anthropocentric narratives. While some works incorporate ecocentric themes, they do so inconsistently. The findings underscore the need for a more deliberate and coherent representation of biocentric values in media, advocating for a shift in cultural narratives toward perspectives that recognize and respect the intrinsic value of the non-human world.
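The questionnaire items themselves are defined in the article. As a rough illustration only, the sketch below shows how a Likert-style instrument of this kind could be tallied into a single alignment score. The item wordings, the 1–5 scale, and the `Item` and `nameq_score` names are all assumptions made for this sketch, not the published NAMEQ.

```python
from dataclasses import dataclass

# Hypothetical sketch of questionnaire scoring. The actual NAMEQ items,
# scale, and aggregation are defined in the article, not reproduced here.

@dataclass
class Item:
    text: str
    rating: int  # assumed 1 (strongly anthropocentric) .. 5 (strongly ecocentric)

def nameq_score(items: list[Item]) -> float:
    """Normalize summed ratings to 0..1; higher = more ecocentric (assumed)."""
    if not items:
        raise ValueError("no items rated")
    raw = sum(i.rating for i in items)
    lo, hi = len(items), len(items) * 5
    return (raw - lo) / (hi - lo)

sample = [
    Item("Non-human life is shown as having intrinsic value", 4),
    Item("Resource extraction is framed as unproblematic progress", 2),
]
print(f"Ecocentric alignment: {nameq_score(sample):.2f}")  # 0.50
```

A real instrument would likely weight items and group them into subscales as the authors specify; this sketch only illustrates the general shape of such a scoring pipeline.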

32 pages, 7175 KB  
Article
VisFactory: Adaptive Multimodal Digital Twin with Integrated Visual-Haptic-Auditory Analytics for Industry 4.0 Engineering Education
by Tsung-Ching Lin, Cheng-Nan Chiu, Po-Tong Wang and Li-Der Fang
Multimedia 2025, 1(1), 3; https://doi.org/10.3390/multimedia1010003 - 18 Aug 2025
Viewed by 911
Abstract
Industry 4.0 has intensified the skills gap in industrial automation education, with graduates requiring extended onboarding periods and supplementary training investments averaging USD 11,500 per engineer. This paper introduces VisFactory, a multimedia learning system that extends the cognitive theory of multimedia learning by incorporating haptic feedback as a third processing channel alongside visual and auditory modalities. The system integrates a digital twin architecture with ultra-low latency synchronization (12.3 ms) across all sensory channels, a dynamic feedback orchestration algorithm that distributes information optimally across modalities, and a tripartite student model that continuously calibrates instruction parameters. We evaluated the system through a controlled experiment with 127 engineering students randomly assigned to experimental and control groups, with assessments conducted immediately and at three-month and six-month intervals. VisFactory significantly enhanced learning outcomes across multiple dimensions: 37% reduction in time to mastery (t(125) = 11.83, p < 0.001, d = 2.11), skill acquisition increased from 28% to 85% (ηp² = 0.54), and 28% higher knowledge retention after six months. The multimodal approach demonstrated differential effectiveness across learning tasks, with haptic feedback providing the most significant benefit for procedural skills (52% error reduction) and visual–auditory integration proving most effective for conceptual understanding (49% improvement). The adaptive modality orchestration reduced cognitive load by 43% compared to unimodal interfaces. This research advances multimedia learning theory by validating tri-modal integration effectiveness and establishing quantitative benchmarks for sensory channel synchronization. The findings provide a theoretical framework and implementation guidelines for optimizing multimedia learning environments for complex skill development in technical domains.
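The abstract names a dynamic feedback orchestration algorithm without detailing it. The sketch below shows one plausible greedy scheme for routing instructional cues across visual, auditory, and haptic channels so that no single channel is overloaded; apart from the three channel names, everything here (the load model, the capacity threshold, the `orchestrate` function) is an assumption for illustration, not the paper's algorithm.

```python
# Hypothetical sketch: greedy load-balancing of cues across sensory channels.
CHANNELS = ("visual", "auditory", "haptic")

def orchestrate(cues, capacity=1.0):
    """Assign (name, load) cues to channels, heaviest first, balancing load."""
    load = {c: 0.0 for c in CHANNELS}
    plan = []
    for name, cost in sorted(cues, key=lambda c: -c[1]):  # heaviest cues first
        channel = min(CHANNELS, key=load.get)             # least-loaded channel
        if load[channel] + cost > capacity:
            plan.append((name, "deferred"))               # defer, never overload
        else:
            load[channel] += cost
            plan.append((name, channel))
    return plan

print(orchestrate([("torque warning", 0.5),
                   ("step prompt", 0.4),
                   ("alignment grid", 0.6)]))
```

The greedy, capacity-capped assignment mirrors the abstract's cognitive-load motivation: information is spread across channels rather than stacked onto one, and excess cues are deferred instead of delivered into an already saturated channel.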

20 pages, 2745 KB  
Article
Uses of Metaverse Recordings in Multimedia Information Retrieval
by Patrick Steinert, Stefan Wagenpfeil, Ingo Frommholz and Matthias L. Hemmje
Multimedia 2025, 1(1), 2; https://doi.org/10.3390/multimedia1010002 - 10 Aug 2025
Viewed by 556
Abstract
Metaverse Recordings (MVRs), screen recordings of user experiences in virtual environments, remain a largely underexplored field. This article addresses the integration of MVRs into Multimedia Information Retrieval (MMIR). Unlike conventional media, MVRs can include additional streams of structured data, such as Scene Raw Data (SRD) and Peripheral Data (PD), which capture graphical rendering states and user interactions. We explore the technical facets of recordings in the Metaverse, detailing diverse methodologies and their implications for MVR-specific Multimedia Information Retrieval. Our discussion not only highlights the unique opportunities of MVR content analysis, but also examines the challenges MVRs pose to conventional MMIR paradigms. Two key challenges are the semantic gap that existing content analysis tools face when applied to MVRs, and the high computational cost and limited recall of video-based feature extraction. We present a model for MVR structure, a prototype recording system, and an evaluation framework to assess retrieval performance. We collected a set of 111 MVRs to study and evaluate these intricacies. Our findings show that SRD and PD provide significant, low-cost contributions to retrieval accuracy and scalability, and support the case for integrating structured interaction data into future MMIR architectures.
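To make the described MVR structure concrete, the sketch below models a recording as a video file plus time-stamped SRD and PD event streams, and shows why querying the structured PD stream is cheap compared with video feature extraction. The class and field names (`Event`, `MetaverseRecording`, `find_interactions`) and the event kinds are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an MVR: a video stream accompanied by time-stamped
# Scene Raw Data (SRD) and Peripheral Data (PD) events, per the abstract.

@dataclass
class Event:
    t: float       # seconds from recording start
    kind: str      # e.g. "object_spawn" (SRD) or "controller_click" (PD)
    payload: dict

@dataclass
class MetaverseRecording:
    video_path: str
    srd: list[Event] = field(default_factory=list)
    pd: list[Event] = field(default_factory=list)

def find_interactions(mvr: MetaverseRecording, kind: str) -> list[float]:
    """Retrieve timestamps from structured PD, avoiding video analysis."""
    return [e.t for e in mvr.pd if e.kind == kind]

rec = MetaverseRecording("session01.mp4",
                         pd=[Event(3.2, "controller_click", {"target": "door"})])
print(find_interactions(rec, "controller_click"))  # [3.2]
```

The point of the sketch is the retrieval path: a linear scan over structured events replaces frame-by-frame visual feature extraction, which is consistent with the abstract's claim that SRD and PD offer low-cost contributions to retrieval accuracy and scalability.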

2 pages, 131 KB  
Editorial
Welcome to a New Open Access Journal for Multimedia
by Michele Nappi
Multimedia 2025, 1(1), 1; https://doi.org/10.3390/multimedia1010001 - 27 Feb 2025
Viewed by 2206
Abstract
It is with great enthusiasm that I announce Multimedia, the new MDPI journal dedicated to the ever-evolving and multidisciplinary field of multimedia [...]