Search Results (28)

Search Parameters:
Keywords = digital musical interface

21 pages, 1247 KB  
Article
ERLD-HC: Entropy-Regularized Latent Diffusion for Harmony-Constrained Symbolic Music Generation
by Yang Li
Entropy 2025, 27(9), 901; https://doi.org/10.3390/e27090901 - 25 Aug 2025
Viewed by 1133
Abstract
Deep learning models have recently made remarkable progress in symbolic music generation. However, existing methods often violate musical rules; in particular, their control over harmonic structure is weak. To address these limitations, this paper proposes Entropy-Regularized Latent Diffusion for Harmony-Constrained symbolic music generation (ERLD-HC), a novel framework that combines a variational autoencoder (VAE) and latent diffusion models with an entropy-regularized conditional random field (CRF). The model first encodes symbolic music into latent representations with the VAE, then introduces the entropy-based CRF module into the cross-attention layers of the UNet during the diffusion process, achieving harmonic conditioning. The proposed model balances two key limitations of symbolic music generation: purely algorithm-driven methods lack theoretical correctness, while rule-based methods lack flexibility. In particular, the CRF module learns classical harmony rules through learnable feature functions, significantly improving the harmonic quality of the generated Musical Instrument Digital Interface (MIDI) files. Experiments on the Lakh MIDI dataset show that, compared with a baseline VAE+Diffusion model, ERLD-HC reduces harmony-rule violation rates under self-generated and controlled inputs by 2.35% and 1.4%, respectively, while the generated MIDI retains a high degree of melodic naturalness. Importantly, the harmonic guidance in ERLD-HC is derived from an internal CRF inference module that enforces consistency with music-theoretic priors. While this does not yet provide direct external chord conditioning, it introduces a form of learned harmonic controllability that balances flexibility and theoretical rigor.
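The abstract does not specify the form of the entropy regularizer, so the following is only an illustrative sketch: it assumes the objective penalizes high-entropy (indecisive) chord predictions so that the harmonic labels stay sharp. The function names and the weight value are hypothetical, not from the paper.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def regularized_loss(base_loss, chord_probs, weight=0.1):
    """Hypothetical entropy-regularized objective: add a penalty
    proportional to the entropy of the predicted chord distribution,
    so uncertain harmonic predictions cost more."""
    return base_loss + weight * shannon_entropy(chord_probs)

# A peaked (confident) chord distribution incurs a smaller penalty
# than a flat (uncertain) one over the same four chord candidates.
peaked = [0.9, 0.05, 0.03, 0.02]
flat = [0.25, 0.25, 0.25, 0.25]
print(regularized_loss(1.0, peaked) < regularized_loss(1.0, flat))  # True
```

Whether the paper's CRF encourages or penalizes entropy, and at which layer the term enters, cannot be determined from the abstract alone.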
(This article belongs to the Section Multidisciplinary Applications)

18 pages, 4086 KB  
Article
Piezoelectric Energy Harvesting System to Charge Batteries with the Use of a Portable Musical Organ
by Josué Esaú Vega-Ávila, Guillermo Adolfo Anaya-Ruiz, José Joel Román-Godínez, Gabriela Guadalupe Esquivel-Barajas, Jorge Ortiz-Marín, Rogelio Gudiño-Valdez and Hilda Aguilar-Rodríguez
Energies 2025, 18(7), 1850; https://doi.org/10.3390/en18071850 - 6 Apr 2025
Viewed by 3085
Abstract
In recent years, rising energy demand has been an incentive to search for new ways to generate energy. One alternative is producing energy from daily human activities: piezoelectric devices have been used to collect energy from activities in transportation, biomedicine, and electronic devices. Harvesting the mechanical force a pianist applies during a performance is one such activity. Piezoelectric devices were installed under the keys of an electric organ, and a theoretical model was developed to estimate the amount of recoverable energy. The system was characterized using controlled forces. The volume generated by the forces was measured via a Musical Instrument Digital Interface (MIDI) using the open-source music production software LMMS (Linux MultiMedia Studio), version 1.2.2. The electric potential difference was measured as a function of the volume generated by the pianist, and the voltages generated at different frequencies of the pianist's rhythm were studied. The efficiency calculated with the mathematical model agreed with that obtained from the implemented system. The study results indicate that the batteries were recharged, yielding 53 s of organ operation.
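The abstract mentions a theoretical model for estimating recoverable energy but gives no equations. A minimal back-of-envelope sketch of such an estimate is shown below; every numeric value (press force, key travel, press count, conversion efficiency) is an illustrative assumption, not a figure from the paper.

```python
def recoverable_energy_j(force_n, key_travel_m, presses, efficiency):
    """Estimate harvested energy in joules: mechanical work per key
    press (force x key travel) times the number of presses, scaled by
    the piezoelectric conversion efficiency. All inputs here are
    illustrative assumptions."""
    return force_n * key_travel_m * presses * efficiency

# e.g. 1 N presses, 10 mm key travel, 10,000 presses, 5% efficiency
energy = recoverable_energy_j(1.0, 0.010, 10_000, 0.05)
print(f"{energy:.3f} J")  # 5.000 J
```

The real model would also need to account for the piezoelectric element's frequency response, which the abstract notes was studied against the pianist's rhythm.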
(This article belongs to the Section D2: Electrochem: Batteries, Fuel Cells, Capacitors)

19 pages, 19517 KB  
Article
Design and Implementation of the Python-Driven Digital Horn System: A Novel Approach for Electric Vehicle Sound Systems
by Hakan Tekin, Hikmet Karşıyaka and Davut Ertekin
Appl. Sci. 2024, 14(23), 10977; https://doi.org/10.3390/app142310977 - 26 Nov 2024
Viewed by 2476
Abstract
Electric and hybrid vehicles are known for significantly reducing road noise. However, concerns have emerged that their silent operation may increase risks for other road users. To mitigate this, an Acoustic Vehicle Alert System (AVAS) has been mandated by regulations such as UNECE R138 in the USA and Europe. The regulation dictates the generation of sound in electric vehicles of categories M and N1 during normal, reverse, and forward motion without the internal combustion engine engaged; compliance involves meeting specific sound requirements based on vehicle mode and condition. This paper introduces a Python-based approach to designing digital horn sounds, leveraging music theory and signal-processing techniques to replace traditional mechanical horns in electric vehicles equipped with AVAS devices. The aim is to offer a practical and efficient means of generating digital horn sounds in software. The software includes an application capable of producing and customizing horn sounds, with the HornSoundGeneratorGUI class providing a user-friendly interface built with the Tkinter library. To validate the digital horn sounds produced by the software and ensure compliance with AVAS regulations, comprehensive electrical and acoustic tests were conducted in a fully equipped quality laboratory. The results demonstrated that the sound levels achieved met the 105–107 dB/2 m standard specified by the regulation.
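The abstract describes synthesizing horn sounds with music theory and signal processing but does not detail the algorithm. The sketch below shows the general idea only: summing sine partials into a dual-tone waveform, since vehicle horns conventionally blend two notes. The frequencies, amplitude, and function names are illustrative assumptions, not values from the paper's HornSoundGeneratorGUI.

```python
import math

SAMPLE_RATE = 44_100  # samples per second (CD-quality, an assumption)

def horn_wave(freqs_hz, duration_s, amplitude=0.5):
    """Synthesize a horn-like tone by summing equal-weight sine
    partials at the given frequencies. Returns a list of float
    samples in [-amplitude, amplitude]."""
    n = int(SAMPLE_RATE * duration_s)
    return [
        amplitude / len(freqs_hz)
        * sum(math.sin(2 * math.pi * f * i / SAMPLE_RATE) for f in freqs_hz)
        for i in range(n)
    ]

# Two illustrative partials; a real horn design would tune these
# (and add envelopes/harmonics) to meet the 105-107 dB/2 m target.
samples = horn_wave([400.0, 500.0], duration_s=0.5)
print(len(samples))  # 22050
```

In the actual system, such a buffer would be written to an audio device or file; loudness compliance is a property of the amplifier and speaker chain, not of the digital waveform alone.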

14 pages, 881 KB  
Article
Show-and-Tell: An Interface for Delivering Rich Feedback upon Creative Media Artefacts
by Colin Dodds and Ahmed Kharrufa
Multimodal Technol. Interact. 2024, 8(3), 23; https://doi.org/10.3390/mti8030023 - 14 Mar 2024
Cited by 2 | Viewed by 2361
Abstract
In this paper, we explore an approach to feedback that could allow those learning creative digital media practices in remote and asynchronous environments to receive rich, multi-modal, and interactive feedback upon their creative artefacts. We propose the show-and-tell feedback interface, which couples graphical user interface changes (the show) to text-based explanations (the tell). We describe the rationale behind the design and offer a tentative set of design criteria. We report on the implementation and deployment of a prototype interface in a real-world educational setting, where it delivered either traditional text-only feedback or our proposed show-and-tell feedback across four sessions. The prototype was used to provide formative feedback on music students' coursework, resulting in a total of 103 pieces of feedback. Thematic analysis was used to analyse the data obtained through interviews and focus groups with both educators and students (i.e., feedback givers and receivers). Recipients considered show-and-tell feedback to possess greater clarity and detail than the single-modality text-only feedback they are used to receiving. We also report interesting emergent issues around control and artistic vision, and we discuss how these issues could be mitigated in future iterations of the interface.

15 pages, 3402 KB  
Article
Design and Implementation of Impromptu Mobile Social Karaoke for Digital Cultural Spaces in the New Normal Era
by Choonsung Shin and Yangmi Lim
Appl. Sci. 2023, 13(22), 12319; https://doi.org/10.3390/app132212319 - 14 Nov 2023
Cited by 1 | Viewed by 2245
Abstract
Although singing and sharing music are among the most common cultural activities, the fixed streaming and listening services of music platforms do not let users instantly share their own songs. To support impromptu singing and listening together with other users, this research proposes a mobile social karaoke system that supports group creation and music sharing via smartphones. The proposed system consists of a social music cloud that provides impromptu mobile singing and sharing services on users' devices. The social music cloud manages groups of users and supports data streaming and message sharing among their devices, letting users access karaoke services through touch- and voice-based natural interfaces designed around mobile constraints. Usability and stability testing showed that the voice-based interface was effective for controlling and using the service given the devices' mobility and availability. In addition, when karaoke services are used in small groups, music transmission and reception are possible regardless of the number of users. This study has three implications: first, it presents an important possibility for creating more active digital cultural spaces and changing mobile life by providing users with a recreation function; second, it provides a convenient touch and voice UI for mobile users; and third, it improves performance and management through distributed processing.
(This article belongs to the Special Issue Sentiment Analysis for Social Media III)

22 pages, 289 KB  
Article
Exploring the Opportunities of Haptic Technology in the Practice of Visually Impaired and Blind Sound Creatives
by Jacob Harrison, Alex Lucas, James Cunningham, Andrew P. McPherson and Franziska Schroeder
Arts 2023, 12(4), 154; https://doi.org/10.3390/arts12040154 - 13 Jul 2023
Cited by 5 | Viewed by 4098
Abstract
Visually impaired and blind (VIB) people as a community face several access barriers when using technology. For users of specialist technology, such as digital audio workstations (DAWs), these access barriers become increasingly complex—often stemming from a vision-centric approach to user interface design. Haptic technologies may present opportunities to leverage the sense of touch to address these access barriers. In this article, we describe a participant study involving interviews with twenty VIB sound creatives who work with DAWs. Through a combination of semi-structured interviews and a thematic analysis of the interview data, we identify key issues relating to haptic audio and accessibility from the perspective of VIB sound creatives. We introduce the technical and practical barriers that VIB sound creatives encounter, which haptic technology may be capable of addressing. We also discuss the social and cultural aspects contributing to VIB people’s uptake of new technology and access to the music technology industry.
(This article belongs to the Special Issue Feeling the Future—Haptic Audio)
27 pages, 8922 KB  
Article
Brass Haptics: Comparing Virtual and Physical Trumpets in Extended Realities
by Devon John Blewett and David Gerhard
Arts 2023, 12(4), 145; https://doi.org/10.3390/arts12040145 - 10 Jul 2023
Cited by 1 | Viewed by 3864
Abstract
Despite the benefits of learning an instrument, many students drop out early because it can be frustrating for the student, expensive for the caregiver, and loud for the household. Virtual Reality (VR) and Extended Reality (XR) offer the potential to address these challenges by simulating multiple instruments, through headphones, in an engaging and motivating environment. To assess the potential for commercial VR to augment musical experiences, we used standard VR implementation processes to design four virtual trumpet interfaces: camera tracking with tracked register selection (implemented two ways), camera tracking with voice activation, and a controller plus a force-feedback haptic glove. To evaluate these implementations, we created a virtual music classroom that produces audio, notes, and finger-pattern guides loaded from a selected Musical Instrument Digital Interface (MIDI) file. We analytically compared these implementations against physical trumpets (both acoustic and MIDI), considering ease of use, familiarity, playability, noise, and versatility. The physical trumpets produced the most reliable and familiar experience, though some XR benefits were noted. The camera-based methods were easy to use but lacked tactile feedback. The haptic glove provided improved tracking accuracy and haptic feedback over the camera-based methods. Each method was also considered as a proof of concept for other instruments, real or imaginary.
(This article belongs to the Special Issue Feeling the Future—Haptic Audio)

23 pages, 6366 KB  
Article
Perceptual Relevance of Haptic Feedback during Virtual Plucking, Bowing and Rubbing of Physically-Based Musical Resonators
by Marius George Onofrei, Federico Fontana and Stefania Serafin
Arts 2023, 12(4), 144; https://doi.org/10.3390/arts12040144 - 7 Jul 2023
Cited by 2 | Viewed by 2383
Abstract
The physics-based design and realization of a digital musical interface calls for modeling and implementing the contact-point interaction with the performer. Musical instruments always include a resonator that converts the input energy into sound while feeding part of it back to the performer through the same point. During plucking or bowing interactions in particular, musicians receive considerable information from the force feedback and vibrations coming from the contact points. This paper focuses on the design and realization of digital music interfaces realizing these two physical interactions along with a musically unconventional one, rubbing, rarely encountered in comparable forms over the centuries and then only on a few instruments. It aims to highlight the significance of haptic rendering in improving the quality of a musical experience, as opposed to interfaces with a passive contact point. Current challenges are posed by the specific requirements of the haptic device, as well as the computational effort needed to realize such interactions without typical digital artifacts, such as latency and model instability, occurring during the performance. Both challenges, however, appear transitory given the constant evolution of computer systems for virtual reality and the progressive popularization of haptic interfaces in the sonic interaction design community. In summary, our results speak in favor of adopting today's haptic technologies as an essential component of digital musical interfaces affording point-wise contact interactions in the personal performance space.
(This article belongs to the Special Issue Feeling the Future—Haptic Audio)

14 pages, 1518 KB  
Article
Music, Art Installations and Haptic Technology
by Alexandros Kontogeorgakopoulos
Arts 2023, 12(4), 142; https://doi.org/10.3390/arts12040142 - 7 Jul 2023
Cited by 1 | Viewed by 4277
Abstract
This paper presents some directions for the design, development and creative use of haptic systems in musical composition, performance and digital art creation. This research has been conducted from both an artistic and a technical point of view, and over the last decade its ambition, beyond the artistic outcome, has been to introduce the field of haptics to artistic communities based on an open, do-it-yourself (DIY) ethos. The five directions presented here are not in any sense exhaustive; they are based principally on a series of collaborative works and more personal open-ended explorations with the medium of haptics and, more specifically, force-feedback interaction. They are highlighted along with information about the interaction models and their application to artistic works created by the author and other colleagues. The directions are (i) Haptic Algorithms and Systems; (ii) Performers Intercoupling; (iii) Haptic Interfaces as Part of the Artistic Practice; (iv) Electromechanical Sound Generation; and (v) Media Art and Art Installations. The interdisciplinary field of musical haptics still holds a relatively minor position in sound and music computing research agendas and, more importantly, its artistic dimension is rarely discussed. The findings of this research aim to indicate and clarify potential research pathways and offer some results on the use of haptics and force-feedback systems in an artistic context.
(This article belongs to the Special Issue Feeling the Future—Haptic Audio)

25 pages, 6905 KB  
Article
Design and Assessment of Digital Musical Devices Yielding Vibrotactile Feedback
by Stefano Papetti, Hanna Järveläinen and Federico Fontana
Arts 2023, 12(4), 143; https://doi.org/10.3390/arts12040143 - 7 Jul 2023
Cited by 3 | Viewed by 3397
Abstract
Touch is of pivotal importance in determining the expressivity of musical performance on a number of musical instruments. However, most digital musical devices provide no interactive force or vibratory feedback to the performer, thus depriving the somatosensory channel of a number of cues. Is the lack of haptic feedback only an aesthetic issue, or does it remove cues essential for playing a digital instrument? If so, at which level is the interaction objectively impoverished? What are the effects on musical performance? In this survey article we illustrate our recent research on the use of vibrotactile feedback in three digital instrument interfaces, using tools that we developed over several years and made available to the community in open-source form. These interfaces span a wide range of familiarity and gestural opportunities, enabling us to explore the impact of haptic feedback on different types of digital instruments. We conducted experiments with professional musicians to assess the impact of vibratory cues on the perceived quality of the instrument, the playing experience, and musical performance. Particular attention was paid to scientific rigor and the repeatability of the results, so as to serve as a reference for researchers and practitioners in the musical haptics community. Our results suggest a significant role of vibrotactile feedback in shaping the perception of digital musical instruments, although the effects on musical performance varied depending on the interfaces tested.
(This article belongs to the Special Issue Feeling the Future—Haptic Audio)

17 pages, 5869 KB  
Article
Enhanced Evaluation Method of Musical Instrument Digital Interface Data based on Random Masking and Seq2Seq Model
by Zhe Jiang, Shuyu Li and Yunsick Sung
Mathematics 2022, 10(15), 2747; https://doi.org/10.3390/math10152747 - 3 Aug 2022
Cited by 4 | Viewed by 2747
Abstract
With developments in artificial intelligence (AI), novel applications can use deep learning to compose music in the Musical Instrument Digital Interface (MIDI) format even without any knowledge of music theory. The composed music is generally evaluated by a human-based Turing test, which is subjective and provides no quantitative criteria. Objective evaluation approaches with many general descriptive parameters have therefore been applied to MIDI data, considering MIDI features such as pitch distances, chord rates, tone spans, and drum patterns. However, setting several general descriptive parameters manually on large datasets is difficult and has considerable generalization limitations. In this paper, an enhanced evaluation method based on random masking and a sequence-to-sequence (Seq2Seq) model is proposed to evaluate MIDI data. An experiment was conducted on real MIDI data, generated MIDI data, and random MIDI data. The bilingual evaluation understudy (BLEU), a common MIDI data evaluation approach, is used to evaluate the performance of the proposed method in a comparative study. With the proposed method, the ratio of the average evaluation score of the generated MIDI data to that of the real MIDI data was 31%, whereas with BLEU it was 79%. The lower the ratio, the greater the difference between the real and generated MIDI data. This implies that the proposed method quantified the gap while accurately distinguishing real from generated MIDI data.
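The 31% vs. 79% comparison above is a ratio of average evaluation scores. A minimal sketch of that computation follows; the per-piece score values are hypothetical, chosen only to reproduce the reported 31% ratio.

```python
def evaluation_ratio(generated_scores, real_scores):
    """Ratio of the average evaluation score of generated MIDI data
    to that of real MIDI data. A lower ratio means the metric
    separates generated data from real data more sharply."""
    average = lambda xs: sum(xs) / len(xs)
    return average(generated_scores) / average(real_scores)

# Hypothetical per-piece scores under the proposed masked-Seq2Seq
# metric: generated pieces average 0.31, real pieces average 1.00.
ratio = evaluation_ratio([0.30, 0.32], [0.98, 1.02])
print(round(ratio, 2))  # 0.31
```

Under BLEU the same computation would yield roughly 0.79, i.e. generated and real data look far more similar to that metric.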
(This article belongs to the Special Issue Application of Mathematical Methods in Artificial Intelligence)

39 pages, 26383 KB  
Article
Gesture, Music and Computer: The Centro di Sonologia Computazionale at Padova University, a 50-Year History
by Sergio Canazza, Giovanni De Poli and Alvise Vidolin
Sensors 2022, 22(9), 3465; https://doi.org/10.3390/s22093465 - 2 May 2022
Cited by 18 | Viewed by 5711
Abstract
With the advent of digital technologies, the computer has become a generalized tool for music production. Music can be seen as a creative form of human–human communication via a computer, and research on human–computer and computer–human interfaces is therefore very important. This paper, for the Sensors Special Issue on 800 Years of Research at Padova University, reviews research in the field of music technologies at Padova University by the Centro di Sonologia Computazionale (CSC), focusing on scientific, technological and musical aspects of interaction between musician and computer and between computer and audience. We discuss input devices for detecting information from gestures or audio signals, and rendering systems for audience and user engagement. Moreover, we discuss a multilevel conceptual framework that allows multimodal expressive content processing and coordination, which is important in art and music. Several paradigmatic musical works that opened new lines of both musical and scientific research are then presented in detail. The preservation of this heritage presents problems very different from those posed by traditional artworks, and CSC is actively engaged in proposing new paradigms for the preservation of digital art.
(This article belongs to the Special Issue 800 Years of Research at Padova University)

18 pages, 4829 KB  
Article
Tangible and Personalized DS Application Approach in Cultural Heritage: The CHATS Project
by Giorgos Trichopoulos, John Aliprantis, Markos Konstantakis, Konstantinos Michalakis and George Caridakis
Computers 2022, 11(2), 19; https://doi.org/10.3390/computers11020019 - 31 Jan 2022
Cited by 13 | Viewed by 4587
Abstract
Storytelling is widely used to project cultural elements and engage people emotionally. Digital storytelling enhances the process by integrating images, music, narrative, and voice with traditional storytelling methods. Newer visualization technologies such as Augmented Reality allow more vivid representations and further influence the way museums present their narratives. Cultural institutions aim to integrate such technologies in order to provide a more engaging experience that is also tailored to the user through personalization and context-awareness. This paper presents CHATS, a system for personalized digital storytelling in cultural heritage sites. Storytelling is based on a tangible interface, which adds a gamification aspect and improves interactivity for people with visual impairment. AR and smart-glasses technologies are used to enhance visitors’ experience. To test CHATS, a case study was implemented and evaluated.

18 pages, 1193 KB  
Article
Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers
by Tim Murray-Browne and Panagiotis Tigas
Appl. Sci. 2021, 11(18), 8531; https://doi.org/10.3390/app11188531 - 14 Sep 2021
Cited by 7 | Viewed by 4012
Abstract
Most Human–Computer Interfaces are built on the paradigm of manipulating abstract representations. This can be limiting when computers are used in artistic performance or as mediators of social connection, where we rely on qualities of embodied thinking: intuition, context, resonance, ambiguity and fluidity. We explore an alternative approach to designing interaction that we call the emergent interface: interaction leveraging unsupervised machine learning to replace designed abstractions with contextually derived emergent representations. The approach offers opportunities to create interfaces bespoke to a single individual, to continually evolve and adapt the interface in line with that individual’s needs and affordances, and to bridge more deeply with the complex and imprecise interaction that defines much of our non-digital communication. We explore this approach through artistic research rooted in music, dance and AI with the partially emergent system Sonified Body. The system maps the moving body into sound using an emergent representation of the body derived from a corpus of improvised movement from the first author. We explore this system in a residency with three dancers. We reflect on the broader implications and challenges of this alternative way of thinking about interaction, and how far it may help users avoid being limited by the assumptions of a system’s designer.
(This article belongs to the Special Issue Advances in Computer Music)

41 pages, 24478 KB  
Review
Review: Development and Technical Design of Tangible User Interfaces in Wide-Field Areas of Application
by Alice Krestanova, Martin Cerny and Martin Augustynek
Sensors 2021, 21(13), 4258; https://doi.org/10.3390/s21134258 - 22 Jun 2021
Cited by 24 | Viewed by 13870
Abstract
A tangible user interface (TUI) connects physical objects and digital interfaces. It is more interactive and engaging for users than a classic graphical user interface. This article presents a descriptive overview of TUIs' real-world applications, sorted into ten main application areas: teaching of traditional subjects, medicine and psychology, programming, database development, music and arts, modeling of 3D objects, modeling in architecture, literature and storytelling, adjustable TUI solutions, and commercial TUI smart toys. The paper focuses on TUIs' technical solutions and on descriptions of the technical constructions that influence the applicability of TUIs in the real world. Based on the review, the technical concepts were divided into two main approaches: a sensory technical concept and technology based on computer vision algorithms. The sensory concept is examined in terms of wireless technology, sensors, and feedback possibilities in TUI applications. The image-processing approach is examined in terms of marker-based and markerless object recognition, the use of cameras, and computer vision platforms for TUI applications.
(This article belongs to the Special Issue Application of Human-Computer Interaction in Life)
