Editorial

Future Human–Technology Interactions and Their Intelligent Applications

by
Diego Resende Faria
School of Science, Loughborough University, Epinal Way, Loughborough LE11 3TU, UK
Appl. Sci. 2026, 16(7), 3224; https://doi.org/10.3390/app16073224
Submission received: 21 January 2026 / Accepted: 19 March 2026 / Published: 26 March 2026

1. Introduction

Artificial intelligence is fundamentally transforming how humans interact with digital technologies. Early human–computer interaction systems relied primarily on explicit commands and graphical user interfaces. However, advances in machine learning, multimodal sensing, adaptive computing, ubiquitous systems, and mobile intelligence have enabled new forms of interaction in which technologies can interpret human behavior, emotional signals, and contextual information [1,2,3,4,5,6,7,8,9].
Human-centered artificial intelligence (HCAI) has emerged as a key paradigm guiding this transformation. Unlike traditional AI approaches, which prioritize computational performance alone, HCAI emphasizes transparency, usability, reliability, and human oversight. The goal of human-centered AI is not simply to automate tasks but to design intelligent systems that augment human capabilities while maintaining trust and accountability [1,10,11,12,13]. In this sense, contemporary intelligent systems are increasingly evaluated not only according to predictive power or computational speed but also according to interpretability, user trust, safety, fairness, and social acceptability.
This editorial is not a mere summary of the articles included in this Special Issue. Instead, it situates these contributions within the broader research landscape of human-centered artificial intelligence, affective computing, assistive technologies, and AI-enabled well-being systems. By connecting the contributions of this Special Issue with prior research across multiple disciplines, this article provides a broader perspective on emerging research directions and identifies open challenges that continue to shape the future of intelligent human–technology interaction.
Human-centered AI research has emphasized the importance of designing systems that support human decision-making rather than replacing it. As intelligent technologies become embedded in everyday life, ensuring that these systems remain interpretable, reliable, and ethically aligned with human values has become a central concern [1,10,11,12,13]. Parallel developments in affective computing have expanded the ability of intelligent systems to detect and interpret human emotions. Humans communicate emotional information through multiple modalities, including speech, facial expressions, body language, and textual communication. Multimodal emotion recognition systems are designed to capture these signals and integrate them into computational models capable of interpreting emotional states [14,15,16,17,18,19,20,21,22].
The growth of multimodal affective computing reflects broader developments in machine learning architectures capable of processing heterogeneous data sources. Deep neural networks, transformer models, and probabilistic fusion approaches have enabled new capabilities in emotion recognition and human behavior modeling [2,3,4,9,18,23,24]. Beyond emotional understanding, intelligent technologies are increasingly applied to assistive systems designed to improve accessibility for individuals with disabilities. Advances in indoor positioning, wearable sensors, and speech-based interfaces have given rise to new assistive technologies that improve independence and participation in everyday activities [25,26,27,28,29].
Another emerging research direction involves the use of AI to support mental health care and digital well-being. Machine learning models can analyze behavioral patterns derived from smartphone usage, wearable sensors, and online interactions to detect early indicators of mental health conditions such as anxiety and depression. AI-driven conversational agents and behavioral intervention technologies have been proposed as scalable tools for mental health support, although researchers consistently stress that such systems should complement, rather than replace, professional care [30,31,32,33,34,35,36,37,38].
Despite these advances, several challenges remain. Intelligent interaction systems often rely on limited datasets that may not generalize well across cultural contexts or real-world environments. Ethical concerns related to privacy, transparency, and algorithmic bias also remain important considerations in the deployment of intelligent systems [10,11,12,13,32]. Within this broader research context, the Special Issue “Future Human–Technology Interactions and Their Intelligent Applications” aims to highlight recent advances demonstrating how intelligent technologies can enhance human interaction, accessibility, and well-being.
Summarizing these interconnected themes, Figure 1 presents a conceptual framework of human-centered multimodal interaction that underpins this Special Issue. As illustrated, future human–technology interaction systems can be understood as a set of linked layers, beginning with multimodal sensing of human signals such as pose, audio, and text, followed by intelligent interpretation through affective computing and behavioral analytics, and shaped by human-centered design principles, including ethics, transparency, and trust. These layers ultimately support applied outcomes in domains such as accessibility, healthcare, and education. The figure therefore synthesizes the broader message of this editorial: the next generation of intelligent systems will be defined not only by algorithmic capability but also by the extent to which multimodal AI is integrated with human-centered design to produce meaningful, trustworthy, and socially beneficial applications.

2. Human-Centered Artificial Intelligence

Human-centered artificial intelligence represents a paradigm shift in the design and evaluation of intelligent systems. Traditional AI research often prioritized algorithmic performance metrics such as accuracy or computational efficiency. However, as AI technologies become increasingly embedded in everyday environments, researchers have emphasized the importance of designing systems that remain interpretable, trustworthy, and aligned with human values [1,10,11,12,13].
Human-centered AI frameworks highlight the importance of transparency and human control in algorithmic systems. Intelligent technologies should be designed to augment human decision-making rather than replace it. This perspective has become particularly important in domains such as healthcare, finance, and education, where algorithmic decisions may have significant social consequences [1,11,35,36,37]. Explainable AI research has further emphasized the importance of designing models that allow users to understand how algorithmic decisions are made. Interpretable models improve user trust and enable human oversight of automated decision-making processes [10,11,12,13].
Human-centered AI research therefore sits at the intersection of computer science, human–computer interaction, psychology, design, and ethics. Designing systems capable of interacting naturally with humans requires not only advances in machine learning but also insights into human cognition, communication, and social behavior. Thus, in practice, system design must address issues such as uncertainty communication, trust calibration, cognitive workload, and the role of human judgement in mixed-initiative systems [1,6,7,11,12].
The increasing maturity of HCAI also changes how editorial overviews in this area should be written. A narrow emphasis on technical novelty alone is no longer sufficient. Instead, it is necessary to examine whether interaction systems are intelligible, socially appropriate, and deployable in realistic settings. This broader framing is especially relevant to the papers in this Special Issue, which collectively address body modeling, behavioral prediction, accessibility, emotion-aware analysis, navigation assistance, and AI-supported mental health.

3. Multimodal Affective Computing

Affective computing is a research field focused on enabling machines to detect, interpret, and respond to human emotions. Early work in affective computing explored emotion recognition using facial expressions and speech signals, establishing the conceptual basis for machine understanding of affect [16,19,39]. However, emotions are complex phenomena that are expressed through multiple modalities simultaneously [40]. Speech prosody, lexical content, facial expressions, gesture, posture, and contextual cues all contribute to how humans communicate and interpret affective states.
Multimodal emotion recognition systems are designed to integrate information from speech, facial expressions, textual communication, and physiological signals. Combining these signals can improve the accuracy and robustness of emotion detection systems because one modality may compensate for ambiguity or noise in another [14,15,17,18,19,20,21,22]. Recent advances in deep learning have significantly improved multimodal emotion recognition performance. Neural network architectures capable of processing heterogeneous data streams allow systems to capture complex relationships between modalities, while transformer-based methods have improved sequence modeling and cross-modal interaction learning [2,3,4,9,18].
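The late-fusion idea described above can be made concrete with a minimal sketch. The code below combines per-modality class posteriors from a speech emotion recognizer and a text sentiment classifier using a weighted average; the three-class label set, the example probabilities, and the equal weights are illustrative assumptions, not details of any specific system discussed in this issue.

```python
import numpy as np

# Hypothetical per-modality posteriors over three affective classes
# (negative, neutral, positive). Values are illustrative only.
speech_probs = np.array([0.60, 0.25, 0.15])  # speech emotion recognizer
text_probs = np.array([0.20, 0.30, 0.50])    # text sentiment classifier

def late_fusion(p_speech, p_text, w_speech=0.5, w_text=0.5):
    """Weighted late fusion of modality-level class posteriors."""
    fused = w_speech * p_speech + w_text * p_text
    return fused / fused.sum()  # renormalize to a valid distribution

fused = late_fusion(speech_probs, text_probs)
predicted = int(np.argmax(fused))  # index of the fused winning class
```

In practice, the fusion weights can be tuned (e.g., by validation performance or estimated modality reliability), which is one simple way that one modality can compensate for ambiguity or noise in another.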
At the same time, multimodal affective computing still faces several challenges. Data annotation remains difficult because emotional states are subjective and context-dependent. Models trained on limited datasets may struggle to generalize across cultures, languages, or real-world conditions. Many systems also continue to rely on benchmark datasets recorded in relatively controlled environments, which may not reflect noisy, spontaneous interaction settings [14,15,18,19,20]. These limitations are important when evaluating the contributions of the papers in this Special Issue: the most relevant question is not only whether a model achieves high performance on a dataset but whether it advances the field toward richer, more ecologically valid forms of affect-aware interaction.
This context is particularly relevant for the contribution on multimodal affective communication analysis in this Special Issue [39]. This paper brings together speech emotion recognition and text sentiment analysis, thereby aligning with a core direction in the field: moving beyond unimodal emotion inference toward layered, probabilistic, and semantically informed affect interpretation. Relative to the broader literature, its contribution is not simply to report another classifier comparison but to frame multimodal fusion as a pathway toward richer communication analysis in applied interaction settings.

4. Assistive Technologies and Accessibility

Accessibility remains one of the most important application areas for intelligent human–technology interaction systems. Advances in sensor technologies and machine learning have given rise to assistive systems that support individuals with visual impairments, mobility limitations, or cognitive disabilities [25,26,27,28,29]. In these contexts, the value of intelligent technologies lies not only in technical sophistication but in their ability to reduce barriers to participation, autonomy, and safety.
Indoor navigation systems for visually impaired users represent one example of this progress. Techniques such as ultra-wideband (UWB) localization, LiDAR sensing, and multi-sensor fusion have been explored to improve navigation accuracy in indoor environments where conventional GPS is ineffective [25,26,27]. The assistive navigation literature has repeatedly shown that no single sensing technology fully solves the problem. Instead, robust systems often depend on combining localization, obstacle detection, and interaction design in ways that reduce uncertainty and cognitive load for users [25,26,27].
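To illustrate the kind of localization underlying UWB-based navigation, the sketch below computes a 2D position fix from anchor-to-tag range measurements via linearized least squares. The anchor layout, the true position, and the noise-free ranges are hypothetical values chosen for illustration; real deployments must additionally handle ranging noise, non-line-of-sight errors, and anchor geometry.

```python
import numpy as np

# Hypothetical room setup: four UWB anchors (meters) and a tag position.
anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0], [8.0, 6.0]])
true_pos = np.array([3.0, 2.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free ranges

def trilaterate(anchors, ranges):
    """Linearized least-squares position fix from anchor distances.

    Subtracting the first range equation from the others cancels the
    quadratic terms, leaving a linear system A @ pos = b.
    """
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

estimate = trilaterate(anchors, ranges)
```

With noisy ranges, the same least-squares formulation degrades gracefully, which is one reason hybrid systems typically add filtering (e.g., Kalman-style smoothing) and complementary sensors on top of raw UWB fixes.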
Educational accessibility is another critical area. Technologies that enable visually impaired students to participate independently in examinations can significantly improve educational inclusion. From a human-centered perspective, accessibility is not merely a matter of compliance or technical accommodation but of preserving dignity, privacy, fairness, and independence in high-stakes tasks [28,29]. The ExamVoice paper in this Special Issue is particularly valuable in this regard [41] because it addresses a concrete educational workflow rather than accessibility at an abstract level. Its contribution lies in showing how low-cost, pragmatic, and secure design decisions can produce meaningful gains in independent assessment.
The paper on UWB-based indoor positioning for visually impaired users similarly contributes to assistive AI by evaluating real localization trade-offs under realistic conditions [42]. Rather than presenting UWB as a universal solution, it helps clarify where such technology is useful, where it falls short, and why hybrid or multi-sensor systems remain important. This kind of empirical grounding is especially useful in a field where assistive solutions are sometimes proposed without enough attention to deployment realities.

5. AI for Mental Health and Digital Well-Being

Artificial intelligence is increasingly being explored as a tool for supporting mental health care and promoting digital well-being. Machine learning models can analyze behavioral patterns derived from smartphone usage, wearable sensors, and online interactions, while digital therapeutic systems such as AI chatbots and mobile applications have shown potential for providing scalable mental health interventions [30,31,32,33,34,35,36,37,38]. At the same time, researchers consistently emphasize that these systems should complement rather than replace human clinicians [32,33,34,35,36].
The relevance of this research to human–technology interaction is twofold. First, it broadens interaction research beyond efficiency and usability, allowing it to encompass questions of well-being, psychological support, and long-term behavioral outcomes. Second, it foregrounds ethical concerns: systems operating in mental health contexts must address privacy, safety, bias, and the risk of overreliance on automated tools [11,12,13,32,35,36]. These issues are central to current debates about trustworthy AI in care settings.
The paper reviewing tools and technologies for anxiety and depression management using AI contributes directly to this broader conversation [43]. Its strength lies in synthesizing multiple categories of tools—including chatbots, apps, wearables, virtual environments, and language-model-related developments—within a single landscape view. Relative to the wider literature, this paper is useful because it emphasizes complementarity with clinical practice rather than technological substitution. In this sense, it is aligned with the wider HCAI principle that intelligent systems should extend human support capacity without displacing human judgement and care.
The paper on phubbing [44] also connects to digital well-being, though from a different angle. Whereas much of the digital mental health literature focuses on intervention or symptom support, this contribution addresses technology-related behavioral risk by modeling internet usage trends associated with problematic interaction patterns. Its novelty lies less in introducing a new psychological theory than in bringing predictive analytics into a domain often dominated by cross-sectional self-report studies. This is a useful methodological shift, even if future work will need to strengthen the link between macro-level usage forecasts and individual-level phubbing behavior.

6. Overview of the Special Issue Contributions

The six papers collected in this Special Issue span diverse topics, but they are connected by a common concern: how intelligent technologies can better interpret, support, and adapt to human needs in realistic contexts. Table 1 provides a scannable summary of these contributions, highlighting their primary domains, technical approaches, and key impacts on human–technology interaction.
Jang et al. address human body modeling using a pose-driven conditional GAN that reconstructs body shape from clothed RGB imagery [45]. In the broader landscape of computer vision and human modeling, this contribution is relevant because it targets a practically important challenge: recovering body-related information without relying on multi-view capture or highly constrained settings. From a human–technology interaction perspective, such capabilities are relevant to avatar creation, ergonomics, virtual environments, and non-invasive body measurement.
Yalman et al. study phubbing through machine learning and forecasting approaches [44]. Their contribution reflects the growing intersection between AI, behavioral analytics, and digital well-being. Rather than relying only on descriptive or correlational analysis, they use predictive models to estimate future internet-usage trajectories, broadening the methodological repertoire available for studying technology-related social behavior.
Al-Eidarous et al. propose ExamVoice, an accessibility-oriented system for blind and visually impaired students in examination settings [41]. Its significance lies in turning accessibility into an end-to-end workflow problem involving secure delivery, independence, and equitable participation.
Resende Faria et al. present a multimodal framework combining speech emotion recognition and text sentiment analysis [39]. This work is strongly aligned with current directions in multimodal affective computing and highlights the value of probabilistic fusion for richer communication analysis.
Rosiak et al. examine UWB-based indoor positioning for navigation support [42]. Their contribution is important because it places localization performance in the context of actual assistive use, discussing not only accuracy but also the practical limitations of different sensing options.
Pavlopoulos et al. review AI tools for anxiety and depression management [43]. This paper offers a broader synthesis of how AI is being integrated into mental health support pathways and underscores both potential and risk.
Collectively, these papers demonstrate that future human–technology interaction is not defined by a single application or algorithmic family. Rather, it is characterized by convergence: intelligent systems increasingly combine data-driven modeling, multimodal sensing, and human-centered design in order to address accessibility, communication, behavior, and well-being.

7. Discussion

The six papers included in this Special Issue illustrate several important trends in intelligent human–technology interaction research. First, multimodal data integration is becoming increasingly important. Systems that combine visual, acoustic, and textual signals can capture richer information about human behavior and emotional states [14,15,18,19,20]. This notion is especially clear in the paper on multimodal affective communication [39], but the same logic extends to assistive navigation and body modeling, where combining signals or priors improves robustness under uncertainty.
Second, accessibility remains a central motivation for intelligent system design. Assistive technologies for visually impaired users demonstrate how AI can improve independence and participation [25,26,27,28,29]. The strongest papers in this domain are those that address real workflows and user constraints rather than only abstract technical performance. In this respect, the papers on ExamVoice and UWB navigation make particularly practical contributions [41,42].
Third, the integration of AI into mental health support tools highlights the growing interdisciplinary nature of human-centered AI research [30,31,32,33,35,36]. Mental health technologies require not only predictive or generative intelligence but also careful attention to safety, privacy, human oversight, and sustained engagement. The review by Pavlopoulos et al. is therefore timely because it situates AI tools within broader care pathways rather than treating them as standalone replacements for clinicians [43].
A further insight is that the Special Issue papers vary not only by domain but by the level at which they intervene in the interaction pipeline. Some focus on sensing and inference, such as body reconstruction or emotion recognition. Others focus on assistive workflows, behavioral forecasting, or higher-level support ecosystems. This diversity is a strength because it shows that future intelligent interaction will likely be built from linked layers: sensing, interpretation, adaptation, and support.
At the same time, the papers also reveal persistent limitations that are common across the wider field. Many intelligent interaction systems still depend on constrained datasets, indirect proxies, or relatively short evaluation cycles. Real-world deployment, cross-context generalization, and longitudinal validation remain ongoing challenges [11,14,15,25,33,36]. Taken together, the papers in this Special Issue illustrate how the field of human–technology interaction is evolving toward systems that are not only technically intelligent but also emotionally aware, accessible, and aligned with human-centered design principles.

8. Open Challenges and Research Directions

Despite significant advances in intelligent human–technology interaction, several important challenges remain.
Multimodal data integration: Although multimodal systems have demonstrated improved emotion recognition performance, integrating heterogeneous data sources remains challenging. Different modalities often have asynchronous temporal dynamics and varying levels of reliability. Missing data, noisy channels, imperfect alignment, and context dependence continue to complicate fusion strategies [14,15,18,19,20]. Future work should therefore invest not only in more complex models but in more robust and interpretable fusion mechanisms.
Real-world deployment: Many intelligent interaction systems are evaluated using controlled datasets that may not reflect real-world environments. Future work should prioritize longitudinal studies and real-world evaluations. This is particularly important for assistive systems, affect-aware interaction, and digital well-being technologies, wherein noise, variability, and user diversity are central rather than peripheral [25,26,27,32,35].
Ethical and trustworthy AI: As intelligent systems become capable of interpreting human emotions and behavioral patterns, concerns related to privacy, transparency, and fairness are becoming increasingly important. These concerns are intensified when such systems are used in healthcare, education, or other high-stakes environments [1,10,11,12,13]. Future interaction research must continue to integrate technical progress with governance, accountability, and user rights.
Accessibility and inclusive design: Although assistive technologies have improved significantly, many systems remain difficult to deploy in real-world settings due to infrastructure requirements, cost limitations, or insufficient user-centered evaluation. Inclusive design should be treated as a core scientific concern rather than a final implementation detail [26,27,28,29].
From prediction to support: A final challenge concerns the transition from detection or prediction to meaningful support. Many systems can classify, forecast, or infer user states, but fewer demonstrate how these capabilities lead to improved outcomes, safer decisions, or better experiences over time. This gap is visible across well-being, mental health, and assistive interaction research [30,31,32,33,35,36]. Bridging it will require closer collaboration among AI researchers, clinicians, designers, educators, and end users.

9. Conclusions

Human–technology interaction is entering a new era characterized by intelligent systems capable of understanding human behavior, emotions, and contextual environments. The research presented in this Special Issue illustrates how AI technologies can enhance accessibility, emotional understanding, and human well-being.
More broadly, the six contributions show that future intelligent interaction systems will be shaped not only by technical advances in machine learning but also by the extent to which those advances are embedded in human-centered, trustworthy, and inclusive design frameworks. In this respect, this Special Issue contributes to a larger movement in the field: a shift from isolated intelligent functions toward integrated systems that sense, interpret, and support people in more adaptive and socially meaningful ways.
Future work should continue exploring multimodal interaction models, human-centered AI frameworks, and ethical design principles that ensure intelligent technologies remain beneficial to society. As the field matures, its success will increasingly depend on whether it can combine technical innovation with accessibility, transparency, and sustained real-world value.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Shneiderman, B. Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. Int. J. Hum.-Comput. Interact. 2020, 36, 495–504. [Google Scholar] [CrossRef]
  2. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  3. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008. [Google Scholar]
  4. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  5. Lane, N.D.; Georgiev, P.; Qendro, L. DeepEar: Robust Smartphone Audio Sensing in Unconstrained Acoustic Environments Using Deep Learning. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Osaka, Japan, 7–11 September 2015. [Google Scholar] [CrossRef]
  6. Abowd, G.D.; Mynatt, E.D. Charting Past, Present, and Future Research in Ubiquitous Computing. ACM Trans. Comput.-Hum. Interact. 2000, 7, 29–58. [Google Scholar] [CrossRef]
  7. Weiser, M. The Computer for the 21st Century. Sci. Am. 1991, 265, 94–104. [Google Scholar] [CrossRef]
  8. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson: London, UK, 2021. [Google Scholar]
  9. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  10. Doshi-Velez, F.; Kim, B. Towards a Rigorous Science of Interpretable Machine Learning. arXiv 2017, arXiv:1702.08608. [Google Scholar] [CrossRef]
  11. Jobin, A.; Ienca, M.; Vayena, E. The Global Landscape of AI Ethics Guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
  12. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [PubMed]
  13. European Commission High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI; European Commission: Brussels, Belgium, 2019.
  14. Pan, B.; Hirota, K.; Jia, Z.; Dai, Y. A Review of Multimodal Emotion Recognition from Datasets, Preprocessing, Features, and Fusion Methods. Neurocomputing 2023, 530, 116–138. [Google Scholar] [CrossRef]
  15. Ramaswamy, M.; Palaniswamy, S. Multimodal Emotion Recognition: A Comprehensive Review, Trends, and Challenges. Wires Data Min. Knowl. Discov. 2024, 14, e1563. [Google Scholar] [CrossRef]
  16. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
  17. Cambria, E.; White, B. Jumping NLP Curves: A Review of Natural Language Processing Research. IEEE Comput. Intell. Mag. 2014, 9, 48–57. [Google Scholar] [CrossRef]
  18. Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A Review of Affective Computing: From Unimodal Analysis to Multimodal Fusion. Inf. Fusion 2017, 37, 98–125. [Google Scholar] [CrossRef]
  19. Calvo, R.A.; D’Mello, S. Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications. IEEE Trans. Affect. Comput. 2010, 1, 18–37. [Google Scholar] [CrossRef]
  20. D’Mello, S.; Kory, J. A Review and Meta-Analysis of Multimodal Affect Detection Systems. ACM Comput. Surv. 2015, 47, 43. [Google Scholar] [CrossRef]
  21. Martinez, B.; Valstar, M.F.; Jiang, B.; Pantic, M. Automatic Analysis of Facial Actions: A Survey. IEEE Trans. Affect. Comput. 2019, 10, 325–347. [Google Scholar] [CrossRef]
  22. Cambria, E.; Poria, S.; Hazarika, D.; Kwok, K. SenticNet 5: Discovering Conceptual Primitives for Sentiment Analysis by Means of Context Embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar] [CrossRef]
  23. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  24. Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  25. Alarifi, A.; Al-Salman, A.; Alsaleh, M.; Alnafessah, A.; Al-Hadhrami, S.; Al-Ammar, M.A.; Al-Khalifa, H.S. Ultra Wideband Indoor Positioning Technologies: Analysis and Recent Advances. Sensors 2016, 16, 707. [Google Scholar] [CrossRef] [PubMed]
  26. Gudauskis, M.; Žvironas, A.; Plikynas, D. Indoor Navigation Systems for Visually Impaired Persons: A Comprehensive Review of Technologies and Approaches. In Mobility of Visually Impaired People. Signals and Communication Technology; Pissaloux, E., Velazquez, R., Eds.; Springer: Cham, Switzerland, 2026. [Google Scholar] [CrossRef]
  27. Guerrero, L.A.; Vásquez, F.; Ochoa, S.F. An Indoor Navigation System for the Visually Impaired. Sensors 2012, 12, 8236–8258. [Google Scholar] [CrossRef]
  28. Harper, S.; Yesilada, Y. Web Accessibility: A Foundation for Research; Springer: London, UK, 2008. [Google Scholar]
  29. World Wide Web Consortium (W3C). Web Content Accessibility Guidelines (WCAG) 2.1; W3C Recommendation; World Wide Web Consortium (W3C): Wakefield, MA, USA, 2018. [Google Scholar]
  30. Fitzpatrick, K.K.; Darcy, A.; Vierhile, M. Delivering Cognitive Behavior Therapy to Young Adults with Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Ment. Health 2017, 4, e19. [Google Scholar] [CrossRef]
  31. Fulmer, R.; Joerin, A.; Gentile, B.; Lakerink, L.; Rauws, M. Using Psychological Artificial Intelligence (Tess) to Relieve Symptoms of Depression and Anxiety: Randomized Controlled Trial. JMIR Ment. Health 2018, 5, e64. [Google Scholar] [CrossRef]
  32. Torous, J.; Bucci, S.; Bell, I.H.; Kessing, L.V.; Faurholt-Jepsen, M.; Whelan, P.; Carvalho, A.F.; Keshavan, M.; Linardon, J.; Firth, J. The Growing Field of Digital Psychiatry: Current Evidence and the Future of Apps, Social Media, Chatbots, and Virtual Reality. World Psychiatry 2021, 20, 318–335. [Google Scholar] [CrossRef]
  33. Mohr, D.C.; Schueller, S.M.; Montague, E.; Burns, M.N.; Rashidi, P. The Behavioral Intervention Technology Model: An Integrated Conceptual and Technological Framework for eHealth and mHealth Interventions. J. Med. Internet Res. 2014, 16, e146. [Google Scholar] [CrossRef]
  34. Topol, E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again; Basic Books: New York, NY, USA, 2019. [Google Scholar]
  35. Beam, A.L.; Kohane, I.S. Big Data and Machine Learning in Health Care. JAMA 2018, 319, 1317–1318. [Google Scholar] [CrossRef]
  36. Obermeyer, Z.; Emanuel, E.J. Predicting the Future—Big Data, Machine Learning, and Clinical Medicine. N. Engl. J. Med. 2016, 375, 1216–1219. [Google Scholar] [CrossRef]
  37. Calvo, R.A.; Peters, D. Positive Computing: Technology for Wellbeing and Human Potential; MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
  38. Topol, E.J. The Patient Will See You Now: The Future of Medicine Is in Your Hands; Basic Books: New York, NY, USA, 2015. [Google Scholar]
  39. Resende Faria, D.; Weinberg, A.; Ayrosa, P. Multimodal Affective Communication Analysis: Fusing Speech Emotion and Text Sentiment Using Machine Learning. Appl. Sci. 2024, 14, 6631. [Google Scholar] [CrossRef]
  40. Ekman, P. Emotions Revealed; Times Books: New York, NY, USA, 2003. [Google Scholar]
  41. Al-Eidarous, W.; Alsiyami, A.; Aljabri, M.; Alqethami, S.; Almutanni, B. ExamVoice: Innovative Solutions for Improving Exam Accessibility for Blind and Visually Impaired Students in Saudi Arabia. Appl. Sci. 2024, 14, 8813. [Google Scholar] [CrossRef]
  42. Rosiak, M.; Kawulok, M.; Maćkowski, M. The Effectiveness of UWB-Based Indoor Positioning Systems for the Navigation of Visually Impaired Individuals. Appl. Sci. 2024, 14, 5646. [Google Scholar] [CrossRef]
  43. Pavlopoulos, A.; Rachiotis, T.; Maglogiannis, I. An Overview of Tools and Technologies for Anxiety and Depression Management Using AI. Appl. Sci. 2024, 14, 9068. [Google Scholar] [CrossRef]
  44. Yalman, A.; Arık, M.; Kayakuş, M.; Karaduman, M.; Karaduman, S.; Yiğit Açıkgöz, F.; Livberber, T.; Kayan, F. Predicting Phubbing Through Machine Learning: A Study of Internet Usage and Health Risks. Appl. Sci. 2025, 15, 1157. [Google Scholar] [CrossRef]
  45. Jang, J.; Byeon, J.; Jung, D.; Chang, J.; Youm, S. Pose-Driven Body Shape Prediction Algorithm Based on Conditional GAN. Appl. Sci. 2025, 15, 7643. [Google Scholar] [CrossRef]
Figure 1. Conceptual framework of human-centered multimodal interaction. The figure illustrates future intelligent interaction systems as linked layers: multimodal sensing (e.g., pose, audio, and text), intelligent interpretation (e.g., affective computing and behavioral analytics), human-centered design (e.g., ethics, transparency, and trust), and applied impact across domains such as accessibility, healthcare, and education.
Table 1. Summary of contributions to the Special Issue “Future Human–Technology Interactions and Their Intelligent Applications”.
| Authors | Primary Domain | Methodology | Key Contribution |
| --- | --- | --- | --- |
| Jang et al. [45] | Human Modeling | Pose-Driven GAN | Reconstructing body shape from clothed RGB images for non-invasive measurement. |
| Yalman et al. [44] | Digital Well-Being | Predictive ML | Forecasting technology-related behavioral risks through usage analytics. |
| Al-Eidarous et al. [41] | Accessibility | Voice-Based Interface | Secure end-to-end examination workflow for blind students. |
| Resende Faria et al. [39] | Affective Computing | Multimodal Fusion | Probabilistic integration of speech and text for richer communication analysis. |
| Rosiak et al. [42] | Assistive Tech | UWB Positioning | Evaluation of indoor localization trade-offs in real-world assistive navigation. |
| Pavlopoulos et al. [43] | Mental Health | Review and Synthesis | Landscape of AI tools for anxiety/depression management and clinical pathways. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.