Search Results (31)

Search Parameters:
Keywords = facial driven design

18 pages, 8141 KiB  
Review
AI-Driven Aesthetic Rehabilitation in Edentulous Arches: Advancing Symmetry and Smile Design Through Medit SmartX and Scan Ladder
by Adam Brian Nulty
J. Aesthetic Med. 2025, 1(1), 4; https://doi.org/10.3390/jaestheticmed1010004 - 1 Aug 2025
Abstract
The integration of artificial intelligence (AI) and advanced digital workflows is revolutionising full-arch implant dentistry, particularly for geriatric patients with edentulous and atrophic arches, for whom achieving both prosthetic passivity and optimal aesthetic outcomes is critical. This narrative review evaluates current challenges in intraoral scanning accuracy—such as scan distortion, angular deviation, and cross-arch misalignment—and presents how innovations like the Medit SmartX AI-guided workflow and the Scan Ladder system can significantly enhance precision in implant position registration. These technologies mitigate stitching errors by using real-time scan body recognition and auxiliary geometric references, yielding mean RMS trueness values as low as 11–13 µm, comparable to dedicated photogrammetry systems. AI-driven prosthetic design further aligns implant-supported restorations with facial symmetry and smile aesthetics, prioritising predictable midline and occlusal plane control. Early clinical data indicate that such tools can reduce prosthetic misfits to under 20 µm and lower complication rates related to passive fit, while shortening scan times by up to 30% compared to conventional workflows. This is especially valuable for elderly individuals who may not tolerate multiple lengthy adjustments. Additionally, emerging AI applications in design automation, scan validation, and patient-specific workflow adaptation continue to evolve, supporting more efficient and personalised digital prosthodontics. In summary, AI-enhanced scanning and prosthetic workflows do not merely meet functional demands but also elevate aesthetic standards in complex full-arch rehabilitations. The synergy of AI and digital dentistry presents a transformative opportunity to consistently deliver superior precision, passivity, and facial harmony for edentulous implant patients. Full article
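
The RMS trueness values cited above are root-mean-square deviations between an aligned test scan and a reference geometry. A minimal sketch of that calculation is shown below, assuming pre-registered, corresponding point sets; the helper name and synthetic data are illustrative, not the review's evaluation pipeline:

```python
import numpy as np

def rms_trueness(test_points: np.ndarray, reference_points: np.ndarray) -> float:
    """RMS deviation between corresponding scan points (same units as input, e.g. mm)."""
    # test_points and reference_points are (N, 3) arrays of already-aligned,
    # corresponding surface points; real workflows first register the meshes
    # (e.g. best-fit/ICP alignment) before measuring trueness.
    deviations = np.linalg.norm(test_points - reference_points, axis=1)
    return float(np.sqrt(np.mean(deviations ** 2)))

# Toy example: a scan offset by 12 µm (0.012 mm) from the reference everywhere
reference = np.random.rand(1000, 3) * 10.0        # mm
test = reference + np.array([0.012, 0.0, 0.0])    # uniform 12 µm shift
print(f"RMS trueness: {rms_trueness(test, reference) * 1000:.1f} µm")  # ~12.0 µm
```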

21 pages, 2869 KiB  
Article
Multimodal Feature-Guided Audio-Driven Emotional Talking Face Generation
by Xueping Wang, Yuemeng Huo, Yanan Liu, Xueni Guo, Feihu Yan and Guangzhe Zhao
Electronics 2025, 14(13), 2684; https://doi.org/10.3390/electronics14132684 - 2 Jul 2025
Viewed by 532
Abstract
Audio-driven emotional talking face generation aims to generate talking face videos with rich facial expressions and temporal coherence. Current diffusion model-based approaches predominantly depend on either single-label emotion annotations or external video references, which often struggle to capture the complex relationships between modalities, resulting in less natural emotional expressions. To address these issues, we propose MF-ETalk, a multimodal feature-guided method for emotional talking face generation. Specifically, we design an emotion-aware multimodal feature disentanglement and fusion framework that leverages Action Units (AUs) to disentangle facial expressions and models the nonlinear relationships among AU features using a residual encoder. Furthermore, we introduce a hierarchical multimodal feature fusion module that enables dynamic interactions among audio, visual cues, AUs, and motion dynamics. This module is optimized through global motion modeling, lip synchronization, and expression subspace learning, enabling full-face dynamic generation. Finally, an emotion-consistency constraint module is employed to refine the generated results and ensure the naturalness of expressions. Extensive experiments on the MEAD and HDTF datasets demonstrate that MF-ETalk outperforms state-of-the-art methods in both expression naturalness and lip-sync accuracy. For example, it achieves an FID of 43.052 and E-FID of 2.403 on MEAD, along with strong synchronization performance (LSE-C of 6.781, LSE-D of 7.962), confirming the effectiveness of our approach in producing realistic and emotionally expressive talking face videos. Full article
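
As an illustration of the residual encoding of Action Unit (AU) features mentioned above, the following PyTorch sketch encodes AU intensity vectors with residual blocks; the layer sizes and depth are assumptions, not the MF-ETalk architecture itself:

```python
import torch
import torch.nn as nn

class ResidualAUEncoder(nn.Module):
    """Toy residual encoder for Action Unit (AU) intensity vectors (illustrative sizes)."""
    def __init__(self, num_aus: int = 17, hidden: int = 128, depth: int = 3):
        super().__init__()
        self.input_proj = nn.Linear(num_aus, hidden)
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
            for _ in range(depth)
        ])

    def forward(self, au: torch.Tensor) -> torch.Tensor:
        h = self.input_proj(au)
        for block in self.blocks:
            h = h + block(h)   # residual connection keeps the low-level AU signal intact
        return h

# Example: a batch of 8 frames, 17 AU intensities each
features = ResidualAUEncoder()(torch.rand(8, 17))
print(features.shape)  # torch.Size([8, 128])
```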

30 pages, 6208 KiB  
Article
Clinical Safety and Efficacy of Hyaluronic Acid–Niacinamide–Tranexamic Acid Injectable Hydrogel for Multifactorial Facial Skin Quality Enhancement with Dark Skin Lightening
by Sarah Hsin, Kelly Lourenço, Alexandre Porcello, Michèle Chemali, Cíntia Marques, Wassim Raffoul, Marco Cerrano, Lee Ann Applegate and Alexis E. Laurent
Gels 2025, 11(7), 495; https://doi.org/10.3390/gels11070495 - 26 Jun 2025
Viewed by 1402
Abstract
Facial aging is a complex process manifesting as skin hyperpigmentation, textural irregularities, and a diminished elasticity, hydration, and evenness of tone. The escalating demand for minimally invasive aesthetic interventions has driven the development of advanced hydrogel-based injectable formulations. This clinical study assessed the safety and efficacy of Hydragel A1, an injectable hydrogel containing hyaluronic acid (HA), niacinamide, and tranexamic acid (TXA), designed to simultaneously address multiple facets of facial skin aging. A cohort of 49 female participants underwent a series of objective and subjective assessments, including the Global Aesthetic Improvement Scale (GAIS), instrumental measurements (Antera 3D, Chromameter, Cutometer, Dermascan, Corneometer), and standardized photographic documentation at baseline (Day 0) and 14, 28, and 70 days post-treatment. The results demonstrated statistically significant improvements in skin hydration, texture, elasticity, and pigmentation following Hydragel A1 administration. Notably, no serious adverse events or significant injection site reactions were observed, confirming the favorable safety profile of the investigated device. Collectively, these findings underscore the potential of a combined HA, niacinamide, and TXA injectable formulation to provide a comprehensive approach to facial skin rejuvenation, effectively targeting multiple aging-related mechanisms. Full article
(This article belongs to the Section Gel Applications)
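
For readers wanting to reproduce this kind of baseline-versus-follow-up comparison, a hedged sketch of a paired test is given below; the abstract does not specify which statistical tests were used, so both the choice of test and the readings are placeholders:

```python
import numpy as np
from scipy.stats import ttest_rel

# Fabricated before/after skin-hydration readings (arbitrary Corneometer-style units);
# illustrative only, not the study's data or its actual statistical methodology.
baseline = np.array([42.0, 38.5, 45.1, 40.2, 39.8, 44.0])
day_70   = np.array([48.3, 43.9, 50.2, 46.1, 45.0, 49.5])

t_stat, p_value = ttest_rel(day_70, baseline)
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.4f}")
```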

23 pages, 1664 KiB  
Article
Seeing the Unseen: Real-Time Micro-Expression Recognition with Action Units and GPT-Based Reasoning
by Gabriela Laura Sălăgean, Monica Leba and Andreea Cristina Ionica
Appl. Sci. 2025, 15(12), 6417; https://doi.org/10.3390/app15126417 - 6 Jun 2025
Viewed by 1235
Abstract
This paper presents a real-time system for the detection and classification of facial micro-expressions, evaluated on the CASME II dataset. Micro-expressions are brief and subtle indicators of genuine emotions, posing significant challenges for automatic recognition due to their low intensity, short duration, and inter-subject variability. To address these challenges, the proposed system integrates advanced computer vision techniques, rule-based classification grounded in the Facial Action Coding System, and artificial intelligence components. The architecture employs MediaPipe for facial landmark tracking and action unit extraction, expert rules to resolve common emotional confusions, and deep learning modules for optimized classification. Experimental validation demonstrated a classification accuracy of 93.30% on CASME II, highlighting the effectiveness of the hybrid design. The system also incorporates mechanisms for amplifying weak signals and adapting to new subjects through continuous knowledge updates. These results confirm the advantages of combining domain expertise with AI-driven reasoning to improve micro-expression recognition. The proposed methodology has practical implications for various fields, including clinical psychology, security, marketing, and human-computer interaction, where the accurate interpretation of emotional micro-signals is essential. Full article
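
As a rough illustration of the landmark-based feature extraction step described above, the sketch below uses MediaPipe FaceMesh to derive a crude brow-raise proxy; the landmark indices are illustrative choices, and the paper's expert rules and thresholds are not reproduced:

```python
import cv2
import mediapipe as mp

# Illustrative sketch: extract face-mesh landmarks and a crude brow-raise proxy
# (a stand-in for an Action-Unit-style feature).
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)

image = cv2.imread("frame.png")  # any face image (placeholder path)
results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    lm = results.multi_face_landmarks[0].landmark
    # Normalized y decreases upward, so a larger gap means a raised brow.
    brow_y = lm[105].y        # a point on the left eyebrow (illustrative index)
    upper_lid_y = lm[159].y   # a point on the left upper eyelid (illustrative index)
    brow_raise_proxy = upper_lid_y - brow_y
    print(f"brow-raise proxy: {brow_raise_proxy:.4f}")
```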

14 pages, 2750 KiB  
Article
Subjective Evaluation of Generative AI-Driven Dialogues in Paired Dyadic and Topic-Sharing Triadic Interaction Structures
by Kaori Abe, Changqin Quan, Sheng Cao and Zhiwei Luo
Appl. Sci. 2025, 15(9), 5092; https://doi.org/10.3390/app15095092 - 3 May 2025
Viewed by 603
Abstract
As the linguistic capabilities of dialogue systems improve, the importance of how they interact with humans and build trustworthy relationships is increasing. This study investigated the effect of interaction structures in a generative AI-driven dialogue system to improve relationships through interactions. The dialogue system communicated with subjects in natural language via voice and included a facial expression function. The settings of dyadic and triadic interaction structures were applied to the system. The one-to-one dyadic interaction and triadic interaction with joint attention to a topic were designed following the developmental stages of children’s social communication ability. Subjective evaluations of the dialogues and the system were conducted through a questionnaire. As a result, positive evaluations were based on well-constructed structures. The system’s inappropriate behavior under failed structures reduced the quality of the dialogues and worsened the evaluation of the system. The interaction structures in the system settings needed to match the structures intended by the subjects, whether the structures were dyadic or triadic. Under the matching and successful construction, the system fully demonstrated its dialogue capability and behaved pleasantly with the subjects. By switching interaction structures to adapt to users’ demands, system behavior becomes more appropriate for users. Full article

38 pages, 2098 KiB  
Review
Rethinking Poultry Welfare—Integrating Behavioral Science and Digital Innovations for Enhanced Animal Well-Being
by Suresh Neethirajan
Poultry 2025, 4(2), 20; https://doi.org/10.3390/poultry4020020 - 29 Apr 2025
Viewed by 2191
Abstract
The relentless drive to meet global demand for poultry products has pushed for rapid intensification in chicken farming, dramatically boosting efficiency and yield. Yet, these gains have exposed a host of complex welfare challenges that have prompted scientific scrutiny and ethical reflection. In this review, I critically evaluate recent innovations aimed at mitigating such concerns by drawing on advances in behavioral science and digital monitoring and insights into biological adaptations. Specifically, I focus on four interconnected themes: First, I spotlight the complexity of avian sensory perception—encompassing vision, auditory capabilities, olfaction, and tactile faculties—to underscore how lighting design, housing configurations, and enrichment strategies can better align with birds’ unique sensory worlds. Second, I explore novel tools for gauging emotional states and cognition, ranging from cognitive bias tests to developing protocols for identifying pain or distress based on facial cues. Third, I examine the transformative potential of computer vision, bioacoustics, and sensor-based technologies for the continuous, automated tracking of behavior and physiological indicators in commercial flocks. Fourth, I assess how data-driven management platforms, underpinned by precision livestock farming, can deploy real-time insights to optimize welfare on a broad scale. Recognizing that climate change and evolving production environments intensify these challenges, I also investigate how breeds resilient to extreme conditions might open new avenues for welfare-centered genetic and management approaches. While the adoption of cutting-edge techniques has shown promise, significant hurdles persist regarding validation, standardization, and commercial acceptance. I conclude that truly sustainable progress hinges on an interdisciplinary convergence of ethology, neuroscience, engineering, data analytics, and evolutionary biology—an integrative path that not only refines welfare assessment but also reimagines poultry production in ethically and scientifically robust ways. Full article

28 pages, 1277 KiB  
Article
Shame Regulation in Learning: A Double-Edged Sword
by Tanmay Sinha, Fan Wang and Manu Kapur
Educ. Sci. 2025, 15(4), 502; https://doi.org/10.3390/educsci15040502 - 17 Apr 2025
Viewed by 1238
Abstract
Previous research and classroom practices have focused on dispelling shame, assuming that it negatively impacts self-efficacy and performance, and overlook the potential for shame to facilitate learning. To investigate this gap, we designed an intervention with 132 tertiary education students (45.46% male, 64.4% European ethnicity) spanning diverse undergraduate majors to show how and why designing for experiences of shame and appropriately regulating them can differentially impact learning. Shame was induced through autobiographical recall, imagination, and failure-driven problem-solving before randomly assigning students to three conditions: two with explicit tips for either decreasing shame or maintaining shame (experimental groups) and one with no-regulation tips (control). Students worked on an introductory data science problem deliberately designed to lead to failure before receiving canonical instruction. Manipulation checks triangulating self-reported and facial expression analysis data suggested that shame was successfully regulated in the intended direction, depending on the condition. Our results, drawing on mixed-methods analyses, further suggested that relative to students decreasing shame, those who maintained shame during initial problem-solving had (i) similar post-test performance on a non-isomorphic question and improved performance on the transfer question, evidenced by accuracy in solving applied data science and inference tasks; (ii) complete reasoning across all post-test questions, as evidenced by elaborations justifying the usage of graphical and numerical representations across those tasks; and (iii) use of superior emotion regulation strategies focused on deploying attention to the problem and reappraising its inherently challenging nature with an approach orientation, as evidenced by a higher frequency of such codes derived from self-reported qualitative data during the intervention. Decreasing shame was as effective as not engaging in explicit regulation. Our results suggest that teaching efforts should be channeled to facilitate experiencing emotions that are conducive to goals, whether they feel pleasurable or not, which may inevitably involve emoting both positive and negative (e.g., shame) in moderation. However, it is paramount that emotional experiences are not merely seen by educators as tools for improved content learning but as an essential part of holistic student development. We advocate for the deliberate design of learning experiences that support, rather than overshadow, students’ emotional growth. Full article

22 pages, 3427 KiB  
Article
A Multimodal Artificial Intelligence Model for Depression Severity Detection Based on Audio and Video Signals
by Liyuan Zhang, Shuai Zhang, Xv Zhang and Yafeng Zhao
Electronics 2025, 14(7), 1464; https://doi.org/10.3390/electronics14071464 - 4 Apr 2025
Viewed by 1558
Abstract
In recent years, artificial intelligence (AI) has increasingly utilized speech and video signals for emotion recognition, facial recognition, and depression detection, playing a crucial role in mental health assessment. However, the AI-driven research on detecting depression severity remains limited, and the existing models are often too large for lightweight deployment, restricting their real-time monitoring capabilities, especially in resource-constrained environments. To address these challenges, this study proposes a lightweight and accurate multimodal method for detecting depression severity, aiming to provide effective support for smart healthcare systems. Specifically, we design a multimodal detection network based on speech and video signals, enhancing the recognition of depression severity by optimizing the cross-modal fusion strategy. The model leverages Long Short-Term Memory (LSTM) networks to capture long-term dependencies in speech and visual sequences, effectively extracting dynamic features associated with depression. Considering the behavioral differences of respondents when interacting with human versus robotic interviewers, we train two separate sub-models and fuse their outputs using a Mixture of Experts (MOE) framework capable of modeling uncertainty, thereby suppressing the influence of low-confidence experts. In terms of the loss function, the traditional Mean Squared Error (MSE) is replaced with Negative Log-Likelihood (NLL) to better model prediction uncertainty and enhance robustness. The experimental results show that the improved AI model achieves an accuracy of 83.86% in depression severity recognition. The model’s floating-point operations per second (FLOPs) reached 0.468 GFLOPs, with a parameter size of only 0.52 MB, demonstrating its compact size and strong performance. These findings underscore the importance of emotion and facial recognition in AI applications for mental health, offering a promising solution for real-time depression monitoring in resource-limited environments. Full article
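
The MSE-to-NLL substitution described above can be illustrated with a small PyTorch regression head that predicts a mean and a variance and is trained with a Gaussian negative log-likelihood; the dimensions and layer choices are assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

# Minimal sketch of the MSE -> Gaussian NLL swap: the head predicts a mean and a
# variance per sample, so low-confidence predictions are penalized differently
# than under plain MSE. Sizes are illustrative.
class UncertaintyHead(nn.Module):
    def __init__(self, in_dim: int = 64):
        super().__init__()
        self.mean = nn.Linear(in_dim, 1)
        self.log_var = nn.Linear(in_dim, 1)   # predict log-variance for numerical stability

    def forward(self, h: torch.Tensor):
        return self.mean(h).squeeze(-1), self.log_var(h).exp().squeeze(-1)

head = UncertaintyHead()
criterion = nn.GaussianNLLLoss()

features = torch.randn(16, 64)          # e.g. fused audio-video features
targets = torch.rand(16) * 24           # e.g. a depression severity score
mean, var = head(features)
loss = criterion(mean, targets, var)    # replaces nn.MSELoss()(mean, targets)
loss.backward()
print(float(loss))
```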

15 pages, 3317 KiB  
Article
Classification of Properties in Human-like Dialogue Systems Using Generative AI to Adapt to Individual Preferences
by Kaori Abe, Changqin Quan, Sheng Cao and Zhiwei Luo
Appl. Sci. 2025, 15(7), 3466; https://doi.org/10.3390/app15073466 - 21 Mar 2025
Cited by 1 | Viewed by 726
Abstract
As the linguistic capabilities of AI-based dialogue systems improve, their human-likeness is increasing, and their behavior no longer receives a universal evaluation. To better adapt to users, the consideration of individual preferences is required. In this study, the relationships between the properties of a human-like dialogue system and dialogue evaluations were investigated using hierarchical cluster analysis for individual subjects. The dialogue system driven by generative AI communicated with subjects in natural language via voice-based communication and featured a facial expression function. Subjective evaluations of the system and dialogues were conducted through a questionnaire. Based on the analysis results, the system properties were classified into two types: generally and individually relational to a positive evaluation of the dialogue. The former included inspiration, a sense of security, and collaboration, while the latter included a sense of distance, personality, and seriousness. Equipping the former properties is expected to improve dialogues for most users. The latter properties should be adjusted to individuals since they are evaluated based on individual preferences. A design approach in accordance with individuality could be useful for making human-like dialogue systems more comfortable for users. Full article
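
A toy sketch of the per-subject hierarchical cluster analysis named above is given below, using SciPy's agglomerative clustering on invented property-by-dialogue ratings; the data and cluster count are placeholders:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows are system properties, columns are one subject's questionnaire ratings
# across dialogues; the values are fabricated for illustration.
ratings = np.array([
    [4, 5, 4, 5],   # "inspiration"
    [5, 4, 5, 4],   # "sense of security"
    [4, 4, 5, 5],   # "collaboration"
    [2, 5, 1, 4],   # "sense of distance"
    [1, 4, 2, 5],   # "personality"
], dtype=float)

Z = linkage(ratings, method="ward")              # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram into two clusters
print(labels)  # e.g. [1 1 1 2 2]: generally vs. individually relational properties
```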

28 pages, 3886 KiB  
Article
Assessment and Improvement of Avatar-Based Learning System: From Linguistic Structure Alignment to Sentiment-Driven Expressions
by Aru Ukenova, Gulmira Bekmanova, Nazar Zaki, Meiram Kikimbayev and Mamyr Altaibek
Sensors 2025, 25(6), 1921; https://doi.org/10.3390/s25061921 - 19 Mar 2025
Viewed by 1008
Abstract
This research investigates the improvement of learning systems that utilize avatars by shifting from elementary language compatibility to emotion-driven interactions. An assessment of various instructional approaches indicated marked differences in overall effectiveness, with the system showing steady but slight improvements and little variation, suggesting it has the potential for consistent use. Analysis through one-way ANOVA identified noteworthy disparities in post-test results across different teaching strategies. However, the pairwise comparisons with Tukey’s HSD did not reveal significant group differences. The group variation and limited sample sizes probably affected statistical strength. Evaluation of effect size demonstrated that the traditional approach had an edge over the avatar-based method, with lessons recorded on video displaying more moderate distinctions. The innovative nature of the system might account for its initial lower effectiveness, as students could need some time to adjust. Participants emphasized the importance of emotional authenticity and cultural adaptation, including incorporating a Kazakh accent, to boost the system’s success. In response, the system was designed with sentiment-driven gestures and facial expressions to improve engagement and personalization. These findings show the potential of emotionally intelligent avatars to encourage more profound learning experiences and the significance of fine-tuning the system for widespread adoption in a modern educational context. Full article
(This article belongs to the Section Sensing and Imaging)
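
The one-way ANOVA followed by Tukey's HSD described above can be run with SciPy and statsmodels as sketched below; the post-test scores are fabricated placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative post-test scores for three instructional conditions (fabricated).
traditional = [78, 82, 75, 88, 80, 84]
video       = [74, 79, 73, 81, 77, 80]
avatar      = [70, 76, 72, 78, 74, 75]

f_stat, p_value = f_oneway(traditional, video, avatar)   # one-way ANOVA
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.3f}")

scores = np.concatenate([traditional, video, avatar])
groups = ["traditional"] * 6 + ["video"] * 6 + ["avatar"] * 6
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))     # pairwise Tukey HSD comparisons
```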

13 pages, 575 KiB  
Review
Advances in Digital Technologies in Dental Medicine: Enhancing Precision in Virtual Articulators
by Sofia Lobo, Inês Argolinha, Vanessa Machado, João Botelho, João Rua, Junying Li and José João Mendes
J. Clin. Med. 2025, 14(5), 1495; https://doi.org/10.3390/jcm14051495 - 23 Feb 2025
Cited by 1 | Viewed by 2733
Abstract
Precision in diagnosis is essential for achieving optimal outcomes in prosthodontics, orthodontics, and orthognathic treatments. Virtual articulators provide a sophisticated digital alternative to conventional methods, integrating intraoral scans, facial scans, and cone beam computed tomography (CBCT) to enhance treatment predictability. This review examines advancements in virtual articulator technology, including digital workflows, virtual facebow transfer, and occlusal analysis, with a focus on Artificial Intelligence (AI)-driven methodologies such as machine learning and artificial neural networks. The clinical implications, particularly in condylar guidance and sagittal condylar inclination, are investigated. By streamlining the acquisition and articulation of digital dental models, virtual articulators minimize material handling errors and optimize workflow efficiency. Advanced imaging techniques enable precise alignment of digital maxillary models within computer-aided design and computer-aided manufacturing systems (CAD/CAM), facilitating accurate occlusal simulations. However, challenges include potential distortions during digital file integration and the necessity for robust algorithms to enhance data superimposition accuracy. The adoption of virtual articulators represents a transformative advancement in digital dentistry, with promising implications for diagnostic precision and treatment outcomes. Nevertheless, further clinical validation is essential to ensure the reliable transfer of maxillary casts and refine digital algorithms. Future developments should prioritize the integration of AI to enhance predictive modeling, positioning virtual articulators as a standard tool in routine dental practice, thereby revolutionizing treatment planning and interdisciplinary collaboration. This review explores advancements in virtual articulators, focusing on their role in enhancing diagnostic precision, occlusal analysis, and treatment predictability. It examines digital workflows, AI-driven methodologies, and clinical applications while addressing challenges in data integration and algorithm optimization. Full article
(This article belongs to the Special Issue Clinical Advances in Dental Medicine and Oral Health)

17 pages, 3610 KiB  
Article
Multi-Level Feature Dynamic Fusion Neural Radiance Fields for Audio-Driven Talking Head Generation
by Wenchao Song, Qiong Liu, Yanchao Liu, Pengzhou Zhang and Juan Cao
Appl. Sci. 2025, 15(1), 479; https://doi.org/10.3390/app15010479 - 6 Jan 2025
Viewed by 1559
Abstract
Audio-driven cross-modal talking head generation has experienced significant advancement in the last several years, and it aims to generate a talking head video that corresponds to a given audio sequence. Out of these approaches, the NeRF-based method can generate videos featuring a specific person with more natural motion compared to the one-shot methods. However, previous approaches failed to distinguish the importance of different regions, resulting in the loss of information-rich region features. To alleviate the problem and improve video quality, we propose MLDF-NeRF, an end-to-end method for talking head generation, which can achieve better vector representation through multi-level feature dynamic fusion. Specifically, we designed two modules in MLDF-NeRF to enhance the cross-modal mapping ability between audio and different facial regions. We initially developed a multi-level tri-plane hash representation that uses three sets of tri-plane hash networks with varying resolutions of limitation to capture the dynamic information of the face more accurately. Then, we introduce the idea of multi-head attention and design an efficient audio-visual fusion module that explicitly fuses audio features with image features from different planes, thereby improving the mapping between audio features and spatial information. Meanwhile, the design helps to minimize interference from facial areas unrelated to audio, thereby improving the overall quality of the representation. The quantitative and qualitative results indicate that our proposed method can effectively generate talk heads with natural actions and realistic details. Compared with previous methods, it performs better in terms of image quality, lip sync, and other aspects. Full article
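
The audio-visual fusion idea described above, image-plane features attending to audio features through multi-head attention, can be sketched in PyTorch as follows; the dimensions and the single-layer design are assumptions rather than the MLDF-NeRF module itself:

```python
import torch
import torch.nn as nn

# Sketch of cross-modal fusion: spatial (plane) features query audio features
# via multi-head attention. Sizes are illustrative.
dim, heads = 64, 4
fusion = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)

plane_feats = torch.randn(1, 256, dim)   # e.g. features sampled from a tri-plane grid
audio_feats = torch.randn(1, 16, dim)    # e.g. per-window audio embeddings

fused, attn_weights = fusion(query=plane_feats, key=audio_feats, value=audio_feats)
print(fused.shape, attn_weights.shape)   # (1, 256, 64) and (1, 256, 16)
```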

16 pages, 3547 KiB  
Review
Fixed Full-Arch Implant-Supported Restorations: Techniques Review and Proposal for Improvement
by Florin-Octavian Froimovici, Cristian Corneliu Butnărașu, Marco Montanari and Mihai Săndulescu
Dent. J. 2024, 12(12), 408; https://doi.org/10.3390/dj12120408 - 13 Dec 2024
Cited by 2 | Viewed by 5154
Abstract
Full-arch zirconia restorations on implants have gained popularity due to zirconia’s strength and aesthetics, yet they are still associated with challenges like structural fractures, peri-implant complications, and design misfits. Advances in CAD/CAM and digital workflows offer potential improvements, but a technique that consistently addresses these issues in fixed, full-arch, implant-supported prostheses is needed. This novel technique integrates a facially and prosthetically driven treatment approach, which is divided into three phases: data acquisition, restoration design, and manufacturing/delivery. Digital tools, including intraoral scanning and photogrammetry, facilitate accurate implant positioning, while 3D design software enables functional and aesthetic validation before final milling. A dual software approach is used to reverse engineer a titanium bar from the final restoration design, ensuring a superior outcome to other protocols. The restoration incorporates a zirconia–titanium hybrid structure, optimizing strength, flexibility, and weight. The proposed workflow enhances restoration precision and predictability through a prosthetically driven treatment plan, by ensuring passivity and aligning with biological and mechanical principles to promote long-term stability. By starting with the proposed restoration design and reverse engineering the bar, while also allowing for flexibility in material and component choices, this technique accommodates both patient needs and financial considerations. This approach demonstrates potential for improving patient outcomes in full-arch implant restorations by minimizing complications associated with traditional methods. Further research is recommended to validate the technique’s efficacy and broaden its clinical applications. Full article

13 pages, 1482 KiB  
Article
Novel Low-Power Computing-In-Memory (CIM) Design for Binary and Ternary Deep Neural Networks by Using 8T XNOR SRAM
by Achyuth Gundrapally, Nader Alnatsheh and Kyuwon Ken Choi
Electronics 2024, 13(23), 4828; https://doi.org/10.3390/electronics13234828 - 6 Dec 2024
Viewed by 2009
Abstract
The increasing demand for high-performance and low-power hardware in artificial intelligence (AI) applications, such as speech recognition, facial recognition, and object detection, has driven the exploration of advanced memory designs. Convolutional neural networks (CNNs) and deep neural networks (DNNs) require intensive computational resources, leading to memory access times and power consumption challenges. To address these challenges, we propose the application of computing-in-memory (CIM) within FinFET-based 8T SRAM structures, specifically utilizing P-latch N-access (PLNA) and single-ended (SE) configurations. Our design significantly reduces power consumption by up to 56% in the PLNA configuration and 60% in the SE configuration compared to traditional FinFET SRAM designs. These reductions are achieved while maintaining competitive delay performance, making our approach a promising solution for implementing efficient and low-power AI hardware. Detailed simulations in 7 nm FinFET technology underscore the potential of these CIM-based SRAM structures in overcoming the computational bottlenecks associated with DNNs and CNNs. Full article
(This article belongs to the Special Issue Recent Advances in AI Hardware Design)
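
The core operation such XNOR-SRAM CIM arrays implement is a binarized multiply-accumulate: with weights and activations in {-1, +1}, multiplication reduces to XNOR and accumulation to a popcount. A minimal software sketch of that arithmetic (not the paper's circuit) is shown below:

```python
# Software sketch of the XNOR-popcount dot product that XNOR-SRAM CIM arrays
# compute in hardware: {-1, +1} values are encoded as bits {0, 1}.
def xnor_popcount_dot(w_bits: int, x_bits: int, n: int) -> int:
    """Dot product of two n-element {-1,+1} vectors packed as n-bit integers."""
    xnor = ~(w_bits ^ x_bits) & ((1 << n) - 1)   # 1 where the signs agree
    matches = bin(xnor).count("1")               # popcount of agreeing positions
    return 2 * matches - n                       # (+1 matches) minus (-1 mismatches)

# Example: w = [+1, -1, +1, -1], x = [+1, +1, -1, -1] -> dot = 1 - 1 - 1 + 1 = 0
w = 0b1010   # +1, -1, +1, -1 (MSB first)
x = 0b1100   # +1, +1, -1, -1
print(xnor_popcount_dot(w, x, 4))  # 0
```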

24 pages, 5034 KiB  
Perspective
AI Detection of Human Understanding in a Gen-AI Tutor
by Earl Woodruff
AI 2024, 5(2), 898-921; https://doi.org/10.3390/ai5020045 - 18 Jun 2024
Cited by 5 | Viewed by 4534
Abstract
Subjective understanding is a complex process that involves the interplay of feelings and cognition. This paper explores how computers can monitor a user’s sympathetic and parasympathetic nervous system activity in real-time to detect the nature of the understanding the user is experiencing as they engage with study materials. By leveraging advancements in facial expression analysis, transdermal optical imaging, and voice analysis, I demonstrate how one can identify the physiological feelings that indicate a user’s mental state and level of understanding. The mental state model, which views understandings as composed of assembled beliefs, values, emotions, and feelings, provides a framework for understanding the multifaceted nature of the emotion–cognition relationship. As learners progress through the phases of nascent understanding, misunderstanding, confusion, emergent understanding, and deep understanding, they experience a range of cognitive processes, emotions, and physiological responses that can be detected and analyzed by AI-driven assessments. Based on the above approach, I further propose the development of Abel Tutor. This AI-driven system uses real-time monitoring of physiological feelings to provide individualized, adaptive tutoring support designed to guide learners toward deep understanding. By identifying the feelings associated with each phase of understanding, Abel Tutor can offer targeted interventions, such as clarifying explanations, guiding questions, or additional resources, to help students navigate the challenges they encounter and promote engagement. The ability to detect and respond to a student’s emotional state in real-time can revolutionize the learning experience, creating emotionally resonant learning environments that adapt to individual needs and optimize educational outcomes. As we continue to explore the potential of AI-driven assessments of subjective understanding, it is crucial to ensure that these technologies are grounded in sound pedagogical principles and ethical considerations, ultimately empowering learners and facilitating the attainment of deep understanding and lifelong learning for advantaged and disadvantaged students. Full article
