Search Results (43)

Search Parameters:
Keywords = visual memory scanning

19 pages, 956 KB  
Article
The Real-World Early Neuroprotective Effects of Oral Citicoline Combination in Prodromal Dementia
by Aynur Özge, Ayhan Bingöl, Sevim Eyüboğlu, Ayşe İrem Can, Bahar Taşdelen, Ezgi Uluduz and Derya Uludüz
Nutrients 2026, 18(4), 595; https://doi.org/10.3390/nu18040595 - 11 Feb 2026
Viewed by 427
Abstract
Background/Objectives: Early intervention in the prodromal stages of dementia is a primary focus of contemporary research, as delaying clinical progression may have a substantial public health impact. Citicoline, an endogenous precursor of phosphatidylcholine and acetylcholine, has been proposed as a nutritional compound with potential relevance to multiple cognitive domains. However, real-world evidence regarding its specific contributions in prodromal dementia populations is limited. This study was conducted to examine cognitive, functional, and emotional outcomes associated with the use of an oral citicoline combined preparation in individuals with prodromal dementia and early Alzheimer’s type cognitive decline. Methods: This was a two-centre, retrospective, observational, real-world cohort study. A cohort of 100 patients receiving a combined oral citicoline preparation and 50 age-matched healthy controls were evaluated at baseline and followed for 6–9 months. Participants underwent comprehensive neuropsychological assessments that evaluated domains including executive function, attention, processing speed, working memory, visual-spatial and verbal memory, fluency, general cognition, and mood. Standardized instruments included Stroop indices, Trail Making Tests A/B, SDMT, SPART-based measures, SBST, fluency tasks, the Boston Naming Test, and MoCA. Statistical analyses included age-adjusted and education-level-stratified comparisons. Results: Use of the citicoline combined preparation was associated with improvements in several cognitive domains, including executive functions, processing speed, working memory, visual-spatial memory, and both semantic and episodic fluency (all p < 0.05). Functional memory scanning and global cognition also showed improvement over the observation period. Significant differences between groups were observed at baseline and follow-up for multiple cognitive indices (most p < 0.001). Mood outcomes were more favorable in the citicoline combined preparation group, with reductions in depressive and anxiety symptoms. Age-adjusted models identified age as an important covariate, and participants with lower educational levels demonstrated comparatively greater cognitive gains. Conclusions: In this real-world observational study, use of an oral citicoline combined preparation was associated with multidomain improvements in cognitive and mood-related outcomes in individuals with prodromal dementia/early Alzheimer-type decline. Given the observational design, these findings should be considered exploratory and require confirmation in prospective randomised controlled trials. Full article
(This article belongs to the Section Geriatric Nutrition)
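The "age-adjusted and education-level-stratified comparisons" mentioned in the abstract can be illustrated with a minimal sketch, assuming hypothetical variable names and synthetic data (this is not the authors' analysis code): an ANCOVA comparing a cognitive change score between groups with age as a covariate, then the same adjusted model repeated within education strata.

```python
# Illustrative age-adjusted group comparison (ANCOVA); variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "group": np.repeat(["citicoline", "control"], [100, 50]),  # hypothetical cohort sizes
    "age": rng.normal(70, 6, n).round(),
    "education": rng.choice(["low", "high"], n),
    "moca_change": rng.normal(0.5, 2.0, n),                    # placeholder outcome scores
})

# Age-adjusted comparison: model the change score on group with age as a covariate.
model = smf.ols("moca_change ~ C(group) + age", data=df).fit()
print(anova_lm(model, typ=2))                                  # age-adjusted F-test for the group effect

# Education-stratified comparisons: repeat the adjusted model within each stratum.
for level, sub in df.groupby("education"):
    strat = smf.ols("moca_change ~ C(group) + age", data=sub).fit()
    print(level, strat.params.filter(like="C(group)").to_dict())
```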

29 pages, 229050 KB  
Article
DiffusionNet++: A Robust Framework for High-Resolution 3D Dental Mesh Segmentation
by Kaixin Zhang, Changying Wang and Shengjin Wang
Appl. Sci. 2026, 16(3), 1415; https://doi.org/10.3390/app16031415 - 30 Jan 2026
Viewed by 248
Abstract
Accurate segmentation of 3D dental structures is essential for oral diagnosis, orthodontic planning, and digital dentistry. With the rapid advancement of 3D scanning and modeling technologies, high-resolution dental data have become increasingly common. However, existing approaches still struggle to process such high-resolution data efficiently. Current models often suffer from excessive parameter counts, slow inference, high computational overhead, and substantial GPU memory usage. These limitations compel many studies to downsample the input data to reduce training and inference costs—an operation that inevitably diminishes critical geometric details, blurs tooth boundaries, and compromises both fine-grained structural accuracy and model robustness. To address these challenges, this study proposes DiffusionNet++, an end-to-end segmentation framework capable of operating directly on raw high-resolution dental data. Building upon the standard DiffusionNet architecture, our method introduces a normal-enhanced multi-feature input strategy together with a lightweight SE channel-attention mechanism, enabling the model to effectively exploit local directional cues, curvature variations, and other higher-order geometric attributes while adaptively emphasizing discriminative feature channels. Experimental results demonstrate that the coordinates + normal feature configuration consistently delivers the best performance. DiffusionNet++ achieves substantial improvements in overall accuracy (OA), mean Intersection over Union (mIoU), and individual class IoU across all data types, while maintaining strong robustness and generalization on challenging cases, such as missing teeth and partially scanned data. Qualitative visualizations further corroborate these findings, showing superior boundary consistency, finer structural preservation, and enhanced recovery of incomplete regions. Overall, DiffusionNet++ offers an efficient, stable, and highly accurate solution for high-resolution 3D tooth segmentation, providing a powerful foundation for automated digital dentistry research and real-world clinical applications. Full article
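A minimal sketch of the two ingredients named above, under assumed tensor shapes (a batch of V vertices with C per-vertex channels): concatenating vertex coordinates with normals as a multi-feature input, and a lightweight squeeze-and-excitation (SE) channel-attention block. This is illustrative, not the DiffusionNet++ implementation.

```python
# Illustrative sketch: normal-enhanced input + SE channel attention for per-vertex features.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation over the channels of per-vertex features (B, V, C)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (B, V, C)
        squeeze = x.mean(dim=1)           # global average over vertices -> (B, C)
        scale = self.fc(squeeze)          # per-channel weights in (0, 1)
        return x * scale.unsqueeze(1)     # adaptively re-weight each feature channel

# Normal-enhanced multi-feature input: vertex coordinates + vertex normals.
coords  = torch.randn(2, 10000, 3)        # (B, V, 3) vertex positions
normals = torch.randn(2, 10000, 3)        # (B, V, 3) vertex normals
features = torch.cat([coords, normals], dim=-1)   # (B, V, 6)

se = SEBlock(channels=6)
reweighted = se(features)                 # same shape, discriminative channels emphasized
```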

20 pages, 2131 KB  
Article
Charting Early Brain Plasticity in Radiological Training: Functional Brain Reorganization During Early Radiological Expertise Acquisition
by Weilu Chai, Yuxin Bai, Jia Wu, Hongmei Wang, Jimin Liang, Xuemei Xie, Chenwang Jin and Minghao Dong
Brain Sci. 2025, 15(12), 1279; https://doi.org/10.3390/brainsci15121279 - 28 Nov 2025
Viewed by 565
Abstract
Background/Objectives: Radiological expertise draws on semantic knowledge and perceptual–cognitive mechanisms that support diagnostic reasoning. Early radiological training is a formative period when key cognitive processes begin to integrate. Nevertheless, how the brain pattern of early radiological expertise reorganizes during the first weeks of clinical exposure remains unknown, as prior work has relied mainly on cross-sectional designs comparing mature experts to beginners. Methods: We therefore conducted a longitudinal resting-state fMRI study in radiology interns (n = 43; 41 valid) scanned before and after short-term training. Behavioral performance improved significantly after training (p < 0.01). Regional homogeneity (ReHo) was computed for 246 Brainnetome ROIs for each subject. Results: Using a Support Vector Machine (SVM)-based recursive feature elimination (RFE) pipeline, 14 of these 246 features were identified as most discriminative, spanning regions involved in visual, semantic, memory, attentional, and decision-making processes. An SVM trained on these features effectively differentiated pre- and post-training brain states (training set: 86.67% accuracy, AUC = 0.97; validation set: 81.82% accuracy, AUC = 0.72). Conclusions: The observed neuroplastic changes provide direct evidence that multidimensional cognitive functions reorganize early in radiological expertise development and offer neural targets to inform evidence-based curriculum design, personalized training, and brain-targeted interventions (e.g., neuromodulation or neurofeedback) in radiology education. Full article
(This article belongs to the Special Issue EEG and fMRI Applications in Exploring Brain Activity)
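The SVM-based recursive feature elimination pipeline described above can be sketched with scikit-learn on a placeholder matrix of 246 regional ReHo features; this illustrates the technique rather than reproducing the authors' pipeline.

```python
# Illustrative SVM-RFE on 246 regional features (pre- vs. post-training labels).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(82, 246))            # 41 subjects x 2 sessions, 246 ReHo features (placeholder)
y = np.repeat([0, 1], 41)                 # 0 = pre-training scan, 1 = post-training scan

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# A linear SVM provides feature weights; RFE prunes down to the 14 most discriminative regions.
svm = SVC(kernel="linear")
rfe = RFE(estimator=svm, n_features_to_select=14, step=1).fit(X_tr, y_tr)
selected = np.flatnonzero(rfe.support_)   # indices of the retained ROIs

# Retrain on the selected features and evaluate on held-out scans.
clf = SVC(kernel="linear", probability=True).fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, selected])[:, 1])
print(f"held-out AUC on random placeholder data: {auc:.2f}")
```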

32 pages, 2758 KB  
Article
A Hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM)–Attention Model Architecture for Precise Medical Image Analysis and Disease Diagnosis
by Md. Tanvir Hayat, Yazan M. Allawi, Wasan Alamro, Salman Md Sultan, Ahmad Abadleh, Hunseok Kang and Aymen I. Zreikat
Diagnostics 2025, 15(21), 2673; https://doi.org/10.3390/diagnostics15212673 - 23 Oct 2025
Cited by 2 | Viewed by 2262
Abstract
Background: Deep learning (DL)-based medical image classification is becoming increasingly reliable, enabling physicians to make faster and more accurate decisions in diagnosis and treatment. A plethora of algorithms have been developed to classify and analyze various types of medical images. Among them, Convolutional Neural Networks (CNNs) have proven highly effective, particularly in medical image analysis and disease detection. Methods: To further enhance these capabilities, this research introduces MediVision, a hybrid DL-based model that integrates a vision backbone based on CNNs for feature extraction, capturing detailed patterns and structures essential for precise classification. These features are then processed through Long Short-Term Memory (LSTM), which identifies sequential dependencies to better recognize disease progression. An attention mechanism is then incorporated that selectively focuses on salient features detected by the LSTM, improving the model’s ability to highlight critical abnormalities. Additionally, MediVision utilizes a skip connection, merging attention outputs with LSTM outputs, along with a Grad-CAM heatmap to visualize the most important regions of the analyzed medical image and further enhance feature representation and classification accuracy. Results: Tested on ten diverse medical image datasets (including Alzheimer’s disease, breast ultrasound, blood cell, chest X-ray, chest CT scans, diabetic retinopathy, kidney diseases, bone fracture multi-region, retinal OCT, and brain tumor), MediVision consistently achieved classification accuracies above 95%, with a peak of 98%. Conclusions: The proposed MediVision model offers a robust and effective framework for medical image classification, improving interpretability, reliability, and automated disease diagnosis. To support research reproducibility, the codes and datasets used in this study have been made publicly available through an open-access repository. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
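The hybrid architecture described above can be sketched as follows (a simplified PyTorch illustration under assumed dimensions, not the released MediVision code): a small CNN backbone extracts feature maps, their spatial positions are treated as a sequence for an LSTM, an attention layer re-weights the LSTM outputs, and a skip connection merges the attended context with the LSTM output before classification.

```python
# Illustrative CNN -> LSTM -> attention -> skip-connection classifier (simplified).
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                               # small CNN backbone (feature extraction)
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)       # sequence over spatial positions
        self.attn = nn.Linear(hidden, 1)                        # scalar attention score per position
        self.head = nn.Linear(2 * hidden, num_classes)          # skip: attended context + last LSTM state

    def forward(self, x):                                # x: (B, 3, H, W)
        f = self.cnn(x)                                  # (B, 64, H/4, W/4)
        seq = f.flatten(2).transpose(1, 2)               # spatial positions as a sequence of 64-dim features
        out, _ = self.lstm(seq)                          # (B, T, hidden)
        weights = torch.softmax(self.attn(out), dim=1)   # (B, T, 1) attention over positions
        context = (weights * out).sum(dim=1)             # attended summary of the sequence
        merged = torch.cat([context, out[:, -1]], dim=1) # skip connection: attention + LSTM outputs
        return self.head(merged)

logits = HybridClassifier(num_classes=4)(torch.randn(2, 3, 64, 64))   # placeholder image batch
```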

34 pages, 1807 KB  
Article
Moving Towards Large-Scale Particle Based Fluid Simulation in Unity 3D
by Muhammad Waseem and Min Hong
Appl. Sci. 2025, 15(17), 9706; https://doi.org/10.3390/app15179706 - 3 Sep 2025
Viewed by 3590
Abstract
Large-scale particle-based fluid simulations present significant computational challenges, particularly in achieving interactive frame rates while maintaining visual quality. Unity3D’s widespread adoption in game development, VR/AR applications, and scientific visualization creates a unique need for efficient fluid simulation within its ecosystem. This paper presents a GPU-accelerated Smoothed Particle Hydrodynamics (SPH) framework implemented in Unity3D that effectively addresses these challenges through several key innovations. Unlike previous GPU-accelerated SPH implementations that typically struggle with scaling beyond 100,000 particles while maintaining real-time performance, we introduce a novel fusion of Count Sort with Parallel Prefix Scan for spatial hashing that transforms the traditionally expensive O(n²) neighborhood search into an efficient O(n) operation, significantly outperforming traditional GPU sorting algorithms in particle-based simulations. Our implementation leverages a Structure of Arrays (SoA) memory layout, optimized for GPU compute shaders, achieving 30–45% improved computation throughput over traditional Array of Structures approaches. Performance evaluations demonstrate that our method achieves throughput rates up to 168,600 particles/ms while maintaining consistent 5.7–6.0 ms frame times across varying particle counts from 10,000 to 1,000,000. The framework maintains interactive frame rates (>30 FPS) with up to 500,000 particles and remains responsive even at 1 million particles. Collision rates approaching 1.0 indicate near-optimal hash distribution, while the adaptive time stepping mechanism adds minimal computational overhead (2–5%) while significantly improving simulation stability. These innovations enable real-time, large-scale fluid simulations with applications spanning visual effects, game development, and scientific visualization. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data, 2nd Volume)
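A minimal NumPy sketch of the counting-sort-plus-prefix-scan neighborhood search described above (the framework itself runs this on GPU compute shaders with a Structure-of-Arrays layout; this is an illustrative CPU analogue): particles are hashed into grid cells, per-cell counts are prefix-scanned into cell start offsets, and particles are reordered so each cell's members are contiguous, making neighbor lookup linear rather than quadratic.

```python
# Illustrative CPU analogue of count-sort + prefix-scan spatial hashing for SPH neighbor search.
import numpy as np

h = 0.1                                        # smoothing radius = grid cell size
positions = np.random.rand(100_000, 3).astype(np.float32)

# 1. Hash each particle to a flat grid-cell index.
grid_dims = int(np.ceil(1.0 / h))
cells = np.floor(positions / h).astype(np.int64)
cell_ids = (cells[:, 0] * grid_dims + cells[:, 1]) * grid_dims + cells[:, 2]

# 2. Count particles per cell, then exclusive prefix scan -> start offset of each cell.
counts = np.bincount(cell_ids, minlength=grid_dims**3)
starts = np.concatenate(([0], np.cumsum(counts)[:-1]))

# 3. Counting sort: reorder particle indices so each cell's particles are contiguous.
order = np.argsort(cell_ids, kind="stable")    # stands in for the GPU scatter pass
sorted_ids = cell_ids[order]

def particles_in_cell(cid):
    """All particle indices stored in cell `cid` (constant-time range lookup after the sort)."""
    s = starts[cid]
    return order[s:s + counts[cid]]

# Neighbor candidates of particle 0: particles in its own cell (extend to the 27-cell stencil in practice).
print(particles_in_cell(cell_ids[0])[:10])
```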

29 pages, 38860 KB  
Article
Explainable Deep Ensemble Meta-Learning Framework for Brain Tumor Classification Using MRI Images
by Shawon Chakrabarty Kakon, Zawad Al Sazid, Ismat Ara Begum, Md Abdus Samad and A. S. M. Sanwar Hosen
Cancers 2025, 17(17), 2853; https://doi.org/10.3390/cancers17172853 - 30 Aug 2025
Cited by 2 | Viewed by 2006
Abstract
Background: Brain tumors can severely impair neurological function, leading to symptoms such as headaches, memory loss, motor coordination deficits, and visual disturbances. In severe cases, they may cause permanent cognitive damage or become life-threatening without early detection. Methods: To address this, we propose an interpretable deep ensemble model for tumor detection in Magnetic Resonance Imaging (MRI) by integrating pre-trained Convolutional Neural Networks—EfficientNetB7, InceptionV3, and Xception—using a soft voting ensemble to improve classification accuracy. The framework is further enhanced with a Light Gradient Boosting Machine as a meta-learner to increase prediction accuracy and robustness within a stacking architecture. Hyperparameter tuning is conducted using Optuna, and overfitting is mitigated through batch normalization, L2 weight decay, dropout, early stopping, and extensive data augmentation. Results: These regularization strategies significantly enhance the model’s generalization ability within the BR35H dataset. The framework achieves a classification accuracy of 99.83% on the MRI dataset of 3060 images. Conclusions: To improve interpretability and build clinical trust, Explainable Artificial Intelligence methods (Grad-CAM++, LIME, and SHAP) are employed to visualize the factors influencing model predictions, effectively highlighting tumor regions within MRI scans. This establishes a strong foundation for further advancements in radiology decision support systems. Full article
(This article belongs to the Section Methods and Technologies Development)
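The soft-voting and stacking strategy described above can be sketched with scikit-learn and LightGBM, assuming simple tabular stand-ins for the CNN backbones (this is not the authors' pipeline): three base classifiers are combined either by averaging their predicted probabilities or by feeding their out-of-fold probabilities into a LightGBM meta-learner.

```python
# Illustrative soft-voting + stacking ensemble with a LightGBM meta-learner.
# Simple classifiers on placeholder features stand in for the CNN backbones used in the paper.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))                 # placeholder image embeddings
y = rng.integers(0, 2, size=600)               # tumor / no-tumor labels (placeholder)

base = [
    ("mlp", MLPClassifier(max_iter=500)),      # stand-ins for EfficientNetB7 / InceptionV3 / Xception
    ("svc", SVC(probability=True)),
    ("lr", LogisticRegression(max_iter=1000)),
]

# Soft voting: average the predicted class probabilities of the base models.
voter = VotingClassifier(estimators=base, voting="soft")

# Stacking: out-of-fold base probabilities become inputs to the LightGBM meta-learner.
stacker = StackingClassifier(estimators=base,
                             final_estimator=LGBMClassifier(n_estimators=200),
                             stack_method="predict_proba", cv=5)

for name, model in [("soft voting", voter), ("LGBM stacking", stacker)]:
    print(name, cross_val_score(model, X, y, cv=3).mean())
```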

16 pages, 1431 KB  
Article
Assessing Smooth Pursuit Eye Movements Using Eye-Tracking Technology in Patients with Schizophrenia Under Treatment: A Pilot Study
by Luis Benigno Contreras-Chávez, Valdemar Emigdio Arce-Guevara, Luis Fernando Guerrero, Alfonso Alba, Miguel G. Ramírez-Elías, Edgar Roman Arce-Santana, Victor Hugo Mendez-Garcia, Jorge Jimenez-Cruz, Anna Maria Maddalena Bianchi and Martin O. Mendez
Sensors 2025, 25(16), 5212; https://doi.org/10.3390/s25165212 - 21 Aug 2025
Viewed by 3694
Abstract
Schizophrenia is a complex disorder that affects mental organization and cognitive functions, including concentration and memory. One notable manifestation of cognitive changes in schizophrenia is a diminished ability to scan and perform tasks related to visual inspection. Of the three evaluable aspects of ocular movements (saccadic, smooth pursuit, and fixation), smooth pursuit eye movement (SPEM) in particular involves the tracking of slow-moving objects and is closely related to attention, visual memory, and processing speed. However, evaluating smooth pursuit in clinical settings is challenging due to the technical complexities of detecting these movements, resulting in limited research and clinical application. This pilot study investigates whether quantitative metrics derived from eye-tracking data can distinguish between patients with schizophrenia under treatment and healthy controls. The study included nine healthy participants and nine individuals receiving treatment for schizophrenia. Gaze trajectories were recorded using an eye tracker during a controlled visual tracking task performed during a clinical visit. Spatiotemporal analysis of gaze trajectories was performed by evaluating three different features: polygonal area, colocalities, and direction difference. Subsequently, a support vector machine (SVM) was used to assess the separability between healthy individuals and those with schizophrenia based on the identified gaze trajectory features. The results show statistically significant differences between control subjects and subjects with schizophrenia for all the computed indices (p < 0.05) and high separability, achieving around 90% accuracy, sensitivity, and specificity. The results suggest the potential development of a valuable clinical tool for the evaluation of SPEM, offering utility in clinics to assess the efficacy of therapeutic interventions in individuals with schizophrenia. Full article
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)
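Two of the spatiotemporal gaze features named above can be illustrated with a small sketch (the exact definitions in the paper may differ): the polygonal area of an ordered gaze window via the shoelace formula, the mean frame-to-frame direction change, and an SVM trained on the resulting feature vectors.

```python
# Illustrative gaze-trajectory features (polygonal area, direction difference) + SVM classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def polygon_area(xy):
    """Shoelace formula over an ordered gaze window (N, 2)."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def mean_direction_difference(xy):
    """Mean absolute change in movement direction between successive gaze samples."""
    v = np.diff(xy, axis=0)
    ang = np.arctan2(v[:, 1], v[:, 0])
    d = np.diff(ang)
    return np.mean(np.abs(np.arctan2(np.sin(d), np.cos(d))))   # wrap differences to [-pi, pi]

rng = np.random.default_rng(0)
features, labels = [], []
for label in (0, 1):                           # 0 = control, 1 = schizophrenia (placeholder groups of nine)
    for _ in range(9):
        traj = np.cumsum(rng.normal(scale=1 + label, size=(500, 2)), axis=0)  # synthetic gaze path
        features.append([polygon_area(traj), mean_direction_difference(traj)])
        labels.append(label)

scores = cross_val_score(SVC(kernel="rbf"), np.array(features), labels, cv=3)
print("cross-validated accuracy on synthetic trajectories:", scores.mean())
```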

22 pages, 7118 KB  
Article
A Novel Natural Chromogenic Visual and Luminescent Sensor Platform for Multi-Target Analysis in Strawberries and Shape Memory Applications
by Hebat-Allah S. Tohamy
Foods 2025, 14(16), 2791; https://doi.org/10.3390/foods14162791 - 11 Aug 2025
Cited by 9 | Viewed by 1154
Abstract
Carboxymethyl cellulose (CMC) films, derived from sugarcane bagasse agricultural waste (SCB) incorporated with Betalains-nitrogen-doped carbon dots (Betalains-N–CQDs), derived from beet root waste (BR), offer a sustainable, smart and naked-eye sensor for strawberry packaging due to their excellent fluorescent and shape memory properties. These CMC-Betalains-N–CQDs aim to enhance strawberry preservation and safety by enabling visual detection of common food contaminants such as bacteria, fungi and Pb(II). Crucially, the CMC-Betalains-N–CQD film also exhibits excellent shape memory properties, capable of fixing various shapes under alkaline conditions and recovering its original form in acidic environments, thereby offering enhanced physical protection for delicate produce like strawberries. Optical studies reveal the Betalains-N–CQDs’ pH-responsive fluorescence, with distinct emission patterns observed across various pH levels, highlighting their potential for sensing applications. Scanning Electron Microscopy (SEM) confirms the successful incorporation of Betalains-N–CQDs into the CMC matrix, revealing larger pores in the composite film that facilitate better interaction with analytes such as bacteria. Crucially, the CMC-Betalains-N–CQD film demonstrates significant antibacterial activity against common foodborne pathogens like Escherichia coli, Staphylococcus aureus, and Candida albicans, as evidenced by inhibition zones and supported by molecular docking simulations showing strong binding interactions with bacterial proteins. Furthermore, the film functions as a fluorescent sensor, exhibiting distinct color changes upon contact with different microorganisms and Pb(II) heavy metals, enabling rapid, naked-eye detection. The film also acts as a pH sensor, displaying color shifts (brown in alkaline, yellow in acidic) due to the betalains, useful for monitoring food spoilage. This research presents a promising, sustainable, and multifunctional intelligent packaging solution for enhanced food safety and extended shelf life. Full article
(This article belongs to the Section Food Packaging and Preservation)

24 pages, 1825 KB  
Article
Stronger Short-Term Memory, Larger Hippocampi and Area V1 in People with High VVIQ Scores
by David F. Marks
Vision 2025, 9(3), 53; https://doi.org/10.3390/vision9030053 - 7 Jul 2025
Viewed by 1613
Abstract
Reports of individual differences in vividness of visual mental imagery (VMI) scores raise complex questions: Are Vividness of Visual Imagery Questionnaire (VVIQ) score differences actually measuring anything? What functions do these differences serve? What is their neurological foundation? A new analysis examined visual short-term memory (VSTM) and volumes of the hippocampi, primary visual cortices, and other cortical regions among vivid and non-vivid visual imagers. In a sample of 53 volunteers aged 54 to 80 with MRI scans, the performance of ten Low VVIQ scorers was compared to that of ten High VVIQ scorers. The groups included an aphantasic with a minimum VVIQ score and a hyperphantasic with a maximum VVIQ score. The study examined volumes for 12 hippocampal subfields, 11 fields implicated in visual mental imagery including area V1 and the fusiform gyrus, and 7 motor regions. In comparison to the Low VVIQ group, the High VVIQ group yielded: (i) significantly more accurate VSTM performance; and (ii) significantly larger volumes of the hippocampi and primary visual cortex. Across 47 brain regions, the average volume for the High VVIQ group exceeded that of the Low VVIQ group by 11 percent. For 47 subfields, the volumes of the hyperphantasic person exceeded those of the aphantasic person by an average of 57 percent. Females had more accurate visual short-term memory than males, and younger people were more accurate than older people. The larger visual memory capacity of females was unmatched by larger regional volume differences, which suggests that the sex difference in visual memory is caused by factors other than cortical regional size. The study confirms the existence of robust empirical associations between VMI vividness, short-term memory, regional volume of hippocampal subfields, and area V1. Full article
(This article belongs to the Special Issue Visual Mental Imagery System: How We Image the World)

28 pages, 114336 KB  
Article
Mamba-STFM: A Mamba-Based Spatiotemporal Fusion Method for Remote Sensing Images
by Qiyuan Zhang, Xiaodan Zhang, Chen Quan, Tong Zhao, Wei Huo and Yuanchen Huang
Remote Sens. 2025, 17(13), 2135; https://doi.org/10.3390/rs17132135 - 21 Jun 2025
Cited by 5 | Viewed by 2555
Abstract
Spatiotemporal fusion techniques can generate remote sensing imagery with high spatial and temporal resolutions, thereby facilitating Earth observation. However, traditional methods are constrained by linear assumptions; generative adversarial networks suffer from mode collapse; convolutional neural networks struggle to capture global context; and Transformers are hard to scale due to quadratic computational complexity and high memory consumption. To address these challenges, this study introduces an end-to-end remote sensing image spatiotemporal fusion approach based on the Mamba architecture (Mamba-spatiotemporal fusion model, Mamba-STFM), marking the first application of Mamba in this domain and presenting a novel paradigm for spatiotemporal fusion model design. Mamba-STFM consists of a feature extraction encoder and a feature fusion decoder. At the core of the encoder is the visual state space-FuseCore-AttNet block (VSS-FCAN block), which deeply integrates linear complexity cross-scan global perception with a channel attention mechanism, significantly reducing quadratic-level computation and memory overhead while improving inference throughput through parallel scanning and kernel fusion techniques. The decoder’s core is the spatiotemporal mixture-of-experts fusion module (STF-MoE block), composed of our novel spatial expert and temporal expert modules. The spatial expert adaptively adjusts channel weights to optimize spatial feature representation, enabling precise alignment and fusion of multi-resolution images, while the temporal expert incorporates a temporal squeeze-and-excitation mechanism and selective state space model (SSM) techniques to efficiently capture short-range temporal dependencies, maintain linear sequence modeling complexity, and further enhance overall spatiotemporal fusion throughput. Extensive experiments on public datasets demonstrate that Mamba-STFM outperforms existing methods in fusion quality; ablation studies validate the effectiveness of each core module; and efficiency analyses and application comparisons further confirm the model’s superior performance. Full article

16 pages, 3367 KB  
Article
Sound Localization Training and Induced Brain Plasticity: An fMRI Investigation
by Ranjita Kumari, Sukhan Lee, Pradeep Kumar Anand and Jitae Shin
Diagnostics 2025, 15(12), 1558; https://doi.org/10.3390/diagnostics15121558 - 18 Jun 2025
Viewed by 1881
Abstract
Background/Objectives: Neuroimaging techniques have been increasingly utilized to explore neuroplasticity induced by various training regimens. Magnetic resonance imaging (MRI) makes it possible to study these changes non-invasively. While visual and motor training have been widely studied, less is known about how auditory training affects brain activity. Our objective was to investigate the effects of sound localization training on brain activity and identify brain regions exhibiting significant changes in activation pre- and post-training, to understand how sound localization training induces plasticity in the brain. Methods: Six blindfolded participants each underwent 30-minute sound localization training sessions twice a week for three weeks. All participants completed functional MRI (fMRI) testing before and after the training. Results: fMRI scans revealed that sound localization training led to increased activation in several cortical areas, including the superior frontal gyrus, superior temporal gyrus, middle temporal gyrus, parietal lobule, precentral gyrus, and postcentral gyrus. These regions are associated with cognitive processes such as auditory processing, spatial working memory, planning, decision-making, error detection, and motor control. Conversely, a decrease in activation was observed in the left middle temporal gyrus, a region linked to language comprehension and semantic memory. Conclusions: These findings suggest that sound localization training enhances neural activity in areas involved in higher-order cognitive functions, spatial attention, and motor execution, while potentially reducing reliance on regions involved in basic sensory processing. This study provides evidence of training-induced neuroplasticity, highlighting the brain’s capacity to adapt through targeted auditory training intervention. Full article
(This article belongs to the Special Issue Brain MRI: Current Development and Applications)

16 pages, 3307 KB  
Article
Synaptic Plasticity and Memory Retention in ZnO–CNT Nanocomposite Optoelectronic Synaptic Devices
by Seung Hun Lee, Dabin Jeon and Sung-Nam Lee
Materials 2025, 18(10), 2293; https://doi.org/10.3390/ma18102293 - 15 May 2025
Cited by 8 | Viewed by 1240
Abstract
This study presents the fabrication and characterization of ZnO–CNT composite-based optoelectronic synaptic devices via a sol–gel process. By incorporating various concentrations of CNTs (0–2.0 wt%) into ZnO thin films, we investigated their effects on synaptic behaviors under ultraviolet (UV) stimulation. The CNT addition enhanced the electrical and optical performance by forming a p–n heterojunction with ZnO, which promoted charge separation and suppressed recombination. As a result, the 1.5 wt% CNT device exhibited the highest excitatory postsynaptic current (EPSC), improved paired-pulse facilitation, and prolonged memory retention. Learning–forgetting cycles revealed that repeated stimulation reduced the number of pulses required for relearning while extending the forgetting time, mimicking biological memory reinforcement. Energy consumption per pulse was estimated at 16.34 nJ, suggesting potential for low-power neuromorphic applications. A 3 × 3 device array was also employed for visual memory simulation, showing spatially controllable and stable memory states depending on CNT content. To support these findings, structural and optical analyses were conducted using scanning electron microscopy (SEM), UV-visible absorption spectroscopy, photoluminescence (PL) spectroscopy, and Raman spectroscopy. These findings demonstrate that the synaptic characteristics of ZnO-based devices can be finely tuned through CNT incorporation, providing a promising pathway for the development of energy-efficient and adaptive optoelectronic neuromorphic systems. Full article

16 pages, 5385 KB  
Article
Transforming 3D MRI to 2D Feature Maps Using Pre-Trained Models for Diagnosis of Attention Deficit Hyperactivity Disorder
by Elahe Hosseini, Seyyed Ali Hosseini, Stijn Servaes, Brandon Hall, Pedro Rosa-Neto, Ali-Reza Moradi, Ajay Kumar, Mir Mohsen Pedram and Sanjeev Chawla
Tomography 2025, 11(5), 56; https://doi.org/10.3390/tomography11050056 - 13 May 2025
Cited by 1 | Viewed by 2170
Abstract
Background: According to the World Health Organization (WHO), approximately 5% of children and 2.5% of adults suffer from attention deficit hyperactivity disorder (ADHD). This disorder can have significant negative consequences on people’s lives, particularly children. In recent years, methods based on artificial intelligence and neuroimaging techniques, such as MRI, have made significant progress, paving the way for the development of more reliable diagnostic tools. In this proof-of-concept study, our aim was to investigate the potential utility of neuroimaging data and clinical information in combination with a deep learning-based analytical approach, more precisely, a novel feature extraction technique, for the diagnosis of ADHD with high accuracy. Methods: Leveraging the ADHD200 dataset, which encompasses demographic information and anatomical MRI scans collected from a diverse ADHD population, our study focused on developing modern deep learning-based diagnostic models. The data preprocessing employed a pre-trained Visual Geometry Group 16 (VGG16) network to extract two-dimensional (2D) feature maps from three-dimensional (3D) anatomical MRI data to reduce computational complexity and enhance diagnostic power. The inclusion of personal attributes, such as age, gender, intelligence quotient, and handedness, strengthens the diagnostic models. Four deep-learning architectures—convolutional neural network 2D (CNN2D), CNN1D, long short-term memory (LSTM), and gated recurrent units (GRU)—were employed for analysis of the MRI data, with and without the inclusion of clinical characteristics. Results: A 10-fold cross-validation test revealed that the LSTM model, which incorporated both MRI data and personal attributes, had the best diagnostic performance among all tested models in the diagnosis of ADHD, with an accuracy of 0.86 and an area under the receiver operating characteristic (ROC) curve (AUC) score of 0.90. Conclusions: Our findings demonstrate that the proposed approach of extracting 2D features from 3D MRI images and integrating these features with clinical characteristics may be useful in the diagnosis of ADHD with high accuracy. Full article
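The feature extraction step described above can be sketched as follows, under assumed shapes and placeholder clinical encodings (not the authors' code): a frozen pre-trained VGG16 turns each axial slice of a 3D volume into a pooled 2D feature vector, an LSTM reads the resulting slice sequence, and clinical attributes are appended before the final classification layer.

```python
# Illustrative pipeline: pre-trained VGG16 slice features -> LSTM over the slice sequence -> classifier.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

backbone = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()   # frozen 2D feature extractor
for p in backbone.parameters():
    p.requires_grad = False

volume = torch.randn(16, 1, 224, 224)         # 16 axial slices of one anatomical scan (placeholder)
slices = volume.repeat(1, 3, 1, 1)            # replicate the single channel to match VGG's 3-channel input

with torch.no_grad():
    fmaps = backbone(slices)                  # (16, 512, 7, 7) per-slice convolutional feature maps
feats = fmaps.mean(dim=(2, 3)).unsqueeze(0)   # (1, 16, 512): the 3D volume as a sequence of 2D features

lstm = nn.LSTM(512, 128, batch_first=True)
_, (h_n, _) = lstm(feats)                     # final hidden state summarizes the slice sequence

clinical = torch.tensor([[9.0, 1.0, 110.0, 1.0]])                      # age, gender, IQ, handedness (placeholder)
logits = nn.Linear(128 + 4, 2)(torch.cat([h_n[-1], clinical], dim=1))  # ADHD vs. control scores
```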

28 pages, 1825 KB  
Article
Letter and Word Processing in Developmental Dyslexia: Evidence from a Two-Alternative Forced Choice Task
by Daniela Traficante, Pierluigi Zoccolotti and Chiara Valeria Marinelli
Children 2025, 12(5), 572; https://doi.org/10.3390/children12050572 - 29 Apr 2025
Viewed by 1021
Abstract
Background/Objectives: The present study aimed to investigate letter processing in children with dyslexia and typically developing readers as a function of the type of orthographic context. Methods and Results: In Experiment 1A, children performed a two-alternative forced choice task (Reicher–Wheeler paradigm) using as probes either high-frequency words, pronounceable pseudo-words, or unpronounceable non-words. The group differences in letter recognition were clearly distinguished from those present in typical word and pseudo-word reading conditions (Experiment 1B), as a global factor was present only in the latter case. In Experiment 2, the two-alternative forced choice task required the child to search for the target letter in the subsequent multi-letter string (i.e., words, pseudo-words, or non-words), thus reducing the memory load. Detecting the target letter was more difficult in a word than in a pseudo-word or non-word array, indicating that the word form’s lexical activation interfered with the target’s analysis in both groups of children. In Experiment 3, children performed the two-alternative forced choice task with symbols (Greek letters) either in the Reicher–Wheeler mode of presentation (Experiment 3A) or in the search condition (Experiment 3B). Children with dyslexia performed identically to typically developing readers in keeping with the selectivity of their orthographic difficulties. Conclusions: The present data indicate that children with dyslexia suffer from an early deficit in making perceptual operations that require the conjunction analysis of a set of letters. Still, this deficit is not due to an inability to scan the letter string. The deficit is confined to orthographic stimuli and does not extend to other types of visual targets. Full article

13 pages, 3352 KB  
Article
Dual-CycleGANs with Dynamic Guidance for Robust Underwater Image Restoration
by Yu-Yang Lin, Wan-Jen Huang and Chia-Hung Yeh
J. Mar. Sci. Eng. 2025, 13(2), 231; https://doi.org/10.3390/jmse13020231 - 25 Jan 2025
Viewed by 1403
Abstract
The field of underwater image processing has gained significant attention recently, offering great potential for enhanced exploration of underwater environments, including applications such as underwater terrain scanning and autonomous underwater vehicles. However, underwater images frequently face challenges such as light attenuation, color distortion, and noise introduced by artificial light sources. These degradations not only affect image quality but also hinder the effectiveness of related application tasks. To address these issues, this paper presents a novel deep network model for single underwater image restoration. Our model does not rely on paired training images and incorporates two cycle-consistent generative adversarial network (CycleGAN) structures, forming a dual-CycleGAN architecture. This enables the simultaneous conversion of an underwater image to its in-air (atmospheric) counterpart while learning a light field image to guide the underwater image towards its in-air version. Experimental results indicate that the proposed method provides superior (or at least comparable) image restoration performance, both in terms of quantitative measures and visual quality, when compared to existing state-of-the-art techniques. Our model significantly reduces computational complexity, resulting in a more efficient approach that maintains superior restoration capabilities, ensuring faster processing times and lower memory usage, making it highly suitable for real-world applications. Full article
(This article belongs to the Special Issue Application of Deep Learning in Underwater Image Processing)
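A minimal sketch of the cycle-consistency idea underpinning each CycleGAN branch (toy generators, not the paper's dual-branch, light-field-guided model): an underwater-to-in-air generator and its inverse are trained so that a round trip reconstructs the input, which is what removes the need for paired training images.

```python
# Illustrative cycle-consistency loss for one underwater <-> in-air CycleGAN branch (toy generators).
import torch
import torch.nn as nn

def tiny_generator():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

G_uw2air = tiny_generator()   # underwater -> in-air
G_air2uw = tiny_generator()   # in-air -> underwater
l1 = nn.L1Loss()

underwater = torch.rand(4, 3, 128, 128) * 2 - 1     # unpaired underwater batch in [-1, 1]
in_air     = torch.rand(4, 3, 128, 128) * 2 - 1     # unpaired in-air batch

# Round trips must reconstruct the originals; no paired supervision is needed.
cycle_loss = (l1(G_air2uw(G_uw2air(underwater)), underwater) +
              l1(G_uw2air(G_air2uw(in_air)), in_air))
cycle_loss.backward()         # combined with adversarial losses during actual training
```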
