Search Results (325)

Search Parameters:
Keywords = cue utilization

22 pages, 2137 KB  
Article
Recognition and Misclassification Patterns of Basic Emotional Facial Expressions: An Eye-Tracking Study in Young Healthy Adults
by Neşe Alkan
J. Eye Mov. Res. 2025, 18(5), 53; https://doi.org/10.3390/jemr18050053 - 11 Oct 2025
Abstract
Accurate recognition of basic facial emotions is well documented, yet the mechanisms of misclassification and their relation to gaze allocation remain under-reported. The present study utilized a within-subjects eye-tracking design to examine both accurate and inaccurate recognition of five basic emotions (anger, disgust, fear, happiness, and sadness) in healthy young adults. Fifty participants (twenty-four women) completed a forced-choice categorization task with 10 stimuli (female/male poser × emotion). A remote eye tracker (60 Hz) recorded fixations mapped to eyes, nose, and mouth areas of interest (AOIs). The analyses combined accuracy and decision-time statistics with heatmap comparisons of misclassified versus accurate trials within the same image. Overall accuracy was 87.8% (439/500). Misclassification patterns depended on the target emotion, but not on participant gender. The male fear expression was most often misclassified (typically as disgust), and the female sadness expression was frequently labeled as fear or disgust; disgust was the most incorrectly attributed response. For accurate trials, decision time showed main effects of emotion (p < 0.001) and participant gender (p = 0.033): happiness was categorized fastest and anger slowest, and women responded faster overall, with particularly fast response times for sadness. The AOI results revealed strong main effects and an AOI × emotion interaction (p < 0.001): eyes received the most fixations, but fear drew relatively more mouth sampling and sadness more nose sampling. Crucially, heatmaps showed an upper-face bias (eye AOI) in inaccurate trials, whereas accurate trials retained eye sampling and added nose and mouth AOI coverage, which aligned with diagnostic cues. These findings indicate that the scanpath strategy, in addition to information availability, underpins success and failure in basic-emotion recognition, with implications for theory, targeted training, and affective technologies.
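The AOI analysis above — assigning each fixation to the eyes, nose, or mouth region and tallying counts per trial — can be sketched as follows. The rectangle coordinates and fixation points are hypothetical placeholders, not the study's actual stimulus geometry:

```python
# Sketch: mapping fixation coordinates to facial areas of interest (AOIs).
# The AOI rectangles are illustrative, not the study's screen coordinates.

AOIS = {
    "eyes":  (200, 120, 440, 200),   # (x_min, y_min, x_max, y_max)
    "nose":  (280, 200, 360, 290),
    "mouth": (250, 290, 390, 360),
}

def classify_fixation(x, y, aois=AOIS):
    """Return the AOI containing the fixation, or None if outside all AOIs."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def fixation_counts(fixations, aois=AOIS):
    """Tally fixations per AOI for one trial."""
    counts = {name: 0 for name in aois}
    for x, y in fixations:
        aoi = classify_fixation(x, y, aois)
        if aoi is not None:
            counts[aoi] += 1
    return counts

trial = [(310, 150), (320, 160), (300, 250), (330, 310), (500, 50)]
print(fixation_counts(trial))  # {'eyes': 2, 'nose': 1, 'mouth': 1}
```

With 60 Hz sampling, the same tally over fixation durations rather than counts would give dwell time per AOI.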

19 pages, 4472 KB  
Article
Electrospun Polycaprolactone/Collagen Scaffolds Enhance Manipulability and Influence the Composition of Self-Assembled Extracellular Matrix
by Saeed Farzamfar, Stéphane Chabaud, Julie Fradette, Yannick Rioux and Stéphane Bolduc
Bioengineering 2025, 12(10), 1077; https://doi.org/10.3390/bioengineering12101077 - 3 Oct 2025
Abstract
Cell-mediated extracellular matrix (ECM) self-assembly provides a biologically relevant approach for developing near-physiological tissue-engineered constructs by utilizing stromal cells to secrete and assemble ECM components in the presence of ascorbic acid. Despite its unique advantages, this method often results in scaffolds with limited mechanical properties, depending on the cell type. This research aimed to enhance the mechanical properties of these constructs by culturing cells derived from various sources, including skin, bladder, urethra, vagina, and adipose tissue, on electrospun scaffolds composed of polycaprolactone and collagen (PCLCOL). The hybrid scaffolds were evaluated using various in vitro assays to assess their structural and functional properties. Results showed that different stromal cells deposited ECM on PCLCOL with a composition distinct from that of the ECM self-assembled on tissue culture plates (TCP). Additionally, cells cultured on PCLCOL exhibited a different growth factor secretion profile compared to those on TCP. Mechanical testing demonstrated that the hybrid scaffolds exhibited improved mechanical properties and superior manipulability. These findings suggest that PCLCOL could be a promising platform for developing biomimetic scaffolds that combine enhanced mechanical strength with integrated biological cues for tissue repair.

19 pages, 4672 KB  
Article
Monocular Visual/IMU/GNSS Integration System Using Deep Learning-Based Optical Flow for Intelligent Vehicle Localization
by Jeongmin Kang
Sensors 2025, 25(19), 6050; https://doi.org/10.3390/s25196050 - 1 Oct 2025
Abstract
Accurate and reliable vehicle localization is essential for autonomous driving in complex outdoor environments. Traditional feature-based visual–inertial odometry (VIO) suffers from sparse features and sensitivity to illumination, limiting robustness in outdoor scenes. Deep learning-based optical flow offers dense and illumination-robust motion cues. However, existing methods rely on simple bidirectional consistency checks that yield unreliable flow in low-texture or ambiguous regions. Global navigation satellite system (GNSS) measurements can complement VIO, but often degrade in urban areas due to multipath interference. This paper proposes a multi-sensor fusion system that integrates monocular VIO with GNSS measurements to achieve robust and drift-free localization. The proposed approach employs a hybrid VIO framework that utilizes a deep learning-based optical flow network, with an enhanced consistency constraint that incorporates local structure and motion coherence to extract robust flow measurements. The extracted optical flow serves as visual measurements, which are then fused with inertial measurements to improve localization accuracy. GNSS updates further enhance global localization stability by mitigating long-term drift. The proposed method is evaluated on the publicly available KITTI dataset. Extensive experiments demonstrate its superior localization performance compared to previous similar methods. The results show that the filter-based multi-sensor fusion framework with optical flow refined by the enhanced consistency constraint ensures accurate and reliable localization in large-scale outdoor environments. Full article
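The bidirectional (forward-backward) consistency check that the paper's enhanced constraint builds on can be sketched as follows. The tiny hand-made flow fields and the 1-pixel threshold are illustrative assumptions, not the paper's implementation, which works on dense flow from a network:

```python
# Sketch of a forward-backward optical-flow consistency check. Flow fields
# here are tiny dictionaries {(x, y): (dx, dy)}; a real system would use
# dense arrays predicted by an optical-flow network.
import math

def fb_consistency(forward, backward, thresh=1.0):
    """Keep pixels whose forward flow is (approximately) undone by the
    backward flow at the displaced position."""
    reliable = {}
    for (x, y), (dx, dy) in forward.items():
        # Position reached in the second frame, rounded to the pixel grid.
        xt, yt = round(x + dx), round(y + dy)
        if (xt, yt) not in backward:
            continue
        bx, by = backward[(xt, yt)]
        # Round-trip error: forward then backward should return to (x, y).
        err = math.hypot(dx + bx, dy + by)
        if err <= thresh:
            reliable[(x, y)] = (dx, dy)
    return reliable

forward  = {(0, 0): (1.0, 0.0), (1, 0): (1.0, 0.0)}
backward = {(1, 0): (-1.0, 0.0), (2, 0): (0.5, 0.0)}  # (2, 0) is inconsistent
print(fb_consistency(forward, backward))  # {(0, 0): (1.0, 0.0)}
```

The paper's enhanced constraint additionally weighs local structure and motion coherence, which this per-pixel check ignores.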
(This article belongs to the Special Issue AI-Driving for Autonomous Vehicles)

38 pages, 9769 KB  
Review
Label-Free Cancer Detection Methods Based on Biophysical Cell Phenotypes
by Isabel Calejo, Ana Catarina Azevedo, Raquel L. Monteiro, Francisco Cruz and Raphaël F. Canadas
Bioengineering 2025, 12(10), 1045; https://doi.org/10.3390/bioengineering12101045 - 28 Sep 2025
Abstract
Progress in clinical diagnosis increasingly relies on innovative technologies and advanced disease biomarker detection methods. While cell labeling remains a well-established technique, label-free approaches offer significant advantages, including reduced workload, minimal sample damage, cost-effectiveness, and simplified chip integration. These approaches focus on the morpho-biophysical properties of cells, eliminating the need for labeling and thus reducing false results while enhancing data reliability and reproducibility. Current label-free methods span conventional and advanced technologies, including phase-contrast microscopy, holographic microscopy, varied cytometries, microfluidics, dynamic light scattering, atomic force microscopy, and electrical impedance spectroscopy. Their integration with artificial intelligence further enhances their utility, enabling rapid, non-invasive cell identification, dynamic cellular interaction monitoring, and electro-mechanical and morphological cue analysis, making them particularly valuable for cancer diagnostics, monitoring, and prognosis. This review compiles recent label-free cancer cell detection developments within clinical and biotechnological laboratory contexts, emphasizing biophysical alterations pertinent to liquid biopsy applications. It highlights interdisciplinary innovations that allow the characterization and potential identification of cancer cells without labeling. Furthermore, a comparative analysis addresses throughput, resolution, and detection capabilities, thereby guiding their effective deployment in biomedical research and clinical oncology settings. Full article
(This article belongs to the Special Issue Label-Free Cancer Detection)

18 pages, 24021 KB  
Article
Depth-Guided Dual-Domain Progressive Low-Light Enhancement for Light Field Image
by Xiaoxue Wu and Tao Yan
Electronics 2025, 14(19), 3784; https://doi.org/10.3390/electronics14193784 - 24 Sep 2025
Abstract
In low-light environments, light field (LF) images are often affected by various degradation factors, which impair the performance of subsequent visual tasks such as depth estimation. Although numerous light-field low-light enhancement methods have been proposed to address these challenges, they generally overlook the importance of frequency-domain information in modeling light field features, thereby limiting their noise suppression capabilities. Moreover, these methods mainly rely on pixel- or semantic-level cues without explicitly incorporating disparity information for structural modeling, thereby overlooking the stereoscopic spatial structure of light field images and limiting enhancement performance across different depth levels. To address these issues, we propose a light field low-light enhancement method named DDPNet. The method integrates a depth-guided mechanism to jointly restore light field images in both the spatial and frequency domains, employing a multi-stage progressive strategy to achieve synergistic improvements in illumination and depth. Specifically, we introduce a Dual-Domain Feature Extraction (DDFE) module, which incorporates spatial-frequency analysis to efficiently extract both global and local light field features. In addition, we propose a Depth-Aware Enhancement (DAE) module, which utilizes depth maps to guide the enhancement process, effectively restoring edge structures and luminance information. Extensive experimental results demonstrate that DDPNet significantly outperforms existing methods.
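As a minimal illustration of why frequency-domain processing aids noise suppression (the intuition behind spatial-frequency analysis in the DDFE module), the sketch below low-pass filters a 1-D signal with a naive DFT. It is a toy under stated assumptions: real light-field features are four-dimensional and DDPNet's filtering is learned, not a fixed cutoff:

```python
# Toy 1-D DFT low-pass filter: high-frequency noise lives in spectral bins
# that a spatial-domain filter cannot isolate as cleanly.
import cmath
import math

def dft(signal):
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def lowpass(signal, keep):
    """Zero out all but the `keep` lowest frequencies (and their mirrors)."""
    spec = dft(signal)
    n = len(spec)
    filtered = [c if (k <= keep or k >= n - keep) else 0
                for k, c in enumerate(spec)]
    return idft(filtered)

# A smooth ramp corrupted by alternating (Nyquist-frequency) noise.
noisy = [t + (1 if t % 2 else -1) * 0.5 for t in range(8)]
smooth = lowpass(noisy, keep=2)
```

A purely alternating signal (all energy at the Nyquist bin) is removed entirely by this filter, while low-frequency content passes through, which is the separation the frequency domain buys.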
(This article belongs to the Section Artificial Intelligence)

20 pages, 3989 KB  
Article
A2DSC-Net: A Network Based on Multi-Branch Dilated and Dynamic Snake Convolutions for Water Body Extraction
by Shuai Zhang, Chao Zhang, Qichao Zhao, Junjie Ma and Pengpeng Zhang
Water 2025, 17(18), 2760; https://doi.org/10.3390/w17182760 - 18 Sep 2025
Abstract
The accurate and efficient acquisition of the spatiotemporal distribution of surface water is of vital importance for water resource utilization, flood monitoring, and environmental protection. However, deep learning models often suffer from two major limitations when applied to high-resolution remote sensing imagery: the loss of small water body features due to encoder scale differences, and reduced boundary accuracy for narrow water bodies in complex backgrounds. To address these challenges, we introduce the A2DSC-Net, which offers two key innovations. First, a multi-branch dilated convolution (MBDC) module is designed to capture contextual information across multiple spatial scales, thereby enhancing the recognition of small water bodies. Second, a Dynamic Snake Convolution module is introduced to adaptively extract local features and integrate global spatial cues, significantly improving the delineation accuracy of narrow water bodies under complex background conditions. Ablation and comparative experiments were conducted under identical settings using the LandCover.ai and Gaofen Image Dataset (GID). The results show that A2DSC-Net achieves an average precision of 96.34%, average recall of 96.19%, average IoU of 92.8%, and average F1-score of 96.26%, outperforming classical segmentation models such as U-Net, DeepLabv3+, DANet, and PSPNet. These findings demonstrate that A2DSC-Net provides an effective and reliable solution for water body extraction from high-resolution remote sensing imagery. Full article
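The dilated convolutions underlying the MBDC module enlarge the receptive field without adding parameters: with dilation d and kernel size k, one output sample sees d·(k−1)+1 inputs. A minimal 1-D sketch (A2DSC-Net itself uses learned 2-D kernels; the signal and kernel here are made up):

```python
# 1-D dilated convolution (cross-correlation form), 'valid' padding.
# With dilation d, kernel taps are spaced d samples apart.

def dilated_conv1d(x, kernel, dilation=1):
    """Apply a dilated 1-D kernel; output length shrinks by d*(k-1)."""
    span = dilation * (len(kernel) - 1)  # receptive field minus one
    return [sum(kernel[j] * x[i + j * dilation] for j in range(len(kernel)))
            for i in range(len(x) - span)]

x = [0, 0, 1, 0, 0, 0, 1, 0, 0]
edge = [1, -1]  # simple difference kernel
print(dilated_conv1d(x, edge, dilation=1))  # [0, -1, 1, 0, 0, -1, 1, 0]
print(dilated_conv1d(x, edge, dilation=3))  # [0, 0, 1, -1, 0, 0]
```

A multi-branch module in the MBDC spirit would run several dilations (e.g. 1, 3, 5) over the same input and merge the outputs, capturing context at several spatial scales with one kernel's worth of weights per branch.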

27 pages, 8657 KB  
Article
Semantic-Enhanced and Temporally Refined Bidirectional BEV Fusion for LiDAR–Camera 3D Object Detection
by Xiangjun Qu, Kai Qin, Yaping Li, Shuaizhang Zhang, Yuchen Li, Sizhe Shen and Yun Gao
J. Imaging 2025, 11(9), 319; https://doi.org/10.3390/jimaging11090319 - 18 Sep 2025
Abstract
In domains such as autonomous driving, 3D object detection is a key technology for environmental perception. By integrating multimodal information from sensors such as LiDAR and cameras, the detection accuracy can be significantly improved. However, the current multimodal fusion perception framework still suffers from two problems: first, due to the inherent physical limitations of LiDAR detection, the number of point clouds of distant objects is sparse, resulting in small target objects being easily overwhelmed by the background; second, the cross-modal information interaction is insufficient, and the complementarity and correlation between the LiDAR point cloud and the camera image are not fully exploited and utilized. Therefore, we propose a new multimodal detection strategy, Semantic-Enhanced and Temporally Refined Bidirectional BEV Fusion (SETR-Fusion). This method integrates three key components: the Discriminative Semantic Saliency Activation (DSSA) module, the Temporally Consistent Semantic Point Fusion (TCSP) module, and the Bilateral Cross-Attention Fusion (BCAF) module. The DSSA module fully utilizes image semantic features to capture more discriminative foreground and background cues; the TCSP module generates semantic LiDAR points and, after noise filtering, produces a more accurate semantic LiDAR point cloud; and the BCAF module’s cross-attention to camera and LiDAR BEV features in both directions enables strong interaction between the two types of modal information. SETR-Fusion achieves 71.2% mAP and 73.3% NDS values on the nuScenes test set, outperforming several state-of-the-art methods. Full article
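Cross-attention of the kind the BCAF module applies in both directions can be sketched as plain scaled dot-product attention, with one modality's BEV features as queries and the other's as keys and values. The toy 2-D vectors and the absence of learned projections are simplifying assumptions, not the paper's architecture:

```python
# Scaled dot-product cross-attention over toy feature vectors.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [v / s for v in exps]

def cross_attention(queries, keys, values):
    """Each query attends over all key/value pairs (no learned projections)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

camera_bev = [[1.0, 0.0], [0.0, 1.0]]
lidar_bev  = [[1.0, 0.0], [0.0, 1.0]]
# Bidirectional: camera features attend to LiDAR, and LiDAR to camera.
cam_enriched   = cross_attention(camera_bev, lidar_bev, lidar_bev)
lidar_enriched = cross_attention(lidar_bev, camera_bev, camera_bev)
```

Each enriched feature is a convex combination of the other modality's features, weighted by similarity — the "strong interaction" the BCAF module exploits in both directions.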
(This article belongs to the Section Computer Vision and Pattern Recognition)

17 pages, 1659 KB  
Article
Enhancing Multi-Region Target Search Efficiency Through Integrated Peripheral Vision and Head-Mounted Display Systems
by Gang Wang, Hung-Hsiang Wang and Zhihuang Huang
Information 2025, 16(9), 800; https://doi.org/10.3390/info16090800 - 15 Sep 2025
Abstract
Effectively managing visual search tasks across multiple spatial regions during daily activities such as driving, cycling, and navigating complex environments often overwhelms visual processing capacity, increasing the risk of errors and missed critical information. This study investigates an integrated approach that combines an Ambient Display system utilizing peripheral vision cues with traditional Head-Mounted Displays (HMDs) to enhance spatial search efficiency while minimizing cognitive burden. We systematically evaluated this integrated HMD-Ambient Display system against standalone HMD configurations through comprehensive user studies involving target search scenarios across multiple spatial regions. Our findings demonstrate that the combined approach significantly improves user performance by establishing a complementary visual system where peripheral stimuli effectively capture initial attention while central HMD cues provide precise directional guidance. The integrated system showed substantial improvements in reaction time for rear visual region searches and higher user preference ratings compared with HMD-only conditions. This integrated approach represents an innovative solution that efficiently utilizes dual visual channels, reducing cognitive load while enhancing search efficiency across distributed spatial areas. Our contributions provide valuable design guidelines for developing assistive technologies that improve performance in multi-region visual search tasks by strategically leveraging the complementary strengths of peripheral and central visual processing mechanisms. Full article

29 pages, 1996 KB  
Review
Advances in Genetics and Breeding of Grain Shape in Rice
by Qian Chen, Yuheng Zhu, Banpu Ruan and Yanchun Yu
Agriculture 2025, 15(18), 1944; https://doi.org/10.3390/agriculture15181944 - 14 Sep 2025
Abstract
Grain shape is a critical determinant of rice yield, quality, and market value. Recent advances in molecular biology, genomics, and systems biology have revealed a complex regulatory network governing grain development, integrating genetic loci, plant hormone signaling, transcriptional regulation, protein ubiquitination, epigenetic modifications, and environmental cues. This review summarizes key genetic components such as QTLs, transcription factors, and hormone pathways—including auxin, cytokinin, gibberellin, brassinosteroids, and abscisic acid—that influence seed size through regulation of cell division, expansion, and nutrient allocation. The roles of the ubiquitin–proteasome system, miRNAs, lncRNAs, and chromatin remodeling are also discussed, highlighting their importance in fine-tuning grain development. Furthermore, we examine environmental factors that impact grain filling and size, including temperature, light, and nutrient availability. We also explore cutting-edge breeding strategies such as gene editing, functional marker development, and wild germplasm utilization, along with the integration of multi-omics platforms like RiceAtlas to enable intelligent and ecological zone-specific precision breeding. Finally, challenges such as pleiotropy and non-additive gene interactions are discussed, and future directions are proposed to enhance grain shape improvement for yield stability and food security. Full article
(This article belongs to the Special Issue Physiological and Molecular Mechanisms of Stress Tolerance in Rice)

18 pages, 381 KB  
Article
Capturing the Experience: How Digital Media Affects Memory Retention in Museum Education
by Serkan Say, Serdar Akbulut and İsmail Yavuz Öztürk
Behav. Sci. 2025, 15(9), 1247; https://doi.org/10.3390/bs15091247 - 12 Sep 2025
Abstract
This study investigates the effects of digital media usage, specifically photo-taking and video recording, on memory retention in the context of museum education. Utilizing a quasi-experimental design, this research involved three groups, each exposed to different conditions: observation without media use, photo-taking, and video recording. A total of 120 university students who participated in the study were divided randomly into groups balanced by working memory capacity. Immediate and delayed recall tests were conducted to assess short-term memory and long-term retention. The results reveal that participants who merely observed the objects exhibited considerably better memory performance compared to those who used digital media. This result is consistent with the cognitive offloading hypothesis and suggests that digital devices weaken memory encoding processes by reducing individuals’ internal cognitive resources. The video-recording group exhibited the lowest performance due to the need for sustained attention and increased cognitive load. The photographing group, despite performing lower in the short-term memory test, showed less decline in the long-term memory test than the other groups. This suggests that photographs may serve as a cue in the retrieval process. The research findings reveal that digital media use can have both supportive and disruptive effects in educational environments. In this context, it is important for educators and museum designers to develop strategies that will consciously direct the use of digital tools. Full article
(This article belongs to the Section Educational Psychology)

26 pages, 24511 KB  
Article
VTLLM: A Vessel Trajectory Prediction Approach Based on Large Language Models
by Ye Liu, Wei Xiong, Nanyu Chen and Fei Yang
J. Mar. Sci. Eng. 2025, 13(9), 1758; https://doi.org/10.3390/jmse13091758 - 11 Sep 2025
Abstract
In light of the rapid expansion of maritime trade, the maritime transportation industry has experienced burgeoning growth and complexity. The deployment of trajectory prediction technology is paramount in safeguarding navigational safety. Due to limitations in design complexity and the high costs of data fusion, current deep learning methods struggle to effectively integrate high-level semantic cues, such as vessel type, geographical identifiers, and navigational states, within predictive frameworks. Yet, these data contain abundant information regarding vessel categories or operational scenarios. Inspired by the robust semantic comprehension exhibited by large language models (LLMs) in natural language processing, this study introduces a trajectory prediction method leveraging LLMs. Initially, Automatic Identification System (AIS) data undergoes processing to eliminate incomplete entries, thereby selecting trajectories of high quality. Distinct from prior research that concentrated solely on vessel position and velocity, this study integrates ship identity, spatiotemporal trajectory, and navigational information through prompt engineering, empowering the LLM to extract multidimensional semantic features of trajectories from comprehensive natural language narratives. Thus, the LLM can amalgamate multi-source semantics with zero marginal cost, significantly enhancing its understanding of complex maritime environments. Subsequently, a supervised fine-tuning approach rooted in Low-Rank Adaptation (LoRA) is applied to train the chosen LLMs. This enables rapid adaptation of the LLM to specific maritime areas or vessel classifications by modifying only a limited subset of parameters, thereby appreciably diminishing both data requirements and computational costs. Finally, representative metrics are utilized to evaluate the efficacy of the model training and to benchmark its performance against prevailing advanced models for ship trajectory prediction. 
The results indicate that the model performs notably well in short-term predictions (for instance, at a prediction step of 1 h, the average distance errors for VTLLM and TrAISformer are 5.26 nmi and 6.12 nmi, respectively, an improvement of approximately 14.05%), having identified patterns and features, such as linear movements and turns, from the training data.
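The headline comparison can be reproduced with a short sketch of the metric: mean great-circle distance in nautical miles, plus the relative improvement of one model over another. Only the 5.26/6.12 nmi figures come from the abstract; the haversine helper and any coordinates are illustrative:

```python
# Mean great-circle distance error (nmi) and relative improvement.
import math

EARTH_RADIUS_NMI = 3440.065  # mean Earth radius in nautical miles

def haversine_nmi(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in nmi."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_NMI * math.asin(math.sqrt(a))

def mean_distance_error(pred, truth):
    """Average pointwise distance between predicted and true positions."""
    return sum(haversine_nmi(*p, *t) for p, t in zip(pred, truth)) / len(pred)

def relative_improvement(err_new, err_baseline):
    """Percentage error reduction relative to the baseline."""
    return (err_baseline - err_new) / err_baseline * 100

# The abstract's 1 h figures: 5.26 nmi (VTLLM) vs. 6.12 nmi (TrAISformer).
print(round(relative_improvement(5.26, 6.12), 2))  # 14.05
```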
(This article belongs to the Section Ocean Engineering)

28 pages, 15259 KB  
Article
1D-CNN-Based Performance Prediction in IRS-Enabled IoT Networks for 6G Autonomous Vehicle Applications
by Radwa Ahmed Osman
Future Internet 2025, 17(9), 405; https://doi.org/10.3390/fi17090405 - 5 Sep 2025
Abstract
To enhance wireless communication performance while saving energy, the integration of Intelligent Reflecting Surfaces (IRS) into autonomous vehicle (AV) communication networks is considered a powerful technique. This paper proposes a novel IRS-assisted vehicular communication model that combines Lagrange optimization and Gradient-Based Phase Optimization to determine the optimal transmission power, optimal interference transmission power, and IRS phase shifts. Additionally, the proposed model helps increase the Signal-to-Interference-plus-Noise Ratio (SINR) by utilizing the IRS, which maximizes energy efficiency and the achievable data rate under a variety of environmental conditions while guaranteeing that resource limits are satisfied. To represent dense vehicular environments, the system model incorporates practical constraints such as IRS reflection efficiency and interference from multiple sources, namely Device-to-Device (D2D), Vehicle-to-Vehicle (V2V), Vehicle-to-Base Station (V2B), and Cellular User Equipment (CUE) links. A Lagrangian optimization approach determines the required transmission interference power and the best IRS phase designs to enhance system performance. A one-dimensional convolutional neural network is then trained on the optimized data produced by this framework; this deep learning model learns to predict the optimal IRS settings quickly, allowing real-time adaptation in dynamic wireless environments. Simulation results show that the combined optimization and prediction strategy considerably enhances system reliability and energy efficiency over baseline techniques. This study lays a solid foundation for implementing IRS-assisted AV networks in real-world settings, facilitating the development of next-generation vehicular communication systems that are both performance-driven and energy-efficient.
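The two quantities being traded off — SINR and energy efficiency — can be sketched with scalar channel gains. All numeric values are hypothetical, and the paper's per-element IRS phase-shift optimization is collapsed here into a single aggregate reflected gain:

```python
# Toy SINR and energy-efficiency calculation for an IRS-assisted link.
import math

def sinr(p_tx, g_direct, g_irs, interference, noise):
    """Received SINR with direct plus IRS-reflected channel gains (linear)."""
    signal = p_tx * (g_direct + g_irs)
    return signal / (interference + noise)

def energy_efficiency(p_tx, p_circuit, bandwidth_hz, snr_linear):
    """Shannon achievable rate per unit of consumed power, in bit/J."""
    rate = bandwidth_hz * math.log2(1 + snr_linear)
    return rate / (p_tx + p_circuit)

# Illustrative numbers: the IRS-reflected path adds 4e-6 to the channel gain.
s_with_irs = sinr(p_tx=0.5, g_direct=1e-6, g_irs=4e-6,
                  interference=1e-9, noise=1e-9)
s_no_irs = sinr(p_tx=0.5, g_direct=1e-6, g_irs=0.0,
                interference=1e-9, noise=1e-9)
ee = energy_efficiency(p_tx=0.5, p_circuit=0.1,
                       bandwidth_hz=1e6, snr_linear=s_with_irs)
```

In this toy, the reflected path multiplies the SINR, which in turn raises the rate term of the energy-efficiency ratio — the mechanism the paper's joint optimization exploits.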

16 pages, 1010 KB  
Article
Productivity and Carbon Utilization of Three Green Microalgae Strains with High Biotechnological Potential Cultivated in Flat-Panel Photobioreactors
by David A. Gabrielyan, Maria A. Sinetova, Grigoriy A. Savinykh, Elena V. Zadneprovskaya, Maria A. Goncharova, Alexandra G. Markelova, Alexander K. Gabrielian, Boris V. Gabel and Nikolay V. Lobus
Phycology 2025, 5(3), 43; https://doi.org/10.3390/phycology5030043 - 2 Sep 2025
Abstract
Microalgae biotechnology is increasingly applied across diverse fields, from food and medicine to energy and environmental protection, with strain selection being crucial for both target product accumulation and scalability potential. In this study, we assess, for the first time, the scalability of two promising new green microalgae strains, Neochlorella semenenkoi IPPAS C-1210 and Desmodesmus armatus ARC-06, in 5-L flat-panel photobioreactors. The growth characteristics of each culture, along with their biochemical composition and CO2 utilization efficiency (CUE), were examined and compared to the well-studied model strain Chlorella sorokiniana IPPAS C-1. While C-1 achieved the highest biomass concentration (7.1 ± 0.4 g DW L−1 by day 8) and demonstrated superior specific productivity (1.5 ± 0.1 g DW L−1 d−1) and CUE (average 25.4%, peaking at 34% on day 3), ARC-06 accumulated the highest starch content (51% of DW), twice that of C-1. Strain C-1210 showed intermediate performance, reaching 6.8 ± 0.8 g DW L−1 biomass with a CUE of 22.7%, whereas ARC-06 had the lowest CUE (12.8%). These results, combined with proposed cultivation optimization strategies, provide a foundation for scaling up N. semenenkoi and D. armatus production in industrial flat-panel PBR systems.
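A CUE calculation of the kind reported above can be sketched as carbon recovered in biomass divided by carbon supplied as CO2. The 0.5 g-C per g-DW carbon fraction (a common approximation for green microalgae) and the 50 g CO2 supply figure are assumptions for illustration, not values from the paper:

```python
# Sketch of a CO2 utilization efficiency (CUE) calculation.

M_CO2, M_C = 44.01, 12.011  # molar masses, g/mol

def cue_percent(biomass_dw_g, co2_supplied_g, carbon_fraction=0.5):
    """CUE = carbon fixed in biomass / carbon supplied as CO2, in percent.

    carbon_fraction is the assumed g-C per g of dry biomass."""
    carbon_fixed = biomass_dw_g * carbon_fraction
    carbon_supplied = co2_supplied_g * (M_C / M_CO2)
    return 100 * carbon_fixed / carbon_supplied

# e.g. 7.1 g DW per litre produced from an assumed 50 g CO2 sparged per litre:
print(round(cue_percent(7.1, 50.0), 1))
```

With these assumed inputs the result lands in the same range as the reported averages, but the actual CUE depends on the measured CO2 feed and the strain's true carbon content.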
(This article belongs to the Special Issue Development of Algal Biotechnology)

46 pages, 4712 KB  
Review
Biofilms Exposed: Innovative Imaging and Therapeutic Platforms for Persistent Infections
by Manasi Haval, Chandrashekhar Unakal, Shridhar C. Ghagane, Bijay Raj Pandit, Esther Daniel, Parbatee Siewdass, Kingsley Ekimeri, Vijayanandh Rajamanickam, Angel Justiz-Vaillant, Kathy-Ann A. Lootawan, Fabio Muniz De Oliveira, Nivedita Bashetti, Tatheer Alam Naqvi, Arun Shettar and Pramod Bhasme
Antibiotics 2025, 14(9), 865; https://doi.org/10.3390/antibiotics14090865 - 28 Aug 2025
Viewed by 2888
Abstract
Biofilms constitute a significant challenge in the therapy of infectious diseases, offering remarkable resistance to both pharmacological treatments and immunological elimination. This resilience is orchestrated through the regulation of extracellular polymeric molecules, metabolic dormancy, and quorum sensing, enabling biofilms to persist in both clinical and industrial environments. The resulting resistance exacerbates chronic infections and contributes to mounting economic burdens. This review examines the molecular and structural complexities that drive biofilm persistence and critically outlines the limitations of conventional diagnostic and therapeutic approaches. We emphasize advanced technologies such as super-resolution microscopy, microfluidics, and AI-driven modeling that are reshaping our understanding of biofilm dynamics and heterogeneity. Further, we highlight recent progress in biofilm-targeted therapies, including CRISPR-Cas-modified bacteriophages, quorum-sensing antagonists, enzyme-functionalized nanocarriers, and intelligent drug-delivery systems responsive to biofilm-specific cues. We also explore the utility of in vivo and ex vivo models that replicate clinical biofilm complexity and promote translational applicability. Finally, we discuss emerging interventions grounded in synthetic biology, such as engineered probiotic gene circuits and self-regulating microbial consortia, which offer innovative alternatives to conventional antimicrobials. Collectively, these interdisciplinary strategies mark a paradigm shift from reactive antibiotic therapy to precision-guided biofilm management. By integrating cutting-edge technologies with systems biology principles, this review proposes a comprehensive framework for disrupting biofilm architecture and redefining infection treatment in the post-antibiotic era. Full article
26 pages, 7962 KB  
Article
IntegraPSG: Integrating LLM Guidance with Multimodal Feature Fusion for Single-Stage Panoptic Scene Graph Generation
by Yishuang Zhao, Qiang Zhang, Xueying Sun and Guanchen Liu
Electronics 2025, 14(17), 3428; https://doi.org/10.3390/electronics14173428 - 28 Aug 2025
Viewed by 621
Abstract
Panoptic scene graph generation (PSG) aims to simultaneously segment both foreground objects and background regions while predicting object relations for fine-grained scene modeling. Despite significant progress in panoptic scene understanding, current PSG methods face challenging problems: relation prediction often relies only on visual representations and is hindered by imbalanced relation category distributions. Accordingly, we propose IntegraPSG, a single-stage framework that integrates large language model (LLM) guidance with multimodal feature fusion. IntegraPSG introduces a multimodal sparse relation prediction network that efficiently integrates visual, linguistic, and depth cues to identify the subject–object pairs most likely to form relations, filtering dense candidates into sparse, effective pairs. To alleviate the long-tailed distribution of relations, we design a language-guided multimodal relation decoder in which an LLM generates language descriptions for relation triplets that are cross-modally attended with vision pair features. This design enables more accurate relation predictions for sparse subject–object pairs and improves discriminative capability for rare relations. Experimental results show that IntegraPSG achieves steady and strong performance on the PSG dataset, with R@100, mR@100, and the mean score reaching 38.7%, 28.6%, and 30.0%, respectively, supporting the validity of the proposed method. Full article
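As a minimal sketch of the cross-modal attention step the abstract describes (names, dimensions, and toy vectors are illustrative assumptions, not the paper's implementation), each subject–object pair feature can attend over the text embeddings of LLM-generated relation descriptions via scaled dot-product attention:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(pair_feats, text_keys, text_values):
    """Each pair feature (query) attends over text embeddings (keys/values)."""
    d = len(text_keys[0])
    out = []
    for q in pair_feats:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in text_keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, text_values))
                    for j in range(len(text_values[0]))])
    return out
```

A pair feature aligned with one relation description receives almost all of its output from that description's value vector, which is how language guidance can sharpen predictions for rare relations.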