Search Results (221)

Search Parameters:
Keywords = ciders

25 pages, 11205 KB  
Article
Remote Sensing Image Captioning via Self-Supervised DINOv3 and Transformer Fusion
by Maryam Mehmood, Ahsan Shahzad, Farhan Hussain, Lismer Andres Caceres-Najarro and Muhammad Usman
Remote Sens. 2026, 18(6), 846; https://doi.org/10.3390/rs18060846 - 10 Mar 2026
Viewed by 402
Abstract
Effective interpretation of coherent and usable information from aerial images (e.g., satellite imagery or high-altitude drone photography) can greatly reduce human effort in many situations, both natural (e.g., earthquakes, forest fires, tsunamis) and man-made (e.g., highway pile-ups, traffic congestion), particularly in disaster management. This research proposes a novel encoder–decoder framework for captioning of remote sensing images that integrates self-supervised DINOv3 visual features with a hybrid Transformer–LSTM decoder. Unlike existing approaches that rely on supervised CNN-based encoders (e.g., ResNet, VGG), the proposed method leverages DINOv3’s self-supervised learning capabilities to extract dense, semantically rich features from aerial images without requiring domain-specific labeled pretraining. The proposed hybrid decoder combines Transformer layers for global context modeling with LSTM layers for sequential caption generation, producing coherent and context-aware descriptions. Feature extraction is performed using the DINOv3 model, which employs the gram-anchoring technique to stabilize dense feature maps. Captions are generated through a hybrid of Transformer with Long Short-Term Memory (LSTM) layers, which adds contextual meaning to captions through sequential hidden layer modeling with gated memory. The model is first evaluated on two traditional remote sensing image captioning datasets: RSICD and UCM-Captions. Multiple evaluation metrics like Bilingual Evaluation Understudy (BLEU), Consensus-based Image Description Evaluation (CIDEr), Recall-Oriented Understudy for Gisting Evaluation (ROUGE-L), and Metric for Evaluation of Translation with Explicit Ordering (METEOR), are used to quantify the performance and robustness of the proposed DINOv3 hybrid model. The proposed model outperforms conventional Convolutional Neural Network (CNN) and Vision Transformers (ViT)-based models by approximately 9–12% across most evaluation metrics. Attention heatmaps are also employed to qualitatively validate the proposed model when identifying and describing key spatial elements. In addition, the proposed model is evaluated on advanced remote sensing datasets, including RSITMD, DisasterM3, and GeoChat. The results demonstrate that self-supervised vision transformers are robust encoders for multi-modal understanding in remote sensing image analysis and captioning. Full article
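For readers unfamiliar with the hybrid decoder idea described in this abstract, the following is a minimal, illustrative PyTorch sketch of the general pattern: Transformer layers model global context over precomputed self-supervised patch features, and an LSTM generates the caption sequentially. It is not the authors' implementation; feature dimensions, vocabulary size, layer counts, and the mean-pooled image summary are assumptions for the example.

```python
# Illustrative only: hybrid Transformer-LSTM caption decoder over precomputed
# self-supervised patch features (e.g., DINO-style). Sizes are assumptions.
import torch
import torch.nn as nn

class HybridCaptionDecoder(nn.Module):
    def __init__(self, feat_dim=768, hidden=512, vocab=10000, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)                      # project visual patch features
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.context = nn.TransformerEncoder(layer, num_layers=n_layers)  # global context modeling
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden * 2, hidden, batch_first=True)    # sequential caption generation
        self.out = nn.Linear(hidden, vocab)

    def forward(self, patch_feats, captions):
        # patch_feats: (B, N, feat_dim) visual tokens; captions: (B, T) token ids
        ctx = self.context(self.proj(patch_feats))                   # (B, N, hidden)
        summary = ctx.mean(dim=1, keepdim=True)                      # simple global image summary
        tok = self.embed(captions)                                   # (B, T, hidden)
        tok = torch.cat([tok, summary.expand(-1, tok.size(1), -1)], dim=-1)
        hid, _ = self.lstm(tok)                                      # gated sequential memory
        return self.out(hid)                                         # (B, T, vocab) logits

logits = HybridCaptionDecoder()(torch.randn(2, 196, 768), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```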

15 pages, 1080 KB  
Brief Report
Escherichia coli Mono-Association Modulates Ionotropic Receptor-Dependent Behaviors in Drosophila melanogaster
by Hazem Al Darwish, Mia Cacao, Tia Hart, Deep Patel, Sammi Russo, Safiyah Salama, Muqaddasa Tariq, Aina T. Ananda and Jennifer S. Sun
Insects 2026, 17(3), 275; https://doi.org/10.3390/insects17030275 - 3 Mar 2026
Viewed by 733
Abstract
Chemosensory systems are crucial for insect survival, enabling host-seeking, food acquisition, and oviposition site selection. While insect-associated microbes are known to influence host development and immunity, their role in modulating chemosensory behavior remains poorly understood. Here, we show that mono-association with Escherichia coli alters sensory-driven behaviors in both larval and adult axenic Drosophila melanogaster. In larvae, E. coli mono-association altered phototaxis and mechanosensory responses across genotypes, while changes in tunneling and thermosensory behaviors were reduced or absent in ionotropic receptor (IR) co-receptor mutants. In adults, E. coli mono-association increased attraction to fermentation cues (apple cider vinegar, ethanol) and enhanced sucrose consumption in wild-type and Orco-deficient flies, whereas these effects were reduced or absent in IR co-receptor mutants (IR25a and IR76b). Together, these findings indicate that under defined gnotobiotic conditions, E. coli exposure alters sensory-driven behavioral outputs relative to axenic controls. Effects are reduced or absent in IR co-receptor mutants, consistent with a role for IR pathways in mediating these behavioral shifts. These findings support a role for microbial cues in shaping insect sensory-driven behaviors and highlight the importance of microbial status in interpreting behavioral phenotypes. This work provides a framework for future studies investigating how microbial signals interact with conserved sensory pathways. Full article

16 pages, 1120 KB  
Article
Desirability Function-Based Optimization of Apple Pomace Extraction to Produce a Fermentable Must Using Central Composite Design Methodology (CCD) for the Development of Non- or Low-Alcohol Beverages
by Maider Lago, Ane Bordagaray, Ane Olañeta-Jainaga, Unai Aguirre-Cano, Iratxe Olazaran and Shuyana Deba-Rementeria
Fermentation 2026, 12(3), 122; https://doi.org/10.3390/fermentation12030122 - 24 Feb 2026
Viewed by 589
Abstract
The cider industry generates substantial amounts of apple pomace (AP), a by-product rich in fermentable sugars, organic acids, and bioactive compounds. This study aimed to optimize the extraction of fermentable must from AP using a central composite design (CCD) and to evaluate its potential for producing non- or low-alcohol (NoLo) beverages through fermentation. The extraction process was optimized using a desirability function targeting maximum sugar (26.69 g/L), malic acid (1.30 g/L), and nitrogen content (29.60 mg/L). The model revealed that extraction time and agitation had significant effects on sugar and malic acid, while an enzyme was less influential. The selected optimal condition was 4.6 h of maceration without enzyme addition providing an efficient must composition. Must was then fermented using combinations of Saccharomyces cerevisiae, Lactobacillus plantarum, and Pichia kluyveri. Physicochemical, microbiological, and volatile compound analyses were conducted throughout fermentation. Results showed that fermentation time and microbial composition significantly influenced sugar degradation, ethanol and lactic acid production and malic acid conversion, while mixed fermentation S. cerevisiae + L. plantarum and L. plantarum + P. kluyveri demonstrated the most promising aromatic profiles. Principal component analysis (PCA) confirmed distinct fermentation trajectories based on microbial consortia, highlighting the potential of AP as a sustainable substrate for developing innovative NoLo beverages. Full article
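As background for the desirability-function optimization mentioned above, the sketch below shows the standard Derringer-style "larger-is-better" desirability and its geometric-mean combination, which is how multiple CCD responses are typically merged into a single objective. Only the target values (26.69 g/L sugar, 1.30 g/L malic acid, 29.60 mg/L nitrogen) come from the abstract; the lower bounds and measured values are purely illustrative assumptions.

```python
# A minimal sketch of desirability-based multi-response optimization.
from math import prod

def desirability(y, low, target, weight=1.0):
    """Larger-is-better desirability: 0 at/below `low`, 1 at/above `target`."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

# (measured value, assumed lower bound, target taken from the abstract)
responses = {
    "sugar_g_per_L":      (24.00, 15.0, 26.69),
    "malic_acid_g_per_L": (1.10, 0.50, 1.30),
    "nitrogen_mg_per_L":  (27.00, 10.0, 29.60),
}
d_values = [desirability(y, lo, t) for y, lo, t in responses.values()]
overall_D = prod(d_values) ** (1 / len(d_values))  # geometric mean of individual desirabilities
print(round(overall_D, 3))
```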

20 pages, 2326 KB  
Article
A LLaMA-Based Efficient Fine-Tuning Method for Image Captioning Using Multi-Feature Dynamic Prompts
by Yongyang Yin, Hengyu Cao, Chunsheng Zhang, Faxun Jin, Xin Liu and Jun Lin
Appl. Sci. 2026, 16(4), 1857; https://doi.org/10.3390/app16041857 - 12 Feb 2026
Viewed by 315
Abstract
To address the trade-off between parameter scale and generation quality in Vision-Language Models (VLMs), this study proposes a Multi-Feature Dynamic Instruction Tuning (MFDIT) image captioning model based on LLaMA. By integrating CLIP-based global features with SAM-derived local features, the model constructs a multi-level visual representation. Additionally, a Dynamic Prompt Adapter is designed to enable cross-modal semantic alignment with adaptive flexibility. Combined with a Low-Rank Adaptation (LoRA) fine-tuning strategy, the proposed method enhances the model’s capability in describing diverse images while training only 20 million parameters, accounting for merely 0.05% of the total parameter volume. Experimental results demonstrate that the model achieves a CIDEr score of 126.7 on the MSCOCO dataset, surpassing traditional adapter-based approaches by 3.0 points. Moreover, in the MME Benchmark evaluation, the proposed model outperforms the mainstream LLaMA-Adapter V2 by 7.3% and 3.8% in OCR and object counting tasks, respectively. Ablation studies further validate the synergistic effects of multi-feature fusion and dynamic instruction optimization. This research provides an efficient solution for parameter-efficient multimodal model training and potential deployment in resource-constrained environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
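As a rough illustration of the LoRA idea this entry relies on (freezing the pretrained weights and training only a low-rank update), here is a minimal sketch; it is not the paper's code, and the rank, scaling factor, and layer sizes are assumptions.

```python
# Illustrative LoRA-style linear layer: base weight frozen, low-rank update trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_f, out_f, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_f, out_f, bias=False)
        self.base.weight.requires_grad_(False)               # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)   # trainable low-rank factor
        self.B = nn.Parameter(torch.zeros(out_f, r))         # zero-init: training starts at the base model
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(4096, 4096)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in layer.parameters() if not p.requires_grad)
print(trainable, frozen)  # 65,536 trainable vs. ~16.8 million frozen in this single layer
```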

19 pages, 362 KB  
Article
A Content Analysis of Alcohol Marketing on Instagram: Examining Its Contribution to an Australian Aquatic Alcogenic Environment
by Gemma Crawford, Renee N. Carey, Nicola D’Orazio, Jonine Jancey and Justine E. Leavy
Int. J. Environ. Res. Public Health 2026, 23(2), 217; https://doi.org/10.3390/ijerph23020217 - 9 Feb 2026
Viewed by 598
Abstract
The alcohol industry uses social media platforms to normalise alcohol consumption in aquatic locations, which can lead to injury and drowning. This research investigated alcohol content posted on Instagram to determine marketing strategies and the extent to which aquatic locations and activities were featured. The Australian Instagram accounts of five alcohol brands were identified across beer, spirits, and cider. The 20 most recent posts were captured for analysis. Each image and caption were saved and coded to examine (1) ‘What marketing characteristics are used by alcohol brands on Instagram?’; and (2) ‘Which aquatic locations and activities featured in the posted Instagram content?’. Ninety-nine posts comprising 176 individual images were identified and analysed. Alcoholic beverages were shown in 53.4% of images. Aquatic locations were observed in 21.6% and aquatic activities in 17.0%. The use of aquatic locations and activities varied significantly by brand (p < 0.001). Advertising themes included associating alcohol with hobbies (29.6%) and mateship (22.7%). Only 21.6% of images contained an age restriction or warning; all accounts employed some form of age-gating. Social media platforms provide alcohol industry actors the opportunity to increase their exposure to young people. Brands used imagery that associated alcohol with fun and mateship and promoted the use of alcohol in aquatic locations. Full article
(This article belongs to the Section Global Health)
22 pages, 1944 KB  
Article
Automated Radiological Report Generation from Breast Ultrasound Images Using Vision and Language Transformers
by Shaheen Khatoon and Azhar Mahmood
J. Imaging 2026, 12(2), 68; https://doi.org/10.3390/jimaging12020068 - 6 Feb 2026
Viewed by 568
Abstract
Breast ultrasound imaging is widely used for the detection and characterization of breast abnormalities; however, generating detailed and consistent radiological reports remains a labor-intensive and subjective process. Recent advances in deep learning have demonstrated the potential of automated report generation systems to support clinical workflows, yet most existing approaches focus on chest X-ray imaging and rely on convolutional–recurrent architectures with limited capacity to model long-range dependencies and complex clinical semantics. In this work, we propose a multimodal Transformer-based framework for automatic breast ultrasound report generation that integrates visual and textual information through cross-attention mechanisms. The proposed architecture employs a Vision Transformer (ViT) to extract rich spatial and morphological features from ultrasound images. For textual embedding, pretrained language models (BERT, BioBERT, and GPT-2) are implemented in various encoder–decoder configurations to leverage both general linguistic knowledge and domain-specific biomedical semantics. A multimodal Transformer decoder is implemented to autoregressively generate diagnostic reports by jointly attending to visual features and contextualized textual embeddings. We conducted an extensive quantitative evaluation using standard report generation metrics, including BLEU, ROUGE-L, METEOR, and CIDEr, to assess lexical accuracy, semantic alignment, and clinical relevance. Experimental results demonstrate that BioBERT-based models consistently outperform general domain counterparts in clinical specificity, while GPT-2-based decoders improve linguistic fluency. Full article
(This article belongs to the Section AI in Imaging)
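The cross-attention pattern described in this abstract can be illustrated with a minimal sketch: a Transformer decoder generates report tokens while attending to image features as memory. The snippet below is illustrative only; dimensions, vocabulary size, and layer counts are assumptions, and the pretrained ViT/BERT/BioBERT/GPT-2 components are replaced by random tensors and generic layers.

```python
# Illustrative decoder with cross-attention to image features (not the authors' model).
import torch
import torch.nn as nn

d_model, vocab = 512, 8000
dec_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
embed = nn.Embedding(vocab, d_model)
to_vocab = nn.Linear(d_model, vocab)

img_feats = torch.randn(2, 197, d_model)             # stand-in for ViT patch (+CLS) features
tokens = torch.randint(0, vocab, (2, 30))            # stand-in for report tokens generated so far
causal_mask = nn.Transformer.generate_square_subsequent_mask(30)

hidden = decoder(embed(tokens), img_feats, tgt_mask=causal_mask)  # self-attention + cross-attention to the image
logits = to_vocab(hidden)                             # per-position next-token distribution
print(logits.shape)                                   # torch.Size([2, 30, 8000])
```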

26 pages, 6390 KB  
Article
Image Captioning Using Enhanced Cross-Modal Attention with Multi-Scale Aggregation for Social Hotspot and Public Opinion Monitoring
by Shan Jiang, Yingzhao Chen, Rilige Chaomu and Zheng Liu
Inventions 2026, 11(1), 13; https://doi.org/10.3390/inventions11010013 - 2 Feb 2026
Viewed by 475
Abstract
Large volumes of images shared on social media have made image captioning an important tool for social hotspot identification and public opinion monitoring, where accurate visual–language alignment is essential for reliable analysis. However, existing image captioning models based on BLIP-2 (Bootstrapped Language–Image Pre-training) often struggle with complex, context-rich, and socially meaningful images in real-world social media scenarios, mainly due to insufficient cross-modal interaction, redundant visual token representations, and an inadequate ability to capture multi-scale semantic cues. As a result, the generated captions tend to be incomplete or less informative. To address these limitations, this paper proposes ECMA (Enhanced Cross-Modal Attention), a lightweight module integrated into the Querying Transformer (Q-Former) of BLIP-2. ECMA enhances cross-modal interaction through bidirectional attention between visual features and query tokens, enabling more effective information exchange, while a multi-scale visual aggregation strategy is introduced to model semantic representations at different levels of abstraction. In addition, a semantic residual gating mechanism is designed to suppress redundant information while preserving task-relevant features. ECMA can be seamlessly incorporated into BLIP-2 without modifying the original architecture or fine-tuning the vision encoder or the large language model, and is fully compatible with OPT (Open Pre-trained Transformer)-based variants. Experimental results on the COCO (Common Objects in Context) benchmark demonstrate consistent performance improvements, where ECMA improves the CIDEr (Consensus-based Image Description Evaluation) score from 144.6 to 146.8 and the BLEU-4 score from 42.5 to 43.9 on the OPT-6.7B model, corresponding to relative gains of 1.52% and 3.29%, respectively, while also achieving competitive METEOR (Metric for Evaluation of Translation with Explicit Ordering) scores. Further evaluations on social media datasets show that ECMA generates more coherent, context-aware, and socially informative captions, particularly for images involving complex interactions and socially meaningful scenes. Full article
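The relative gains quoted in this abstract follow directly from the absolute scores it reports; the quick check below reproduces the arithmetic.

```python
# Verify the relative gains reported for the OPT-6.7B variant:
# CIDEr 144.6 -> 146.8 and BLEU-4 42.5 -> 43.9.
def relative_gain(before, after):
    return (after - before) / before * 100

print(f"CIDEr:  {relative_gain(144.6, 146.8):.2f}%")  # 1.52%
print(f"BLEU-4: {relative_gain(42.5, 43.9):.2f}%")    # 3.29%
```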

23 pages, 1579 KB  
Article
Exploring Difference Semantic Prior Guidance for Remote Sensing Image Change Captioning
by Yunpeng Li, Xiangrong Zhang, Guanchun Wang and Tianyang Zhang
Remote Sens. 2026, 18(2), 232; https://doi.org/10.3390/rs18020232 - 11 Jan 2026
Viewed by 564
Abstract
Understanding complex change scenes is a crucial challenge in the remote sensing field. The remote sensing image change captioning (RSICC) task has emerged as a promising approach to translating the changes that appear between bi-temporal remote sensing images into textual descriptions, enabling users to make accurate decisions. Current RSICC methods frequently struggle with consistency in contextual awareness and semantic prior guidance. Therefore, this study explores a difference semantic prior guidance network that reasons out context-rich sentences to capture the visual changes that appear. Specifically, a context-aware difference module is introduced to guarantee the consistency of unchanged/changed context features, strengthening multi-level change information to improve semantic change feature representation. Moreover, to effectively mine higher-level cognitive ability for reasoning about salient/weak changes, we employ difference comprehension with shallow change information to realize semantic change knowledge learning. In addition, the designed parallel cross-refined attention in the Transformer decoder balances visual differences and semantic knowledge for implicit knowledge distillation, enabling fine-grained perception of semantic details and reducing pseudo-changes. Compared with advanced algorithms on the LEVIR-CC and Dubai-CC datasets, experimental results validate the outstanding performance of the designed model on RSICC tasks. Notably, on the LEVIR-CC dataset, it reaches a CIDEr score of 143.34%, representing a 3.11% improvement over the most competitive method, SAT-cap. Full article

31 pages, 7858 KB  
Article
Domain-Adapted MLLMs for Interpretable Road Traffic Accident Analysis Using Remote Sensing Imagery
by Bing He, Wei He, Qing Chang, Wen Luo and Lingli Xiao
ISPRS Int. J. Geo-Inf. 2026, 15(1), 8; https://doi.org/10.3390/ijgi15010008 - 21 Dec 2025
Cited by 1 | Viewed by 659
Abstract
Traditional road traffic accident analysis has long relied on structured data, making it difficult to integrate high-dimensional heterogeneous information such as remote sensing imagery and leading to an incomplete understanding of accident scene environments. This study proposes a road traffic accident analysis framework based on Multimodal Large Language Models. The approach integrates high-resolution remote sensing imagery with structured accident data through a three-stage progressive training pipeline. Specifically, we fine-tune three open-source vision–language models using Low-Rank Adaptation (LoRA) to sequentially optimize the model’s capabilities in visual environmental description, multi-task accident classification, and Chain-of-Thought (CoT) driven causal reasoning. A multimodal dataset was constructed containing remote sensing image descriptions, accident classification labels, and interpretable reasoning chains. Experimental results show that the fine-tuned model achieved a maximum improvement in the CIDEr score for image description tasks. In the joint classification task of accident severity and duration, the model achieved an accuracy of 71.61% and an F1-score of 0.8473. In the CoT reasoning task, both METEOR and CIDEr scores improved significantly. These results validate the effectiveness of structured reasoning mechanisms in multimodal fusion for transportation applications, providing a feasible path toward interpretable and intelligent analysis for real-world traffic management. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

19 pages, 864 KB  
Article
Unique Alcoholic Beverages Derived from Pear and Apple Juice Using Probiotic Yeast
by Andrea Maria Patelski, Maria Balcerek, Urszula Dziekońska, Katarzyna Pielech-Przybylska, Aleksandra Raczyk, Michalina Wasilewska and Katarzyna Dębska
Appl. Sci. 2025, 15(24), 13039; https://doi.org/10.3390/app152413039 - 11 Dec 2025
Viewed by 947
Abstract
Fermented fruit beverages enriched with probiotic microorganisms are gaining increasing interest due to their potential to combine sensory appeal with functional properties. In this study, apple and pear juices were fermented using Saccharomyces cerevisiae var. boulardii and the reference wine strain S. cerevisiae RV002, followed by sweetening with xylitol, erythritol, or stevia. The aim was to evaluate the fermentative performance of the probiotic yeast, the chemical composition of the resulting beverages, and the influence of sweeteners on the results of sensory evaluation. Both yeast strains efficiently produced ethanol within typical ranges for cider and perry. The highest ethanol concentration was observed in apple juice fermented with S. boulardii (49.01 ± 0.60 g/L), while the lowest occurred in pear juice fermented with S. boulardii (41.28 ± 1.00 g/L). Total phenolic content (TPC) decreased after apple juice fermentation but remained largely unchanged in pear juice. Notably, S. boulardii use resulted in the highest post-fermentation TPC value in pear juice (0.34 ± 0.002 g/L), while the lowest value was obtained in apple juice fermented with RV002 strain (0.27 ± 0.005 g/L). Our findings highlight the potential of S. boulardii for producing novel functional alcoholic beverages. Future work should examine long-term probiotic viability and optimise formulations for commercial application. Full article
(This article belongs to the Special Issue Biosynthesis and Applications of Natural Products)

22 pages, 2905 KB  
Article
Image Captioning with Object Detection and Facial Expression Recognition for Smart Industry
by Abdul Saboor Khan, Abdul Haseeb Khan, Muhammad Jamshed Abbass and Imran Shafi
Bioengineering 2025, 12(12), 1325; https://doi.org/10.3390/bioengineering12121325 - 5 Dec 2025
Cited by 1 | Viewed by 1295
Abstract
This paper presents a new image captioning system that incorporates facial expression recognition to improve the emotional and contextual comprehension of the generated captions. Affective cues are combined with visual features, enabling semantically rich and emotionally aware descriptions. Experiments were carried out on two newly created datasets, FlickrFace11k and COCOFace15k, using standard metrics such as BLEU, METEOR, ROUGE-L, CIDEr, and SPICE to analyze effectiveness. The proposed model outperformed baselines such as Show-Attend-Tell and Up-Down, remaining consistently better across all scores. Notably, it achieved gains of 2.5 points on CIDEr and 1.0 on SPICE, indicating closer agreement with captions written by people. A 5-fold cross-validation confirmed the model’s robustness, with minimal standard deviation across folds (<±0.2). Qualitative results further demonstrated its ability to capture fine-grained emotional expressions often missed by conventional models. These findings underscore the model’s potential in affective computing, assistive technologies, and human-centric AI applications. The pipeline is designed for on-prem/edge deployment with lightweight interfaces to IoT middleware (MQTT/OPC UA), enabling smart-factory integration. These characteristics align the method with Industry 4.0 sensor networks and human-centric analytics. Full article
(This article belongs to the Special Issue AI-Driven Imaging and Analysis for Biomedical Applications)

25 pages, 1831 KB  
Review
Phytotherapy in Pediatric Dentistry: A Narrative Review of Clinical Applications and Evidence
by Zorela Elena Miclăuș, Rahela Tabita Moca, Ruxandra-Ilinca Matei, Abel Emanuel Moca, Adriana Țenț and Anca Porumb
Children 2025, 12(11), 1559; https://doi.org/10.3390/children12111559 - 17 Nov 2025
Viewed by 1643
Abstract
Background/Objectives: Phytotherapy, the use of plant-derived bioactive compounds for therapeutic purposes, has gained increasing attention in dentistry as a natural, well-tolerated, and culturally acceptable adjunct to conventional treatments. In pediatric dentistry, its potential relevance lies in its antimicrobial, anti-inflammatory, and antioxidant properties, which may support oral health, caries prevention, pulp vitality, and gingival health. This narrative review aimed to summarize the current clinical evidence regarding the application of phytotherapeutic agents in pediatric oral care. Methods: A narrative review was conducted according to SANRA guidelines, including clinical studies on plant-based products used for preventive or therapeutic purposes in children and adolescents. Results: Forty-three clinical studies met the inclusion criteria. The most commonly investigated agents included licorice, green tea, cocoa husk, cranberry, pomegranate, Aloe vera, and miswak. These agents demonstrated antimicrobial activity against cariogenic bacteria, reduction in plaque and gingival indices, and favorable healing in pulp therapies. In endodontics, Aloe vera-derived acemannan and Ankaferd Blood Stopper® showed outcomes comparable to conventional materials, while pomegranate and apple cider vinegar exhibited partial antibacterial effects as irrigants. Conclusions: Phytotherapy shows promise as a complementary approach in pediatric dentistry, contributing to caries prevention, gingivitis control, and pulp healing. However, current evidence remains limited by small sample sizes, short-term follow-ups, and heterogeneity in formulations. Further trials are required to confirm efficacy, ensure safety, and standardize phytotherapeutic applications in pediatric oral care. Full article
(This article belongs to the Section Pediatric Dentistry & Oral Medicine)

18 pages, 1444 KB  
Article
Strain-Dependent Contributions of Hanseniaspora uvarum Isolate to Apple Cider Fermentation, Chemical Composition and Aroma Complexity
by Marko Malićanin, Sandra Stamenković Stojanović, Jelena Stanojević, Stojan Mančić, Bojana Danilović and Ivana Karabegović
Fermentation 2025, 11(11), 650; https://doi.org/10.3390/fermentation11110650 - 17 Nov 2025
Viewed by 1082
Abstract
Cider fermentation is strongly influenced by yeast metabolism, which determines both fermentation dynamics and aroma complexity. While Saccharomyces species remain the standard choice, increasing attention has been directed toward non-Saccharomyces yeasts such as Hanseniaspora uvarum, known for their high ester formation and positive impact on sensory attributes. In this study, three native H. uvarum strains were compared with Saccharomyces bayanus in cider production. Fermentation kinetics, physicochemical parameters, and volatile and sensory profiles were assessed. All H. uvarum strains depleted sugars effectively, but strain-specific differences were evident: Kr-4 exhibited the highest ethanol (4.92% v/v) and glycerol (2.88 g/L) production, while Kd-13 showed reduced fermentative vigor. GC–MS analysis revealed higher alcohols as the dominant volatiles, with 3-methyl-1-butanol and phenylethyl alcohol most abundant. The highest concentration of phenylethyl alcohol was found in cider fermented with H. uvarum Kd-13. Ester diversity was also strain-dependent, with H. uvarum Kd-13 producing increased levels of monoethyl succinate and ethyl phenylacetate. Sensory evaluation identified ciders produced with H. uvarum Kd-13 and Kr-4 as the most complex, whereas the control sample was perceived to have a lighter body and intensive acidity. These findings highlight significant strain-level variability within H. uvarum, underlining its potential for tailoring cider aroma and sensory quality. Full article
(This article belongs to the Special Issue The Role of Non-Saccharomyces Yeasts in Crafting Alcoholic Drinks)

19 pages, 4107 KB  
Article
Structured Prompting and Collaborative Multi-Agent Knowledge Distillation for Traffic Video Interpretation and Risk Inference
by Yunxiang Yang, Ningning Xu and Jidong J. Yang
Computers 2025, 14(11), 490; https://doi.org/10.3390/computers14110490 - 9 Nov 2025
Cited by 1 | Viewed by 1526
Abstract
Comprehensive highway scene understanding and robust traffic risk inference are vital for advancing Intelligent Transportation Systems (ITS) and autonomous driving. Traditional approaches often struggle with scalability and generalization, particularly under the complex and dynamic conditions of real-world environments. To address these challenges, we introduce a novel structured prompting and multi-agent collaborative knowledge distillation framework that enables automatic generation of high-quality traffic scene annotations and contextual risk assessments. Our framework orchestrates two large vision–language models (VLMs): GPT-4o and o3-mini, using a structured Chain-of-Thought (CoT) strategy to produce rich, multiperspective outputs. These outputs serve as knowledge-enriched pseudo-annotations for supervised fine-tuning of a much smaller student VLM. The resulting compact 3B-scale model, named VISTA (Vision for Intelligent Scene and Traffic Analysis), is capable of understanding low-resolution traffic videos and generating semantically faithful, risk-aware captions. Despite its significantly reduced parameter count, VISTA achieves strong performance across established captioning metrics (BLEU-4, METEOR, ROUGE-L, and CIDEr) when benchmarked against its teacher models. This demonstrates that effective knowledge distillation and structured role-aware supervision can empower lightweight VLMs to capture complex reasoning capabilities. The compact architecture of VISTA facilitates efficient deployment on edge devices, enabling real-time risk monitoring without requiring extensive infrastructure upgrades. Full article
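A minimal sketch of the distillation data flow described above: outputs from the teacher models (a scene description and a risk-reasoning chain) are merged into a single instruction-style record used to fine-tune the student VLM. The field names, instruction wording, and example content below are assumptions, not the authors' schema.

```python
# Illustrative only: merging teacher outputs into one supervision record.
import json

def build_pseudo_annotation(clip_id, scene_caption, risk_reasoning):
    """Combine a scene description and a risk-reasoning chain into one record."""
    return {
        "clip": clip_id,
        "instruction": "Describe the traffic scene and assess the risk step by step.",
        "response": f"Scene: {scene_caption}\nRisk analysis: {risk_reasoning}",
    }

record = build_pseudo_annotation(
    "highway_0042",
    "Dense traffic on a wet three-lane highway with a stalled vehicle on the shoulder.",
    "Reduced friction and the stalled vehicle raise rear-end collision risk; overall risk: high.",
)
print(json.dumps(record, indent=2))
```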

29 pages, 2851 KB  
Review
Fermented Beverages from Amazonian Fruits: Nutritional Characteristics and Bioactive Compounds
by Bárbara N. Batista, Ana Cristina Correia, António M. Jordão and Patrícia M. Albuquerque
Beverages 2025, 11(5), 152; https://doi.org/10.3390/beverages11050152 - 21 Oct 2025
Cited by 2 | Viewed by 3130
Abstract
Fermented beverages are drinks that undergo a fermentation process involving yeasts, bacteria, or other microbial groups, leading to the conversion of natural sugars into alcohol, acids, and gases. Beer, wine, kombucha, kefir, and cider are examples of fermented beverages produced and consumed worldwide, representing a rapidly growing market. However, demands for these products have expanded beyond aspects such as size, shape, and storage conditions. There is an increasing demand for eco-sustainable, fresh products tailored to individuals with dietary restrictions and/or enriched with nutrients and health-promoting compounds. In this context, the market has witnessed a surge in alternative fermented beverages made from nutrient-rich or exotic-flavored raw materials, highlighting their versatility. A noteworthy example is the application of Amazonian fruits, which, despite being primarily consumed fresh by local populations, have been extensively characterized in scientific studies for their abundance of molecules with beneficial effects and their use in products like juices, ice creams, and fermented beverages. Thus, this review aims to explore the nutritional composition and microbiological aspects of different fermented beverages produced from several Amazonian fruits. Full article
(This article belongs to the Special Issue Bioactive Compounds in Fermented Beverages)