Search Results (159)

Search Parameters:
Keywords = automated content production

24 pages, 3135 KB  
Article
Layer-by-Layer Integration of Electrospun Nanofibers in FDM 3D Printing for Hierarchical Composite Fabrication
by Jaymin Vrajlal Sanchaniya, Hilary Smogor, Valters Gobins, Vincent Noël, Inga Lasenko and Simas Rackauskas
Polymers 2026, 18(1), 78; https://doi.org/10.3390/polym18010078 (registering DOI) - 27 Dec 2025
Abstract
This study presents a novel integrated manufacturing approach that combines fused deposition modeling (FDM) 3D printing with in situ electrospinning to fabricate hierarchical composite structures composed of polylactic acid (PLA) reinforced with polyacrylonitrile (PAN) nanofibers. A mounting fixture was employed to enable layer-by-layer nanofiber deposition directly onto printed PLA layers in a continuous automated process, eliminating the need for prefabricated electrospun nanofiber mats. The influences of nozzle temperature (210–230 °C) and electrospinning time (5–15 min per layer) on mechanical, thermal, and morphological properties were systematically investigated. Optimal performance was achieved at an FDM nozzle temperature of 220 °C with 5 min of electrospinning time (sample E1), showing a 36.5% increase in tensile strength (71 MPa), a 33.3% increase in Young’s modulus (2.8 GPa), and a 62.0% increase in flexural strength (128 MPa) compared with the neat PLA. This enhancement resulted from the complete infiltration of molten PLA into the thin nanofiber mats, creating true fiber–matrix integration. Excessive nanofiber content (15 min ES) caused a 36.5% reduction in strength due to delamination and incomplete infiltration. Thermal analysis revealed a decrease in glass transition temperature (1.2 °C) and onset of thermal degradation (5.3–15.2 °C) with nanofiber integration. Fracture morphology confirmed that to achieve optimal properties, it was critical to balance the nanofiber reinforcement content with the depth of infiltration, as excessive content created poorly bonded interleaved layers. This integrated fabrication platform enables the production of lightweight hierarchical composites with multiscale, custom-made reinforcement for applications in biomedical scaffolds, protective equipment, and structural components. Full article
(This article belongs to the Special Issue Advanced Electrospinning Technology for Polymer Materials)
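
As an editorial aside, a quick back-calculation (our arithmetic, not figures from the paper) recovers the neat-PLA baselines implied by the percentage gains reported above:

```python
# Back-calculate the neat-PLA baselines implied by the reported gains.
# Composite values and percentage increases are from the abstract; the
# derived baselines are our inference, not numbers stated by the authors.
reported = {
    "tensile strength (MPa)":  (71.0, 0.365),
    "Young's modulus (GPa)":   (2.8, 0.333),
    "flexural strength (MPa)": (128.0, 0.620),
}
for name, (composite, gain) in reported.items():
    baseline = composite / (1.0 + gain)  # composite = baseline * (1 + gain)
    print(f"{name}: {composite} reported -> neat PLA ~ {baseline:.1f}")
```

The implied baselines (about 52 MPa, 2.1 GPa, and 79 MPa) are consistent with typical neat-PLA values, a useful sanity check on the reported percentages.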

25 pages, 1303 KB  
Article
Digital Twin Irrigation Strategies to Mitigate Drought Effects in Processing Tomatoes
by Sandra Millán, Jaume Casadesús, Jose María Vadillo and Carlos Campillo
Horticulturae 2026, 12(1), 28; https://doi.org/10.3390/horticulturae12010028 (registering DOI) - 26 Dec 2025
Abstract
The increasing frequency and intensity of droughts, a direct consequence of climate change, represent one of the main threats to agriculture, especially for crops with a high water demand such as the processing tomato. The objective of this study is to evaluate the potential of the IrriDesK digital twin (DT) as a tool for automated irrigation management and the implementation of regulated deficit irrigation (RDI) strategies tailored to the crop’s water status and phenological stage. The trial was conducted in an experimental plot over two consecutive growing seasons (2023–2024), comparing three irrigation treatments: full irrigation based on lysimeter measurements (T1) and two RDI strategies programmed through IrriDesK (T2 and T3). The results showed water consumption reductions of 30–45% in treatments T2 and T3 compared to treatment T1, with applied volumes of 277–400 mm versus approximately 570 mm in treatment T1, thus remaining within the sustainability threshold (<500 mm, equivalent to 5000 m³ ha⁻¹). This threshold corresponds to the maximum seasonal allocation typically available for processing tomato under drought conditions in the region and was used to configure the DT’s seasonal irrigation plan. The monitoring of leaf water potential (Ψleaf) and the normalized difference vegetation index (NDVI) confirmed the DT’s ability to dynamically adjust irrigation and maintain an adequate water status during critical crop phases. In terms of productivity, treatment T1 achieved the highest yields (≈135 t ha⁻¹), while RDI strategies reduced production to 90–108 t ha⁻¹, but improved fruit quality, with increases in total soluble solids content of up to 10–15% (°Brix). These results demonstrate that IrriDesK is an effective tool for the optimization of water use while maintaining crop profitability and enhancing the resilience of processing tomatoes to drought scenarios. Full article
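
The unit conversion behind the threshold is worth making explicit: 1 mm of irrigation depth over 1 ha is 10 m³, so the 500 mm cap equals the 5000 m³ ha⁻¹ allocation. A minimal sketch of that bookkeeping, with approximate applied volumes taken from the abstract:

```python
# Seasonal irrigation bookkeeping: 1 mm of depth over 1 ha equals 10 m^3,
# so the 500 mm sustainability cap is the 5000 m^3/ha allocation in the text.
# Applied volumes below are approximate values from the abstract.
MM_TO_M3_PER_HA = 10.0
THRESHOLD_MM = 500.0

applied_mm = {"T1 (full irrigation)": 570.0, "T2 (RDI)": 400.0, "T3 (RDI)": 277.0}
for treatment, mm in applied_mm.items():
    status = "within" if mm < THRESHOLD_MM else "exceeds"
    print(f"{treatment}: {mm:.0f} mm = {mm * MM_TO_M3_PER_HA:.0f} m^3/ha ({status} threshold)")
```
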
36 pages, 513 KB  
Article
Comparative Evaluation of GPT-4o, GPT-OSS-120B and Llama-3.1-8B-Instruct Language Models in a Reproducible CV-to-JSON Extraction Pipeline
by Marcin Nawalny, Mateusz Łępicki, Tomasz Latkowski, Sebastian Bujak, Michał Bukowski, Bartosz Świderski, Grzegorz Baranik, Bogusz Nowak, Robert Zakowicz, Łukasz Dobrakowski, Agnieszka Oczeretko, Piotr Sadowski, Konrad Szlaga, Bartłomiej Kubica and Jarosław Kurek
Appl. Sci. 2026, 16(1), 217; https://doi.org/10.3390/app16010217 - 24 Dec 2025
Viewed by 91
Abstract
Recruitment automation increasingly relies on Large Language Models (LLMs) for extracting structured information from unstructured CVs and job postings. However, production data often arrive as heterogeneous, privacy-sensitive PDFs, limiting reproducibility and compliance. This study introduces a deterministic, GDPR-aligned pipeline that converts recruitment documents into structured, anonymized Markdown and subsequently into validated JSON ready for downstream AI processing. The workflow combines the Docling PDF-to-Markdown converter with a two-pass anonymization protocol and evaluates three LLM back-ends—GPT-4o (Azure, frozen proprietary), GPT-OSS-120B and Llama-3.1-8B-Instruct—using identical prompts and schema constraints under near-zero-temperature decoding. Each model’s output was assessed across 2280 multilingual CVs using two complementary metrics: reference-based completeness and content similarity. The proprietary GPT-4o achieved perfect schema coverage and served as the reproducibility baseline, while the open-weight models reached 73–79% completeness and 59–72% content similarity depending on section complexity. Llama-3.1-8B-Instruct performed strongly on standardized sections such as contact and legal, whereas GPT-OSS-120B better handled less frequent narrative fields. The results demonstrate that fully deterministic, auditable document extraction is achievable with both proprietary and open LLMs when guided by strong schema validation and anonymization. The proposed pipeline bridges the gap between document ingestion and reliable, bias-aware data preparation for AI-driven recruitment systems. Full article
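
A minimal sketch of the pattern the pipeline relies on (near-zero-temperature decoding plus a hard schema gate), assuming an OpenAI-compatible client and a hypothetical, heavily trimmed schema; an illustration of the technique, not the authors' code:

```python
"""Deterministic-ish CV-to-JSON extraction: low-temperature decoding plus a
schema gate. A sketch of the pattern in the abstract, not the authors' pipeline."""
import json
from jsonschema import validate, ValidationError
from openai import OpenAI  # any OpenAI-compatible endpoint

CV_SCHEMA = {  # hypothetical, heavily trimmed schema
    "type": "object",
    "required": ["contact", "experience"],
    "properties": {
        "contact": {"type": "object"},
        "experience": {"type": "array", "items": {"type": "object"}},
    },
}

client = OpenAI()

def extract_cv(markdown_cv: str, model: str = "gpt-4o", retries: int = 2) -> dict:
    prompt = (
        "Extract the CV below into JSON matching this schema. Return JSON only.\n\n"
        "Schema:\n" + json.dumps(CV_SCHEMA) + "\n\nCV:\n" + markdown_cv
    )
    for _ in range(retries + 1):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # near-deterministic decoding, as described
        )
        try:
            data = json.loads(resp.choices[0].message.content)
            validate(instance=data, schema=CV_SCHEMA)  # hard schema gate
            return data
        except (json.JSONDecodeError, ValidationError):
            continue  # re-ask; invalid output never reaches downstream steps
    raise RuntimeError("no schema-valid JSON after retries")
```

The schema gate is what makes the output auditable: anything that fails validation is retried rather than passed downstream.
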
19 pages, 2700 KB  
Article
Content Generation Through the Integration of Markov Chains and Semantic Technology (CGMCST)
by Liliana Ibeth Barbosa-Santillán and Edgar León-Sandoval
Appl. Sci. 2025, 15(23), 12687; https://doi.org/10.3390/app152312687 - 30 Nov 2025
Viewed by 424
Abstract
In today’s rapidly evolving digital landscape, businesses are constantly under pressure to produce high-quality, engaging content for various marketing channels, including blog posts, social media updates, and email campaigns. However, the traditional manual content generation process is often time-consuming, resource-intensive, and inconsistent in maintaining the desired messaging and tone. As a result, the content production process can become a bottleneck, delay marketing campaigns, and reduce organizational agility. Furthermore, manual content generation introduces the risk of inconsistencies in tone, style, and messaging across different platforms and pieces of content. These inconsistencies can confuse the audience and dilute the message. We propose a hybrid approach for content generation based on the integration of Markov Chains with Semantic Technology (CGMCST). Based on the probabilistic nature of Markov chains, this approach allows an automated system to predict sequences of words and phrases, thereby generating coherent and contextually accurate content. Moreover, the application of semantic technology ensures that the generated content is semantically rich and maintains a consistent tone and style. Consistency across all marketing materials strengthens the message and enhances audience engagement. Automated content generation can scale effortlessly to meet increasing demands. The algorithm obtained an entropy of 9.6896 for the stationary distribution, indicating that the model can accurately predict the next word in sequences and generate coherent, contextually appropriate content that supports the efficacy of this novel CGMCST approach. The simulation was executed for a fixed time of 10,000 cycles, considering the weights based on the top three topics. These weights are determined both by the global document index and by term. Among the top keywords ranked by stationary probability, “people” has a stationary probability of 0.004398. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
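
The two quantities the abstract reports, a stationary distribution over words and its entropy, are straightforward to compute for any word-level Markov chain. A self-contained toy sketch (not the CGMCST system, and with an invented corpus):

```python
"""Word-level Markov chain: stationary distribution and its Shannon entropy.
Illustrates the quantities in the abstract; toy corpus, not CGMCST itself."""
import numpy as np

tokens = "people create content people share content people read posts".split()
vocab = sorted(set(tokens))
ix = {w: i for i, w in enumerate(vocab)}

# Transition counts, wrapping the last token to the first so every row is stochastic.
P = np.zeros((len(vocab), len(vocab)))
for a, b in zip(tokens, tokens[1:] + tokens[:1]):
    P[ix[a], ix[b]] += 1
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = pi / pi.sum()

# Shannon entropy of the stationary distribution: H = -sum(pi * log2(pi)).
H = -float(np.sum(pi[pi > 0] * np.log2(pi[pi > 0])))
print({w: round(float(pi[ix[w]]), 4) for w in vocab}, f"H = {H:.4f} bits")

# Generate a short sequence by sampling successive transitions.
rng = np.random.default_rng(0)
w, out = "people", ["people"]
for _ in range(6):
    w = vocab[rng.choice(len(vocab), p=P[ix[w]])]
    out.append(w)
print(" ".join(out))
```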

24 pages, 2375 KB  
Article
Label-Efficient PCB Defect Detection with an ECA–DCN-Lite–BiFPN–CARAFE-Enhanced YOLOv5 and Single-Stage Semi-Supervision
by Zhenxia Wang, Nurulazlina Ramli and Tzer Hwai Gilbert Thio
Sensors 2025, 25(23), 7283; https://doi.org/10.3390/s25237283 - 29 Nov 2025
Viewed by 460
Abstract
Printed circuit board (PCB) defect detection is critical to manufacturing quality, yet tiny, low-contrast defects and limited annotations challenge conventional systems. This study develops an ECA–DCN-lite–BiFPN–CARAFE-enhanced YOLOv5 detector by modifying You Only Look Once (YOLO) version 5 (YOLOv5) with Efficient Channel Attention (ECA) for channel re-weighting, a lightweight Deformable Convolution (DCN-lite) for geometric adaptability, a Bi-Directional Feature Pyramid Network (BiFPN) for multi-scale fusion, and Content-Aware ReAssembly of FEatures (CARAFE) for content-aware upsampling. A single-cycle semi-supervised training pipeline is further introduced: a detector trained on labeled images generates high-confidence pseudo-labels for unlabeled data, and the combined set is used for retraining without ratio heuristics. Evaluated on PKU-PCB under label-scarce regimes, the full model improves supervised mean Average Precision at an Intersection-over-Union threshold of 0.5 (mAP@0.5) from 0.870 (baseline) to 0.910, and reaches 0.943 mAP@0.5 with semi-supervision, with consistent class-wise gains and faster convergence. Ablation experiments validate the contribution of each module and identify robust pseudo-label thresholds, while comparisons with recent YOLO variants show favorable accuracy–efficiency trade-offs. These findings indicate that the proposed design delivers accurate, label-efficient PCB inspection suitable for Automated Optical Inspection (AOI) in production environments. This work supports SDG 9 by enhancing intelligent manufacturing systems through reliable, high-precision AI-driven PCB inspection. Full article
(This article belongs to the Section Sensing and Imaging)
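
The single-cycle semi-supervision step is simple to express in isolation. A framework-agnostic sketch, where the detector interface and the 0.60 confidence threshold are placeholders (the paper tunes its own thresholds):

```python
"""Single-cycle semi-supervised training, as described in the abstract:
train on labels -> pseudo-label unlabeled images once -> retrain on the union.
The Detector interface and the 0.60 threshold are placeholders, not the paper's."""
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Detection:
    box: tuple[float, float, float, float]  # x1, y1, x2, y2
    cls: int
    conf: float

Detector = Callable[[str], list[Detection]]  # image path -> detections

def pseudo_label(
    detect: Detector,
    unlabeled_images: Iterable[str],
    conf_threshold: float = 0.60,  # assumed value; the paper identifies robust ones
) -> dict[str, list[Detection]]:
    """Keep only high-confidence detections as pseudo-ground-truth."""
    labels: dict[str, list[Detection]] = {}
    for path in unlabeled_images:
        kept = [d for d in detect(path) if d.conf >= conf_threshold]
        if kept:  # images with no confident detection are dropped, not mislabeled
            labels[path] = kept
    return labels

# Usage outline (train/retrain are whatever the detector framework provides):
#   model = train(labeled_set)
#   pseudo = pseudo_label(model.predict, unlabeled_set)
#   model = train(labeled_set | pseudo)   # one cycle, no ratio heuristics
```

Because there is only one pseudo-labeling pass, no labeled-to-unlabeled ratio heuristics are needed, matching the abstract's description.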

29 pages, 4638 KB  
Article
Semantics-Driven 3D Scene Retrieval via Joint Loss Deep Learning
by Juefei Yuan, Tianyang Wang, Shandian Zhe, Yijuan Lu, Zhaoxian Zhou and Bo Li
Mathematics 2025, 13(22), 3726; https://doi.org/10.3390/math13223726 - 20 Nov 2025
Viewed by 588
Abstract
Three-dimensional (3D) scene model retrieval has emerged as a novel and challenging area within content-based 3D model retrieval research. It plays an increasingly critical role in various domains, such as video games, film production, and immersive technologies, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), where automated generation of 3D content is highly desirable. Despite their potential, the existing 3D scene retrieval techniques often overlook the rich semantic relationships among objects and between objects and their surrounding scenes. To address this gap, we introduce a comprehensive scene semantic tree that systematically encodes learned object occurrence probabilities within each scene category, capturing essential semantic information. Building upon this structure, we propose a novel semantics-driven image-based 3D scene retrieval method. The experimental evaluations show that the proposed approach effectively models scene semantics, enables more accurate similarity assessments between 3D scenes, and achieves substantial performance improvements. All the experimental results, along with the associated code and datasets, are available on the project website. Full article
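
The core statistic behind the scene semantic tree, learned object occurrence probabilities per scene category, can be reconstructed from the description. A toy sketch (our reading of the idea, with invented data):

```python
"""Object-occurrence statistics per scene category, the core of the semantic
tree described in the abstract (our reconstruction; data are toy examples)."""
from collections import Counter

# Each training scene: (category, set of objects present).
scenes = [
    ("bedroom", {"bed", "lamp", "window"}),
    ("bedroom", {"bed", "wardrobe"}),
    ("kitchen", {"stove", "sink", "window"}),
    ("kitchen", {"sink", "fridge"}),
]

# P(object | category) = fraction of scenes in the category containing the object.
counts: dict[str, Counter] = {}
totals: Counter = Counter()
for cat, objs in scenes:
    totals[cat] += 1
    counts.setdefault(cat, Counter()).update(objs)
prob = {cat: {o: c / totals[cat] for o, c in ctr.items()} for cat, ctr in counts.items()}

def category_score(detected: set[str], cat: str) -> float:
    """Mean occurrence probability of the detected objects under a category."""
    p = prob[cat]
    return sum(p.get(o, 0.0) for o in detected) / max(len(detected), 1)

query = {"bed", "window"}  # objects detected in a query image
print({cat: round(category_score(query, cat), 3) for cat in prob})
# bedroom scores 0.75 (bed 1.0, window 0.5); kitchen 0.25 -> bedroom ranks first
```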

21 pages, 1470 KB  
Review
Advancements in Pharmaceutical Lyophilization: Integrating QbD, AI, and Novel Formulation Strategies for Next-Generation Biopharmaceuticals
by Prachi Atre and Syed A. A. Rizvi
Biologics 2025, 5(4), 35; https://doi.org/10.3390/biologics5040035 - 10 Nov 2025
Viewed by 1374
Abstract
Lyophilization (freeze-drying) has become a cornerstone pharmaceutical technology for stabilizing biopharmaceuticals, overcoming the inherent instability of biologics, vaccines, and complex drug formulations in aqueous environments. The appropriate literature for this review was identified through a structured search of several databases (such as PubMed and Scopus) covering publications from the late 1990s to date, with inclusion limited to peer-reviewed studies on lyophilization processes, formulation development, and process analytical technologies. This succinct review examines both fundamental principles and cutting-edge advancements in lyophilization technology, with particular emphasis on Quality by Design (QbD) frameworks for optimizing formulation development and manufacturing processes. The work systematically analyzes the critical three-stage lyophilization cycle—freezing, primary drying, and secondary drying—while detailing how key parameters (shelf temperature, chamber pressure, annealing) influence critical quality attributes (CQAs) including cake morphology, residual moisture content, and reconstitution behavior. Special attention is given to formulation strategies employing synthetic surfactants, cryoprotectants, and stabilizers for complex delivery systems such as liposomes, nanoparticles, and biologics. The review highlights transformative technological innovations, including artificial intelligence (AI)-driven cycle optimization, digital twin simulations, and automated visual inspection systems, which are revolutionizing process control and quality assurance. Practical case studies demonstrate successful applications across diverse therapeutic categories, from small molecules to monoclonal antibodies and vaccines, showcasing improved stability profiles and manufacturing efficiency. Finally, the discussion addresses current regulatory expectations (FDA/ICH) and compliance considerations, particularly regarding cGMP implementation and the evolving landscape of AI/ML (machine learning) validation in pharmaceutical manufacturing. By integrating QbD-driven process design with AI-enabled modeling, process analytical technology (PAT) implementation, and regulatory alignment, this review provides both a strategic roadmap and practical insights for advancing lyophilized drug product development to meet contemporary challenges in biopharmaceutical stabilization and global distribution. Despite several publications addressing individual aspects of lyophilization, there is currently no comprehensive synthesis that integrates formulation science, QbD principles, and emerging digital technologies such as AI/ML and digital twins within a unified framework for process optimization. Future work should integrate advanced technologies, AI/ML standardization, and global access initiatives within a QbD framework to enable next-generation lyophilized products with improved stability and patient focus. Full article

22 pages, 6324 KB  
Article
A Novel Approach for the Estimation of the Efficiency of Demulsification of Water-In-Crude Oil Emulsions
by Slavko Nešić, Olga Govedarica, Mirjana Jovičić, Julijana Žeravica, Sonja Stojanov, Cvijan Antić and Dragan Govedarica
Polymers 2025, 17(21), 2957; https://doi.org/10.3390/polym17212957 - 6 Nov 2025
Viewed by 870
Abstract
Undesirable water-in-crude oil emulsions in the oil and gas industry can lead to several issues, including equipment corrosion, high-pressure drops in pipelines, high pumping costs, and increased total production costs. These emulsions are commonly treated with surface-active chemicals called demulsifiers, which can break an oil–water interface and enhance phase separation. This study introduces a novel approach based on neural networks to estimate demulsification efficiency and to aid in the selection of demulsifiers under field conditions. The influence of various types of demulsifiers, demulsifier concentration, time required for demulsification, temperature, and asphaltene content on the demulsification efficiency is analyzed. To improve model accuracy, a modified full-scale factorial design of experiments and a comparison of the response surface method with multilayer perceptron neural networks were conducted. The results demonstrated the advantages of using neural networks over the response surface methodology, such as a reduced settling time in separators, an improved crude oil dehydration and processing capacity, and a lower consumption of energy and utilities. The findings may enhance processing conditions and identify regions of higher demulsification efficiency. The neural network approach provided a more accurate prediction of the maximum demulsification efficiency compared to the response surface methodology. The automated multilayer perceptron neural network, with an architecture consisting of 3 input neurons, 14 hidden neurons, and 1 output neuron, demonstrated the highest validation performance (R² of 0.991932) by utilizing a logistic output activation function and a hyperbolic tangent activation function for the hidden layer. The identification of shifted optimal values of the time required for demulsification, demulsifier concentration, and asphaltene content, along with sensitivity analysis, confirmed the advantages of automated neural networks over conventional methods. Full article
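
The described network, a 3-14-1 multilayer perceptron with hyperbolic tangent hidden units and a logistic output, has a compact forward pass. A sketch with random placeholder weights (the study's weights come from its automated training; the input scaling below is assumed):

```python
"""Forward pass of the 3-14-1 MLP described in the abstract: tanh hidden
activation, logistic output. Weights are random placeholders; in the study
they come from automated network training, not from this sketch."""
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(3, 14)) * 0.5, np.zeros(14)  # input -> hidden
W2, b2 = rng.normal(size=(14, 1)) * 0.5, np.zeros(1)   # hidden -> output

def predict_efficiency(x: np.ndarray) -> np.ndarray:
    """x columns: demulsification time, demulsifier concentration, asphaltene
    content (assumed scaled to [0, 1]); returns efficiency in (0, 1)."""
    h = np.tanh(x @ W1 + b1)                      # hidden layer: tanh
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # output layer: logistic

# Example: three scaled operating points -> predicted efficiency fractions.
X = np.array([[0.2, 0.5, 0.3], [0.6, 0.4, 0.5], [0.9, 0.8, 0.7]])
print(predict_efficiency(X).ravel())
```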

25 pages, 2253 KB  
Entry
Artificial Intelligence in Higher Education: A State-of-the-Art Overview of Pedagogical Integrity, Artificial Intelligence Literacy, and Policy Integration
by Manolis Adamakis and Theodoros Rachiotis
Encyclopedia 2025, 5(4), 180; https://doi.org/10.3390/encyclopedia5040180 - 28 Oct 2025
Viewed by 5315
Definition
Artificial Intelligence (AI), particularly Generative AI (GenAI) and Large Language Models (LLMs), is rapidly reshaping higher education by transforming teaching, learning, assessment, research, and institutional management. This entry provides a state-of-the-art, comprehensive, evidence-based synthesis of established AI applications and their implications within the higher education landscape, emphasizing mature knowledge aimed at educators, researchers, and policymakers. AI technologies now support personalized learning pathways, enhance instructional efficiency, and improve academic productivity by facilitating tasks such as automated grading, adaptive feedback, and academic writing assistance. The widespread adoption of AI tools among students and faculty members has created a critical need for AI literacy—encompassing not only technical proficiency but also critical evaluation, ethical awareness, and metacognitive engagement with AI-generated content. Key opportunities include the deployment of adaptive tutoring and real-time feedback mechanisms that tailor instruction to individual learning trajectories; automated content generation, grading assistance, and administrative workflow optimization that reduce faculty workload; and AI-driven analytics that inform curriculum design and early intervention to improve student outcomes. At the same time, AI poses challenges related to academic integrity (e.g., plagiarism and misuse of generative content), algorithmic bias and data privacy, digital divides that exacerbate inequities, and risks of “cognitive debt” whereby over-reliance on AI tools may degrade working memory, creativity, and executive function. The lack of standardized AI policies and fragmented institutional governance highlight the urgent necessity for transparent frameworks that balance technological adoption with academic values. Anchored in several foundational pillars (such as a brief description of AI higher education, AI literacy, AI tools for educators and teaching staff, ethical use of AI, and institutional integration of AI in higher education), this entry emphasizes that AI is neither a panacea nor an intrinsic threat but a “technology of selection” whose impact depends on the deliberate choices of educators, institutions, and learners. When embraced with ethical discernment and educational accountability, AI holds the potential to foster a more inclusive, efficient, and democratic future for higher education; however, its success depends on purposeful integration, balancing innovation with academic values such as integrity, creativity, and inclusivity. Full article
(This article belongs to the Collection Encyclopedia of Social Sciences)

20 pages, 7704 KB  
Article
Seamless User-Generated Content Processing for Smart Media: Delivering QoE-Aware Live Media with YOLO-Based Bib Number Recognition
by Alberto del Rio, Álvaro Llorente, Sofia Ortiz-Arce, Maria Belesioti, George Pappas, Alejandro Muñiz, Luis M. Contreras and Dimitris Christopoulos
Electronics 2025, 14(20), 4115; https://doi.org/10.3390/electronics14204115 - 21 Oct 2025
Viewed by 728
Abstract
The increasing availability of User-Generated Content during large-scale events is transforming spectators into active co-creators of live narratives while simultaneously introducing challenges in managing heterogeneous sources, ensuring content quality, and orchestrating distributed infrastructures. A trial was conducted to evaluate automated orchestration, media enrichment, and real-time quality assessment in a live sporting scenario. A key innovation of this work is the use of a cloud-native architecture based on Kubernetes, enabling dynamic and scalable integration of smartphone streams and remote production tools into a unified workflow. The system also included advanced cognitive services, such as a Video Quality Probe for estimating perceived visual quality and an AI Engine based on YOLO models for detection and recognition of runners and bib numbers. Together, these components enable a fully automated workflow for live production, combining real-time analysis and quality monitoring, capabilities that previously required manual or offline processing. The results demonstrated consistently high Mean Opinion Score (MOS) values above 3 for 72.92% of the time, confirming acceptable perceived quality under real network conditions, while the AI Engine achieved strong performance with a Precision of 93.6% and a Recall of 80.4%. Full article
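
The reported figures follow the standard definitions of Precision, Recall, and an MOS acceptability fraction; the counts and per-interval scores below are invented to reproduce numbers of the same magnitude:

```python
# Standard metric definitions behind the reported figures; counts and MOS
# samples below are illustrative, not the trial's data.
tp, fp, fn = 936, 64, 228                     # hypothetical detection counts
precision = tp / (tp + fp)                    # 0.936, as reported
recall = tp / (tp + fn)                       # ~0.804, as reported
mos_samples = [3.2, 4.1, 2.8, 3.6, 3.9, 2.9]  # per-interval MOS, illustrative
acceptable = sum(m > 3 for m in mos_samples) / len(mos_samples)
print(f"P={precision:.3f} R={recall:.3f} MOS>3 fraction={acceptable:.2%}")
```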

18 pages, 2167 KB  
Article
Turning Organic Waste into Energy and Food: Household-Scale Water–Energy–Food Systems
by Seneshaw Tsegaye, Terence Wise, Gabriel Alford, Peter R. Michael, Mewcha Amha Gebremedhin, Ankit Kumar Singh, Thomas H. Culhane, Osman Karatum and Thomas M. Missimer
Sustainability 2025, 17(19), 8942; https://doi.org/10.3390/su17198942 - 9 Oct 2025
Viewed by 1252
Abstract
Population growth drives increasing energy demands, agricultural production, and organic waste generation. The organic waste contributes to greenhouse gas emissions and increasing landfill burdens, highlighting the need for novel closed-loop technologies that integrate water, energy, and food resources. Within the context of the Water–Energy–Food (WEF) nexus, wastewater can be recycled for food production and food waste can be converted into clean energy, both contributing to environmental impact reduction and resource sustainability. A novel household-scale, closed-loop WEF system was designed, installed, and operated to manage organic waste while retrieving water for irrigation, nutrients for plant growth, and biogas for energy generation. The system included a biodigester for energy production, a sand filter system to regulate nutrient levels in the effluent, and a hydroponic setup for growing food crops using the nutrient-rich effluent. These components are operated with a daily batch feeder coupled with automated sensors to monitor effluent flow from the biodigester, sand filter system, and the feeder to the hydroponic system. This novel system was operated continuously for two months using typical household waste composition. Controlled experimental tests were conducted weekly to measure the nutrient content of the effluent at four locations and to analyze the composition of biogas. Gas chromatography was used to analyze biogas composition, while test strips and an In-Situ Aqua Troll Multi-Parameter Water Quality Sonde were employed for water quality measurements during the experimental study. Experimental results showed that the system consistently produced biogas with 76.7% (±5.2%) methane, while effluent analysis confirmed its potential as a nutrient source with average concentrations of phosphate (20 mg/L), nitrate (26 mg/L), and nitrite (5 mg/L). These nutrient values indicate suitability for hydroponic crop growth and reduced reliance on synthetic fertilizers. This novel system represents a significant step toward integrating waste management, energy production, and food cultivation at the source, in this case, the household. Full article
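
To put the methane fraction in energy terms: assuming the commonly used lower heating value of methane of roughly 35.8 MJ m⁻³ (our assumption; the abstract gives no energy figures), the measured composition implies:

```python
# Energy density implied by the measured methane fraction. The lower heating
# value of methane (~35.8 MJ/m^3) is our assumption, not a figure from the study.
CH4_LHV_MJ_PER_M3 = 35.8
methane_fraction = 0.767            # 76.7% +/- 5.2% from the abstract
biogas_energy = methane_fraction * CH4_LHV_MJ_PER_M3
print(f"~{biogas_energy:.1f} MJ per m^3 of biogas")  # ~27.5 MJ/m^3
```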

22 pages, 14929 KB  
Article
Educational Evaluation with MLLMs: Framework, Dataset, and Comprehensive Assessment
by Yuqing Chen, Yixin Li, Yupei Ren, Yixin Liu and Yiping Ma
Electronics 2025, 14(18), 3713; https://doi.org/10.3390/electronics14183713 - 19 Sep 2025
Cited by 2 | Viewed by 1273
Abstract
With the rapid development of Multimodal Large Language Models (MLLMs) in education, their applications have mainly focused on content generation tasks such as text writing and courseware production. However, automated assessment of non-exam learning outcomes remains underexplored. This study shifts the application of MLLMs from content generation to content evaluation and designs a lightweight and extensible framework to enable automated assessment of students’ multimodal work. We constructed a multimodal dataset comprising student essays, slide decks, and presentation videos from university students, which were annotated by experts across five educational dimensions. Based on horizontal educational evaluation dimensions (Format Compliance, Content Quality, Slide Design, Verbal Expression, and Nonverbal Performance) and vertical model capability dimensions (consistency, stability, and interpretability), we systematically evaluated four leading multimodal large models (GPT-4o, Gemini 2.5, Doubao 1.6, and Kimi 1.5) in assessing non-exam learning outcomes. The results indicate that MLLMs demonstrate good consistency with human evaluations across various assessment dimensions, with each model exhibiting its own strengths. Additionally, they possess high explainability and perform better in text-based tasks than in visual tasks, but their scoring stability still requires improvement. This study demonstrates the potential of MLLMs for non-exam learning assessment and provides a reference for advancing their applications in education. Full article
(This article belongs to the Special Issue Techniques and Applications of Multimodal Data Fusion)
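
The vertical dimensions of consistency and stability have natural operationalizations: correlation with human scores, and score spread across repeated runs. A toy sketch of that reading (our interpretation, with invented scores; requires Python 3.10+ for statistics.correlation):

```python
"""Two of the abstract's vertical metrics in a common operational form
(our reading): consistency = correlation with human scores, stability =
spread across repeated runs. Scores below are toy values."""
import statistics

human = [4.0, 3.5, 2.0, 4.5, 3.0]   # expert scores for five student artifacts
model_runs = [                       # same artifacts, three repeated model runs
    [3.8, 3.6, 2.2, 4.4, 3.1],
    [4.1, 3.3, 2.5, 4.6, 2.8],
    [3.9, 3.7, 2.1, 4.3, 3.0],
]

mean_scores = [statistics.mean(run[i] for run in model_runs) for i in range(len(human))]
consistency = statistics.correlation(human, mean_scores)   # Pearson r vs. humans
stability = statistics.mean(                                # mean per-item run-to-run SD
    statistics.stdev(run[i] for run in model_runs) for i in range(len(human))
)
print(f"consistency r={consistency:.3f}, mean per-item SD={stability:.3f}")
```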

18 pages, 1151 KB  
Article
Expanding the Team: Integrating Generative Artificial Intelligence into the Assessment Development Process
by Toni A. May, Kathleen Provinzano, Kristin L. K. Koskey, Connor J. Sondergeld, Gregory E. Stone, James N. Archer and Naorah Rimkunas
Appl. Sci. 2025, 15(18), 9976; https://doi.org/10.3390/app15189976 - 11 Sep 2025
Viewed by 1047
Abstract
Effective assessment development requires collaboration between multidisciplinary team members, and the process is often time-intensive. This study illustrates a framework for integrating generative artificial intelligence (GenAI) as a collaborator in assessment design, rather than a fully automated tool. The context was the development of a 12-item multiple-choice test for social work interns in a school-based training program, guided by design-based research (DBR) principles. Using ChatGPT to generate draft items, psychometricians refined outputs through structured prompts and then convened a panel of five subject matter experts to evaluate content validity. Results showed that while most AI-assisted items were relevant, 75% required modification, with revisions focused on response option clarity, alignment with learning objectives, and item stems. These findings provide initial evidence that GenAI can serve as a productive collaborator in assessment development when embedded in a human-in-the-loop process, while underscoring the need for continued expert oversight and further validation research. Full article
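
A standard way to quantify the five-expert review step is the item-level content validity index (I-CVI), the fraction of experts rating an item relevant. A sketch with invented ratings (the cutoff shown is illustrative; published guidance varies with panel size):

```python
# Item-level content validity index (I-CVI) for a five-expert panel: the
# fraction of experts rating an item relevant. A standard metric for the
# SME review step the abstract describes; ratings here are invented.
ratings = {  # item -> 1 if the expert judged it relevant, else 0
    "item_01": [1, 1, 1, 1, 1],
    "item_02": [1, 1, 0, 1, 1],
    "item_03": [1, 0, 0, 1, 1],
}
for item, votes in ratings.items():
    i_cvi = sum(votes) / len(votes)
    flag = "keep" if i_cvi >= 0.78 else "revise"  # illustrative cutoff only
    print(f"{item}: I-CVI={i_cvi:.2f} -> {flag}")
```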

17 pages, 3186 KB  
Article
Investigation of the Effects of Gas Metal Arc Welding and Friction Stir Welding Hybrid Process on AA6082-T6 and AA5083-H111 Aluminum Alloys
by Mariane Chludzinski, Leire Garcia-Sesma, Oier Zubiri, Nieves Rodriguez and Egoitz Aldanondo
Metals 2025, 15(9), 1005; https://doi.org/10.3390/met15091005 - 9 Sep 2025
Viewed by 1128
Abstract
Friction stir welding (FSW) has emerged as a solid-state joining technique offering notable advantages over traditional welding methods. Gas metal arc welding (GMAW), a fusion-based process, remains widely used due to its high efficiency, productivity, weld quality, and ease of automation. To combine the benefits of both techniques, a hybrid welding approach integrating GMAW and FSW has been developed. This study investigates the impact of this hybrid technique on the joint quality and properties of AA5083-H111 and AA6082-T6 aluminum alloys. Butt joints were produced on 6 mm thick plates, with variations in friction process parameters. Characterization included macro- and microstructural analyses, mechanical testing (hardness and tensile strength), and corrosion resistance evaluation through stress corrosion cracking tests. Results showed that FSW significantly refined and homogenized the microstructure in both alloys. AA5083-H111 welds achieved a joint efficiency of 99%, while AA6082-T6 reached 66.7%; these differences are attributed to their distinct strengthening mechanisms and the thermal–mechanical effects of FSW. To assess hydrogen-related behavior, slow strain rate tensile (SSRT) tests were conducted in both inert and hydrogen-rich environments. Hydrogen content was measured in arc, friction, and overlap zones, revealing variations depending on the alloy and microstructure. Despite these differences, both alloys exhibited negligible hydrogen embrittlement. In conclusion, the GMAW–FSW hybrid process successfully produced sound joints with good mechanical and corrosion resistance performance in both aluminum alloys. The findings demonstrate the potential of hybrid welding as a viable method for enhancing weld quality and performance in applications involving dissimilar aluminum alloys. Full article
(This article belongs to the Section Welding and Joining)

14 pages, 855 KB  
Article
Novel Machine Learning-Based Approach for Determining Milk Clotting Time Using Sheep Milk
by João Dias, Sandra Gomes, Karina S. Silvério, Daniela Freitas, Jaime Fernandes, João Martins, José Jasnau Caeiro, Manuela Lageiro and Nuno Alvarenga
Appl. Sci. 2025, 15(17), 9843; https://doi.org/10.3390/app15179843 - 8 Sep 2025
Viewed by 925
Abstract
The enzymatic coagulation of milk, crucial in cheese production, entails the hydrolysis of κ-casein and subsequent micelle aggregation. Conventional assessment standards, such as the Berridge method, depend on visual inspection and are susceptible to operator bias. Recent methods for the identification of milk-clotting time rely on optical, ultrasonic, and image-based technologies. In the present work, the composition of milk was evaluated through standard methods from ISO and AOAC. Milk coagulation time (MCT) was measured through viscosimetry, Berridge’s operator-driven technique, and a machine learning approach employing computer vision. Coagulation was additionally observed using the Optigraph, which measures micellar aggregation through near-infrared light attenuation for immediate analysis. Sheep milk samples were analysed for their composition and coagulation characteristics. Coagulation times, assessed via the Berridge method (BOB), demonstrated a high correlation (R² = 0.9888) with viscosimetry (Visc) and machine learning (ML). Increased levels of protein and casein were linked to extended MCT, whereas lower pH levels accelerated coagulation. The calcium content did not have a notable impact. Optigraph assessments validated variations in firmness and aggregation rate. Principal Component Analysis (PCA) identified significant correlations between total solids, casein, and MCT techniques. Estimates from ML-based MCT closely align with those from operator-based methods, confirming its dependability. This research emphasises ML as a powerful, automated method for evaluating milk coagulation, presenting a compelling substitute for conventional approaches. Full article
(This article belongs to the Special Issue Innovation in Dairy Products)
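
One simple way a computer-vision MCT detector can operate is to track inter-frame motion energy, which collapses once the gel sets, and flag the first sustained drop. This is entirely our illustration, not the authors' model:

```python
"""A toy vision-style clotting-time detector (our illustration, not the paper's
model): inter-frame motion energy drops sharply once the gel sets, so flag the
first sustained drop below a threshold."""
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 600, 1.0)                    # seconds of video
motion = np.where(t < 310, 1.0, 0.1) + rng.normal(0, 0.05, t.size)  # synthetic

def clotting_time(signal: np.ndarray, times: np.ndarray,
                  threshold: float = 0.5, hold: int = 30) -> float:
    """First time the motion signal stays below `threshold` for `hold` samples."""
    below = signal < threshold
    for i in range(len(below) - hold):
        if below[i:i + hold].all():
            return float(times[i])
    return float("nan")

print(f"estimated MCT ~ {clotting_time(motion, t):.0f} s")  # ~310 s here
```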
