Search Results (1,039)

Search Parameters:
Keywords = full automation

16 pages, 11943 KB  
Article
A Machine Learning-Augmented Microwave Sensor for Metallic Landmine Detection
by Maged A. Aldhaeebi, Abdulbaset Ali and Thamer S. Almoneef
Signals 2026, 7(3), 40; https://doi.org/10.3390/signals7030040 (registering DOI) - 2 May 2026
Abstract
This paper presents a non-imaging landmine detection system that integrates a highly sensitive multiple-input multiple-output (MIMO) microwave sensor with a machine learning (ML) classifier for automated classification. The proposed sensor consists of two circular patch elements fed with two ports designed in a unique configuration, comprising a dual loop with a cross dipole, for enhancing sensitivity to changes in the environmental electrical properties (dielectric constant and electrical conductivity) induced by buried metallic objects. It operates in dual bands of 1.58 GHz and 1.75 GHz, within the operating frequency range of 1.3 to 2 GHz. The system’s performance was assessed using full-wave simulations and experimental measurements, involving a sand-filled foam container with a metal surrogate landmine placed at different depths. The sensor’s performance was evaluated by monitoring changes in the magnitude and phase of the reflection coefficient (S11) and the transmission coefficient (S21). The acquired scattering-parameter data were processed using a Support Vector Machine (SVM) algorithm for automated classification. Results demonstrate the sensor’s strong capability to detect metallic targets at various depths and standoff distances. Compared to conventional imaging technologies, this system offers significant advantages in cost, simplicity, and ease of data processing. The SVM models trained on measurement data with proper feature selection showed a high level of agreement with their counterparts trained on simulation data. Stratified k-fold cross-validation was used to improve the reliability of accuracy metrics, with results showing 85% or higher mean accuracy in all classification scenarios. Full article
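The stratified k-fold validation this abstract reports can be illustrated with a minimal, dependency-free sketch. The class labels, fold count, and round-robin assignment below are illustrative assumptions; the study's actual S11/S21 features and SVM classifier are not reproduced here.

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Split sample indices into k folds that preserve class proportions.

    Round-robin assignment within each class is one simple way to get
    stratification; library implementations (e.g., scikit-learn's
    StratifiedKFold) also shuffle and handle remainders more carefully.
    """
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)  # deal each class across folds in turn
    return folds

# Illustrative labels: 10 "mine" and 10 "clear" samples into 5 folds,
# so each held-out fold keeps the 50/50 class balance.
labels = ["mine"] * 10 + ["clear"] * 10
folds = stratified_kfold(labels, 5)
```

Evaluating the classifier once per fold and averaging the accuracies is what makes the reported "85% or higher mean accuracy" more reliable than a single train/test split.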

43 pages, 2732 KB  
Systematic Review
Large Language Models in Intelligent Education Systems: New Educational Perspectives—A Systematic Review
by Tatyana Ivanova and Valentina Terzieva
Information 2026, 17(5), 433; https://doi.org/10.3390/info17050433 - 1 May 2026
Abstract
Large language models (LLMs) are an emerging artificial intelligence-driven technology, based on transformer architecture. LLMs are widely used in modern education, both by learners and tutors, as standalone tools or integrated into e-learning systems, where they can support personalization, adaptive learning, automated assessment and feedback, content generation, and intelligent tutoring. LLMs offer many benefits for learners, but they also have significant limitations. One approach to address the limitations of LLMs is to combine them with other intelligent technologies. The primary goal of this systematic survey is to identify appropriate supporting technologies, mechanisms of use, and methodological approaches able to help overcome the limitations of LLMs and support their responsible and effective use in education. For this reason, analysis and discussion of recent scientific research (published over the last four years) accessible through Google Scholar, ACM, IEEE Xplore, or indexed in Scopus or Web of Science (WoS) is performed. A bibliometric analysis of results from the initial general query strings is used to refine and formulate more specific search queries during the literature retrieval process in the selected databases. Full-text exploration of relevant search results serves as a source for critical analysis and deductions leading to the following conclusion: LLMs should be integrated into e-learning systems, combined with knowledge graphs, ontologies, learning analytics, and multimodal reasoning to enhance reliability, improve pedagogical effectiveness, and enable true personalization. New pedagogical approaches are also needed to ensure the effective use of LLMs in both tutoring and assessment contexts. Therefore, the authors propose methodological guidelines for integrating LLMs in complex modular educational systems. Full article
(This article belongs to the Special Issue Trends in Artificial Intelligence-Supported E-Learning)
39 pages, 4605 KB  
Review
Research Progress on High-Efficiency and Low-Loss Harvesting Technologies and Equipment for Lodged Grain: A Review
by Yuyuan Qiao, Yuting Dong, Qi He, Zhaoming Zhang, Wenbin Zhang, Tao Ye, Xin Lu and Zhong Tang
Appl. Sci. 2026, 16(9), 4431; https://doi.org/10.3390/app16094431 - 1 May 2026
Abstract
Due to topographic and climatic constraints, a large proportion of grain is cultivated in relatively flat areas, where it is highly susceptible to lodging caused by natural disasters or improper field management. However, research on the full-process intelligent harvesting of lodged grain remains limited. This paper aims to examine the current state of the grain industry and its level of mechanization, with a focus on the challenges in mechanized harvesting of lodged grain, including the lack of suitable equipment, high harvest losses, and low operational efficiency. The article provides a comprehensive review of the key technologies critical to achieving fully automated intelligent harvesting, including the acquisition of lodging information, combine harvester operations, and the development of intelligent modular systems. It also summarizes recent advancements in cutting-edge mechanized grain harvesting equipment. Additionally, the article presents recommendations for the future development and research of mechanized grain harvesting technologies, focusing on grain lodging feature recognition, equipment modularization, and the integration of intelligent technologies. The aim is to offer valuable insights for advancing progress in this field. Full article
(This article belongs to the Section Agricultural Science and Technology)
44 pages, 2892 KB  
Review
Meat-Borne Bacterial Pathogen Detection: Conventional, Molecular and Emerging AI-Based Strategies
by Athar Hussain, Qindeel Abbas, Muhammad Nadeem, Aquib Nazar, Ali Athar and Hafiz Ubaid Ur Rahman
Diagnostics 2026, 16(9), 1360; https://doi.org/10.3390/diagnostics16091360 - 30 Apr 2026
Abstract
Meat serves as a prime medium for the growth of foodborne pathogens due to its rich protein content and high water activity, contributing significantly to the global burden of foodborne illnesses. This review synthesizes current advances in meat-borne bacterial pathogen detection with particular emphasis on emerging artificial intelligence (AI)-enabled applications. Major pathogens of concern, including Salmonella, Listeria monocytogenes, Escherichia coli, Campylobacter, Clostridium, and Staphylococcus aureus, are examined in relation to their relevance across the meat supply chain. Recent progress in biosensors, CRISPR (clustered regularly interspaced short palindromic repeats)-based assays, isothermal amplification, and metagenomics is evaluated alongside the growing role of AI in automating signal interpretation, enhancing image-based diagnostics, and supporting early contamination prediction. AI-based systems have demonstrated 96.4–104% recovery and 100% bacterial capture ability. Embedding AI methods in a wet-lab workflow demands rigorous technical modeling as well as disciplined training and calibration procedures. Nonetheless, AI readiness and full-scale application for meat-borne pathogen surveillance are on the way. Furthermore, additional attention is given to meat-borne bacterial pathogen genomic databases (NCBI Pathogen Detection, EnteroBase, VFDB, ComBase, and GenBank), which serve as critical training resources for AI models for outbreak tracking, virulence profiling, and antimicrobial resistance (AMR) prediction. By integrating molecular methods, genomic surveillance, and AI-driven analytics, this review presents a framework for strengthening meat safety systems. This will improve early detection capabilities and support data-driven public health interventions in the future. Full article

23 pages, 3224 KB  
Article
Evaluation of Coagulants and Polymers for Optimizing Wastewater Treatment and Acid Oil Extraction in a Poultry Processing Plant
by Elisa Tschaen Schneider, Polyana Silverio Massariol, Viviane Martins de Deus, Caio Lucas Alhadas de Paula Velloso and Job Teixeira de Oliveira
Polymers 2026, 18(9), 1078; https://doi.org/10.3390/polym18091078 - 29 Apr 2026
Abstract
The treatment of oily wastewater represents a significant environmental challenge, requiring efficient separation technologies and waste valorization. This study evaluated different types of coagulants (ferric chloride 38% m/m, aluminum polychloride 18% m/m, aluminum sulfate 8% m/m, and ferrous sulfate 6% m/m) and anionic polymers (from six suppliers) for treating poultry slaughterhouse effluent, aiming to optimize both clarification and oil recovery from the floated sludge. Bench-scale jar tests (G = 300 s−1 and 30 s−1) were followed by full-scale validation in a dissolved air flotation unit (100 m3 h−1) at a poultry processing WWTP. Recovered oil was extracted by hot cooking (95 °C) and tridecanter centrifugation, and its quality (moisture, acidity, saponification index) was assessed. A techno-economic analysis, including simple/discounted payback, NPV, IRR, Monte Carlo simulation (10,000 iterations, Python), and deterministic sensitivity analysis, was performed. Ferric chloride (38% m/m) produced the best technical results: treated effluent turbidity < 30 NTU, oil yield of 360 L day−1 with moisture < 2% at the tridecanter outlet, and consistent sludge dewaterability (moisture 55–65%). Oil moisture increased dramatically (to >30%) after storage due to condensate contamination from an inefficient exhaust system, a critical operational flaw that must be corrected. No statistically significant effect of polymer type on oil recovery was observed, although high variability (CV > 50%) was noted during PAC tests. The simple payback period for ferric chloride was 60.7 months (discounted: 64.1 months), with a positive median NPV (USD 7925) under a 12% p.a. discount rate. Sensitivity analysis showed that the investment is most sensitive to oil price: a 20% drop in oil price leads to a negative NPV (−USD 21,727). Despite this risk, the project provides environmental compliance and waste-to-value benefits. 
The study demonstrates that ferric chloride enables effective oil extraction from poultry wastewater, but proper exhaust design is essential to maintain oil quality. Future work should focus on standardized test durations (≥72 h) and automated monitoring to reduce variability. Full article
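The Monte Carlo NPV analysis the abstract mentions (10,000 iterations, Python) can be sketched as follows. Only the 12% p.a. discount rate, the ±20% oil-price sensitivity, and the iteration count come from the abstract; the initial outlay, monthly margin, and horizon are hypothetical placeholders, not the study's cost data.

```python
import random
import statistics

def npv(rate, cashflows):
    """Discounted net present value; cashflows[0] is the t=0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def monte_carlo_npv(n_iter=10_000, seed=1):
    """Monte Carlo NPV under +/-20% oil-price uncertainty.

    The 12% p.a. discount rate and iteration count follow the abstract;
    the outlay, monthly margin, and 72-month horizon are hypothetical.
    """
    rng = random.Random(seed)
    monthly_rate = 0.12 / 12
    samples = []
    for _ in range(n_iter):
        price_factor = rng.uniform(0.8, 1.2)   # oil-price shock
        margin = 650.0 * price_factor          # monthly net cash inflow (USD)
        samples.append(npv(monthly_rate, [-30_000.0] + [margin] * 72))
    return statistics.median(samples)

median_npv = monte_carlo_npv()
```

Reporting the median (rather than the mean) of the simulated NPVs, as the study does, makes the headline figure robust to the skew the price shocks introduce.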

24 pages, 1871 KB  
Article
Design and Analysis of Minimum-Weighted Connected Capacitated Vertex Cover Algorithms for Link Monitoring in IoT-Enabled WSNs
by Miray Kol, Ege Erberk Uslu, Zuleyha Akusta Dagdeviren and Orhan Dagdeviren
Sensors 2026, 26(9), 2752; https://doi.org/10.3390/s26092752 - 29 Apr 2026
Abstract
Wireless sensor networks (WSNs) are the backbone of IoT-enabled smart manufacturing, environmental monitoring, and industrial automation. However, their broadcast nature makes communication links vulnerable to eavesdropping, routing manipulation, and denial-of-service attacks. Strategically placing monitor nodes to check each link is an effective approach to protect against attacks, but energy, connectivity, and capacity constraints should be considered when selecting monitor nodes. In this paper, we tackle the Minimum-Weighted Connected Capacitated Vertex Cover (MWCCVC) problem, which minimizes monitoring costs, ensures backbone connectivity, and adheres to per-node capacity constraints. Unlike prior works that consider weighted vertex cover, connectivity constraints, or capacitated variants separately, the proposed MWCCVC model jointly integrates all three dimensions within a single vertex cover-based monitoring framework. We first provide a Branch-and-Bound (B&B) solver with linear programming relaxation bounds and constraint-based pruning strategies that produces optimal solutions. Three constructive greedy heuristics (GD, GR, GW) and two hybrid genetic algorithms (HGA, HGA-v2) that combine parameterized greedy decoders with evolutionary search are proposed; all methods guarantee full edge coverage, induced-subgraph connectivity, and max-flow-validated capacity feasibility. Tests on 130 small, 160 medium, and 19 large benchmark instances show that HGA matches B&B optima on every small instance, beats the time-limited B&B by 6.6% on medium instances (computed as the relative difference in average total weight with respect to B&B), and remains the best performer on large graphs with up to 1000 nodes. HGA-v2 trades a small amount of solution quality for speed, staying within 3.1% of HGA at roughly 10× faster execution. Full article
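A weight-aware greedy heuristic in the spirit of the paper's constructive variants (e.g., GW) might look like the sketch below: it covers every edge by repeatedly taking the vertex with the best uncovered-degree-to-weight ratio. The connectivity and capacity constraints central to MWCCVC are deliberately omitted, so this is an illustration of the base weighted-vertex-cover step only.

```python
def greedy_weighted_vertex_cover(edges, weight):
    """Cover every edge by repeatedly taking the vertex with the best
    ratio of uncovered incident edges to weight.

    Simplified sketch: the paper's heuristics additionally enforce
    backbone connectivity and per-node capacity, omitted here.
    """
    uncovered = set(edges)
    cover = set()
    while uncovered:
        candidates = {v for e in uncovered for v in e}
        best = max(
            candidates,
            key=lambda v: sum(v in e for e in uncovered) / weight[v],
        )
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

# A unit-weight hub covering three edges beats three unit-weight leaves.
edges = [(0, 1), (0, 2), (0, 3)]
cover = greedy_weighted_vertex_cover(edges, {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0})
```

In the paper's hybrid genetic algorithms, a parameterized decoder of roughly this shape turns each chromosome into a feasible cover, which the evolutionary search then refines.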

23 pages, 522 KB  
Article
Privacy-Preserving Hybrid GA–LSTM Ensemble for Typhoid Detection Using Optimised Clinical Feature Selection
by Karim Gasmi, Afrah Alanazi, Sahar Almenwer, Sarah Almaghrabi, Hamoud Alshammari, Kais Khaldi and Hassen Chouaib
Biomedicines 2026, 14(5), 1010; https://doi.org/10.3390/biomedicines14051010 - 29 Apr 2026
Abstract
Background/Objectives: Typhoid fever remains a major public health challenge in many low-income countries, where overlapping clinical symptoms and the limited reliability of conventional diagnostic procedures hinder accurate diagnosis. This study aims to develop a reliable and efficient diagnostic framework that automates typhoid fever detection from clinical data while preserving patient privacy. Methods: To achieve this objective, we propose a hybrid framework combining genetic algorithm (GA)–based feature selection, a Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) deep learning classifier, and federated learning. The GA identifies the most informative clinical features, reducing redundancy and computational complexity. The selected features are then used to train a CNN–LSTM model in a federated learning setup using the Federated Averaging (FedAvg) algorithm, enabling collaborative model training across multiple clients without sharing raw patient data. Results: Experimental results show that the proposed framework achieves 92% accuracy, with a strong F1-score and satisfactory sensitivity. Compared to models trained on the full feature set, the proposed approach requires less memory and shorter training time, while maintaining balanced performance under class imbalance. Conclusions: These results demonstrate that integrating evolutionary feature selection, deep sequential learning, and federated training provides an effective and privacy-aware solution for multi-class typhoid fever diagnosis. The proposed framework is particularly suitable for clinical environments with limited data access and constrained resources. Full article
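The FedAvg aggregation step referenced in the abstract is simple to state: the server averages client parameters weighted by each client's local sample count, so raw patient records never leave the clients. A minimal sketch, with flat parameter lists standing in for the CNN–LSTM weights and the client sizes purely illustrative:

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average parameters weighted by each client's
    local sample count, so no raw patient data leaves a client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two hypothetical clients holding 100 and 300 records; the larger
# client pulls the global parameters toward its local values.
global_w = fedavg([[1.0, 2.0], [3.0, 6.0]], [100, 300])  # -> [2.5, 5.0]
```

In the full framework this averaging runs once per communication round, after each client trains the GA-selected-feature CNN–LSTM locally.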

25 pages, 1013 KB  
Article
Illuminating the Shadows: An Explainable AI-Driven Approach with Ensemble Learning for Insider Threat Detection
by Shahad Ghawa and Ashwaq Alhargan
Electronics 2026, 15(9), 1863; https://doi.org/10.3390/electronics15091863 - 28 Apr 2026
Abstract
In response to the increasing complexity of insider threats, this study proposes an explainable AI-driven framework designed to emulate real-world analyst workflows in security operations centers (SOCs). The framework integrates ensemble learning models—Random Forest, XGBoost, and Stacking—with behavioral feature engineering across multiple temporal granularities (session, daily, and weekly), enabling both fine-grained detection and long-term behavioral analysis. The framework follows a structured pipeline in which LLM-driven filtering is first applied to refine behavioral data using dataset metadata and MITRE ATT&CK-aligned logic, followed by ensemble learning for detection, explainability through SHAP and LIME, and LLM-based interpretation for analyst-oriented insights. A key contribution of this work is a dual-layer explainability architecture, where SHAP values capture global feature importance and LIME values provide instance-level explanations, enhanced by LLM-generated interpretations aligned with the MITRE ATT&CK framework. Due to computational constraints, modeling, full SHAP/LIME explainability, and LLM-guided filtering are applied at the weekly level. This design enables stable and interpretable behavioral analysis, while finer-grained analysis at daily and session levels remains part of future work. The filtering logic simulates SOC playbook-based automation using dataset metadata and MITRE-aligned patterns, reflecting how large-scale behavioral data are handled in practice. Despite the absence of contextual telemetry such as Security Information and Event Management (SIEM), Data Loss Prevention (DLP), or network logs, the proposed pipeline produces transparent and prioritized alerts that reduce false positives and improve analyst trust. 
Future work will extend the framework to finer temporal granularities—particularly daily and session levels—by applying the same pipeline to ensure consistency across analysis levels, in addition to exploring semi-supervised learning to adapt to evolving insider threat tactics. Full article
(This article belongs to the Section Computer Science & Engineering)

20 pages, 4455 KB  
Article
The Relevance of Compound Events in Bee Traffic Monitoring
by Andrea Nieves-Rivera, Marie Lluberes-Contreras and Rémi Mégret
Informatics 2026, 13(5), 65; https://doi.org/10.3390/informatics13050065 - 23 Apr 2026
Abstract
Bees are essential pollinators for agricultural systems, making accurate, automated monitoring of their behavior critical for assessing colony health and ecosystem stability. Recent advances in computer vision and artificial intelligence have enabled large-scale bee traffic monitoring at hive entrances; however, most existing event classification methods focus exclusively on simple entrance and exit events. This simplification overlooks compound movements—such as U-turns and guarding behaviors—that represent a substantial portion of bee activity and can lead to inaccurate trajectory reconstruction and misleading behavioral interpretations. In this work, we systematically analyze existing event classification strategies used in automatic bee traffic monitoring, evaluating their performance on both simple and compound movements. We then propose extended classification methods that explicitly model compound events by incorporating bidirectional movement patterns derived from positional and angular cues. Using a manually annotated dataset of computer-vision-based hive entrance recordings, we compare threshold-based, displacement-based, and angle-based approaches under simple and mixed-event conditions. Our results demonstrate that compound events account for over one-third of all detected movements and that classification methods explicitly designed to handle bidirectional behavior substantially outperform traditional approaches in both accuracy and robustness. In particular, threshold-based bidirectional classification achieves near-perfect performance when full trajectories are available, while displacement-based methods provide a reliable alternative under partial observations. These findings highlight the importance of modeling compound behaviors in automated bee monitoring systems and contribute to more accurate flight reconstruction, behavioral analysis, and AI-driven decision support for precision agriculture and pollinator management. Full article
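A threshold-based bidirectional classifier of the kind evaluated here can be sketched by checking which side of the monitored zone a track starts and ends on: crossing the zone is a simple entrance or exit, while returning to the starting side is a compound event such as a U-turn. The zone geometry and thresholds below are illustrative assumptions, not the paper's calibration.

```python
def classify_event(track, entrance_y=0.0, outside_y=100.0):
    """Threshold-based classification of one hive-entrance track.

    A track is a list of (x, y) positions. Ending on the side it started
    from marks a compound event (e.g., a U-turn); crossing the zone marks
    a simple entrance or exit. Geometry and thresholds are illustrative.
    """
    y_start, y_end = track[0][1], track[-1][1]
    start_out, end_out = y_start >= outside_y, y_end >= outside_y
    start_in, end_in = y_start <= entrance_y, y_end <= entrance_y
    if start_out and end_in:
        return "entrance"
    if start_in and end_out:
        return "exit"
    if (start_out and end_out) or (start_in and end_in):
        return "u-turn"
    return "partial"  # never fully crossed: fall back to displacement cues
```

The "partial" branch is where the paper's displacement-based methods earn their keep: when only part of a trajectory is observed, endpoint thresholds alone cannot decide the event type.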

27 pages, 13004 KB  
Article
Classification of Wheat Varieties Using Fourier-Transform Infrared Spectroscopy and Machine-Learning Techniques
by Mahtem Teweldemedhin Mengstu, Alper Taner and Neluș-Evelin Gheorghiță
Agriculture 2026, 16(8), 914; https://doi.org/10.3390/agriculture16080914 - 21 Apr 2026
Abstract
The combination of Fourier-transform infrared (FTIR) spectroscopy and machine learning has shown promising results in wheat variety classification. This study aimed to evaluate the contributions of distinct spectral regions and their combinations to classification performance. Out of the full raw spectra of four bread wheat varieties, namely Altindane, Cavus, Flamura-85, and Nevzatbey, 15 spectral datasets were prepared. Artificial Neural Networks (ANN), Support Vector Machines (SVM), Random Forest (RF), and K-Nearest Neighbor (KNN) models were trained and analyzed. The highest classification performance was obtained using spectral regions associated with protein and lipid bands. The SVM model achieved the highest average accuracy (0.9895), while the ANN produced comparable results with lower variability. Additionally, Variable Importance in Projection (VIP) analysis identified the most influential spectral bands in the protein (Amide II, ~1542 cm−1) and carbonyl (1744–1715 cm−1) regions. These findings indicate that classification is driven by chemically meaningful features rather than purely statistical patterns. The approach followed in this study suggests that, in FTIR-based classification rigorously evaluated with nested cross-validation, spectral region selection can outweigh model complexity. This approach demonstrates strong potential for rapid and non-destructive assessment, especially for real-time applications in grain processing and automated sorting systems. Full article
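The nested cross-validation protocol the abstract credits can be reproduced in outline with scikit-learn: an inner loop tunes hyperparameters on training folds only, while an outer loop scores the tuned model on folds it never saw, so no data that selected a setting also evaluates it. The Iris dataset stands in for the FTIR spectra (which are not public), and the SVM parameter grid is illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Inner loop (GridSearchCV) tunes C on training folds only; outer loop
# (cross_val_score) then scores the tuned model on held-out folds.
X, y = load_iris(return_X_y=True)  # placeholder for the FTIR spectra
inner = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
mean_acc = outer_scores.mean()
```

Comparing models (or spectral-region subsets) on `outer_scores` rather than on the inner tuning scores is precisely what lets the study claim that region selection, not model complexity, drives performance.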
(This article belongs to the Special Issue Integrating Spectroscopy and Machine Learning for Crop Phenotyping)

12 pages, 413 KB  
Article
Impact of Total Laboratory Automation on Urine Culture Turnaround Time: A Comparative Study Between Manual Workflow and WASPLab™
by Fizza Khalid, Ahmed J. Alzahrani, Hilal Mohammed, Aymen Khalaf Allah Gamma, Mohamed Elhadi Hassan, Christy Poulose, Azza ElSheikh, Khalid Sumaily, Ahmad Ali Alharbi, Najah Fayyad Aldrous, Mohammed Alsaadan, Mohammed Alnamnakani and Osamah T. Khojah
Diagnostics 2026, 16(8), 1235; https://doi.org/10.3390/diagnostics16081235 - 21 Apr 2026
Abstract
Background: Turnaround time (TAT) is a key performance indicator in clinical microbiology, particularly for urine cultures, which represent a high-volume workload and directly impact antimicrobial stewardship. Methods: This retrospective observational study compared urine culture TAT before (2023, manual workflow) and after (2025, total laboratory automation using WASPLab™) implementation in a high-volume reference laboratory. A total of 16,210 cultures in 2023 and 60,474 in 2025 were included. TAT was defined as the time from laboratory receipt to final report validation. Results: Implementation of total laboratory automation significantly reduced median TAT for both negative cultures (from 49.68 to 34.38 h) and positive cultures (from 50.42 to 34.62 h) (p < 0.001). In addition, variability in reporting times decreased, indicating improved consistency. Laboratory productivity increased from 2316 to 7559 cultures per full-time equivalent, representing a 3.26-fold improvement. Conclusions: Total laboratory automation significantly improved the speed and consistency of urine culture reporting, supporting enhanced laboratory efficiency and facilitating timely clinical decision-making. Full article
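The study's TAT metric (time from laboratory receipt to final report validation, summarized by the median) is straightforward to compute; a sketch with illustrative timestamps, not study data:

```python
from datetime import datetime
from statistics import median

def median_tat_hours(received, validated):
    """Median turnaround time in hours: laboratory receipt of each
    culture to final validation of its report."""
    return median(
        (v - r).total_seconds() / 3600 for r, v in zip(received, validated)
    )

# Three hypothetical cultures with TATs of 34.0 h, 35.0 h, and 36.5 h.
received = [datetime(2025, 3, 1, 8, 0), datetime(2025, 3, 1, 9, 0),
            datetime(2025, 3, 2, 8, 0)]
validated = [datetime(2025, 3, 2, 18, 0), datetime(2025, 3, 2, 20, 0),
             datetime(2025, 3, 3, 20, 30)]
```

The median (rather than the mean) is the right summary here because a handful of delayed reports would otherwise dominate the statistic.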
(This article belongs to the Section Clinical Laboratory Medicine)

29 pages, 388 KB  
Article
AI Agents in Financial Markets: Architecture, Applications, and Systemic Implications
by Hui Gong
FinTech 2026, 5(2), 34; https://doi.org/10.3390/fintech5020034 - 19 Apr 2026
Abstract
Recent advances in large language models, tool-using agents, and financial machine learning are shifting financial automation from isolated prediction tasks to integrated decision systems that can perceive information, reason over objectives, and generate or execute actions. The paper develops an integrative framework for analysing agentic finance: financial market environments in which autonomous or semi-autonomous AI systems participate in information processing, decision support, monitoring, and execution workflows. The analysis proceeds in three steps. First, the paper proposes a four-layer architecture of financial AI agents covering data perception, reasoning engines, strategy generation, and execution with control. Second, it introduces the Agentic Financial Market Model (AFMM), a stylised agent-based representation linking agent design parameters such as autonomy depth, heterogeneity, execution coupling, infrastructure concentration, and supervisory observability to market-level outcomes including efficiency, liquidity resilience, volatility, and systemic risk. Third, it presents an illustrative empirical application based on event studies of AI-agent capability disclosures and heterogeneous market repricing. It argues that the systemic implications of AI in finance depend less on model intelligence alone than on how agent architectures are distributed, coupled, and governed across institutions. The empirical application is intentionally exploratory: it does not validate the full AFMM but shows how one observable expectations channel can be studied using public data. In the near term, the most plausible equilibrium is bounded autonomy, in which AI agents operate as supervised co-pilots, monitoring systems, and constrained execution modules embedded within human decision processes. Full article
27 pages, 4918 KB  
Article
MultiFixRadSoft: A Comprehensive Tool for Primary Relative Radiometric Scale Realization in Radiation Thermometry
by Mehtap Ertürk, Mevlüt Karabulut, Ömer Faruk Kadı, Can Gözönünde, Patrik Broberg, Åge Andreas Falnes Olsen and Humbet Nasibli
Sensors 2026, 26(8), 2489; https://doi.org/10.3390/s26082489 - 17 Apr 2026
Abstract
This paper presents a practical implementation of relative primary radiation thermometry (RPRT) together with MultiFixRadSoft, an open-source software package developed in accordance with the Mise-en-Pratique for the kelvin (MeP-K) for realization of the thermodynamic temperature scale and uncertainty evaluation under the new definition of the kelvin. The software enables realization of temperature scales using ITS-90 metal fixed points as well as metal–carbon and metal–carbide–carbon eutectic high-temperature fixed points (HTFPs) for both radiation thermometers and radiometers. It incorporates automated routines for melting plateau analysis, including determination of the point of inflection, liquidus point, and melting range, together with correction modules for size-of-source effect, detector nonlinearity, emissivity, and temperature drop. Validation is demonstrated through experimental realization using six fixed points (Cu, Fe–C, Co–C, Pd–C, Ru–C, and WC–C) and a linear radiation thermometer. The software also supports ITS-90 extrapolation procedures and flexible calibration schemes (n = 1 to n ≥ 3), with automated Sakuma–Hattori fitting and full uncertainty propagation compliant with MeP-K requirements. The results show excellent agreement with manual analyses and published data, confirming the correctness of the implemented algorithms. By integrating data processing, scale realization, and uncertainty analysis within a unified and transparent framework, MultiFixRadSoft provides a robust and accessible tool for traceable radiometric thermometry, supporting emerging NMIs and industrial laboratories while promoting the wider adoption of primary thermodynamic temperature realization methods. Full article
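The Sakuma–Hattori fitting the software automates uses the Planckian form S(T) = C / (exp(c₂/(A·T + B)) − 1), which inverts in closed form for temperature. A round-trip sketch at the Cu freezing point; the coefficient values here are illustrative assumptions (A on the order of a ~900 nm mean wavelength), whereas in practice A, B, and C come from the fixed-point calibration fit:

```python
import math

C2 = 0.014388  # second radiation constant c2, in m*K

def sakuma_hattori(T, A, B, C):
    """Planckian Sakuma-Hattori signal model:
    S(T) = C / (exp(c2 / (A*T + B)) - 1)."""
    return C / (math.exp(C2 / (A * T + B)) - 1.0)

def temperature_from_signal(S, A, B, C):
    """Closed-form inverse: T = (c2 / ln(C/S + 1) - B) / A."""
    return (C2 / math.log(C / S + 1.0) - B) / A

# Round trip at the Cu freezing point with illustrative coefficients.
A, B, C = 9.0e-7, 1.0e-6, 1.0
T_cu = 1357.77  # Cu freezing point, K
S = sakuma_hattori(T_cu, A, B, C)
T_back = temperature_from_signal(S, A, B, C)
```

The closed-form inverse is what makes the Sakuma–Hattori model attractive for scale realization: once A, B, C are fitted at the fixed points, any measured signal maps directly to a thermodynamic temperature.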

28 pages, 2111 KB  
Article
Simulation-Based Safety Evaluation of Mixed Traffic with Autonomous Vehicles in Seaports
by Jingwen Wang, Anastasia Feofilova, Yadong Wang, Jixiao Jiang and Mengru Shao
J. Mar. Sci. Eng. 2026, 14(8), 739; https://doi.org/10.3390/jmse14080739 - 16 Apr 2026
Abstract
The increasing deployment of autonomous vehicles in port logistics requires safety assessment methods that remain valid in mixed traffic environments. This study evaluates the safety of mixed automated guided vehicle (AGV) and human-driven vehicle (HDV) traffic in a seaport terminal connected to an external urban road network. A microscopic traffic model was developed in AIMSUN Next to represent gate areas, internal roads, storage-yard access, berth interfaces, and external container-truck traffic. HDVs were modeled using a Gipps-based car-following model, whereas AGVs were represented through an Adaptive Cruise Control framework. Vehicle trajectories were exported to the Surrogate Safety Assessment Model (SSAM), where Time-to-Collision (TTC) and Post-Encroachment Time (PET) were used to detect and classify conflicts. Six staged fleet-composition scenarios were evaluated in 36 simulation runs, ranging from fully human-driven operation to full automation. Total conflicts decreased from 89 in the fully human-driven scenario to 43 in the fully automated scenario (−51.7%), while rear-end conflicts decreased from 70 to 30 (−57.1%). Crossing conflicts remained relatively stable across scenarios. At the same time, mean TTC decreased from 0.80 to 0.24 s and mean PET from 1.57 to 0.38 s, indicating tighter but more coordinated interactions under automated control. These results show that automation improves longitudinal safety performance in port traffic, but also that conventional TTC and PET thresholds calibrated for human-driven traffic may not be directly applicable to automated port operations. Automation-sensitive surrogate safety criteria are therefore needed for seaport mixed-traffic evaluation. Full article
(This article belongs to the Special Issue Deep Learning Applications in Port Logistics Systems)
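The TTC/PET screening step in the abstract can be sketched as follows. This is a minimal illustration of SSAM-style conflict detection and classification, not the authors' pipeline; the thresholds (TTC ≤ 1.5 s, PET ≤ 5.0 s) and conflict-angle bins (under 30° rear-end, over 85° crossing, otherwise lane-change) follow common SSAM defaults and, as the abstract itself argues, would need recalibration for automated port traffic.

```python
def classify_conflict(ttc, pet, angle_deg, ttc_max=1.5, pet_max=5.0):
    """Classify one vehicle-vehicle interaction SSAM-style.

    Returns the conflict type ("rear-end", "crossing", "lane-change")
    when both surrogate measures fall below their thresholds, else None.
    """
    if ttc > ttc_max or pet > pet_max:
        return None  # interaction too far apart to count as a conflict
    if angle_deg < 30:
        return "rear-end"
    if angle_deg > 85:
        return "crossing"
    return "lane-change"

# Illustrative (ttc_s, pet_s, conflict_angle_deg) interactions
events = [
    (0.80, 1.60, 10),   # close and nearly parallel -> rear-end
    (0.24, 0.38, 100),  # mean TTC/PET of the fully automated scenario
    (2.40, 6.00, 50),   # exceeds both thresholds -> not a conflict
]
print([classify_conflict(*e) for e in events])  # -> ['rear-end', 'crossing', None]
```

Lowering `ttc_max`/`pet_max` for AGV-only links is one simple way to express the automation-sensitive criteria the authors call for, since coordinated automated platoons routinely operate below human-calibrated thresholds without elevated risk.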

22 pages, 447 KB  
Systematic Review
Evolving Roles of Information Professionals in the Artificial Intelligence Era: A Systematic Literature Review
by Dyah Puspitasari Srirahayu, Dian Ekowati, Tiara Kusumaningtiyas, Esti Putri Anugrah, Alifian Sukma, Misita Anwar and Hanis Diyana Kamarudin
Publications 2026, 14(2), 25; https://doi.org/10.3390/publications14020025 - 16 Apr 2026
Abstract
The rapid advancement of artificial intelligence (AI) is reshaping the landscape of library and information science, significantly altering the roles and responsibilities of information professionals. This paper examines the transformation of information professionals' roles in the era of artificial intelligence. The study conducted a systematic literature review (SLR) following the PRISMA 2020 protocol. Titles and abstracts were screened against predefined inclusion criteria: English full-text journal articles, review papers, and conference papers indexed in Scopus that address the roles and competencies of information professionals in the era of artificial intelligence. The study employed a conceptual and review analysis of the documents to examine the use of AI and its impact on the roles of information professionals, investigating both the positive and negative effects of AI on these roles and the evolving role of information professionals in routine process automation. AI's presence is profoundly transforming virtually all of the roles of information professionals, bringing pertinent challenges: its impact is both positive and negative, and the roles themselves have undergone significant changes in the AI era. This paper presents a unique perspective on the evolving roles of information professionals in the era of artificial intelligence, offering original insights into how AI is reshaping the profession and highlighting the profound impacts and transformations that are redefining traditional practices and skill sets.
(This article belongs to the Special Issue AI in Open Access)
