Search Results (5,468)

Search Parameters:
Keywords = software development process

23 pages, 7611 KB  
Article
Spectacularity on the Frontline: An Interactive Materialization of the Costume of the Burgundian Prostitute in Louis Braun’s Panorama of the Battle of Murten
by Kathi Martin and Momo Jawwad
Heritage 2026, 9(2), 46; https://doi.org/10.3390/heritage9020046 - 27 Jan 2026
Abstract
The dressed body can reveal a great deal about the social, economic, political and artistic milieu that propelled a fashion style. Louis Braun used fashion to augment the narrative of his artwork, the Murten Panorama, a 10 m × 100 m cylindrical painting commemorating the Swiss victory against the army of the Duchy of Burgundy, 1476. The Laboratory for Experimental Museology, Ecole Polytechnique Fédérale de Lausanne, led by Sarah Kenderdine, has digitized the panorama, producing a 1.6-trillion-pixel digital twin, the largest digital image of a particular object ever created. Exhibitions of the twin are in progress across Switzerland and other international venues to commemorate the 550th anniversary of the Burgundian wars. Volumetric videos, 3D objects and historic costume characters, motion capture and a dynamic soundscape present a multisensory immersive experience. This paper outlines our method of ‘materializing’, in 3D, the dress of the Burgundian prostitute, a prominent character in the panorama. Researching the sartorial, historical and artistic influences affecting Braun while he created the artwork revealed multiple layers of fashion interpretation and informed our research on how to embody the materiality of the character’s costume. We discuss our multi-disciplinary process to ‘materialize’ the character and the software used in the development. Full article

50 pages, 5096 KB  
Review
Growth Simulation Model and Intelligent Management System of Horticultural Crops: Methods, Decisions, and Prospects
by Yue Lyu, Chen Cheng, Xianguan Chen, Shunjie Tang, Shaoqing Chen, Xilin Guan, Lu Wu, Ziyi Liang, Yangchun Zhu and Gengshou Xia
Horticulturae 2026, 12(2), 139; https://doi.org/10.3390/horticulturae12020139 - 27 Jan 2026
Abstract
In the context of the rapid transformation of global agricultural production towards intensification and intelligence, the precise and intelligent management of horticultural crop production processes is key to enhancing resource utilization efficiency and industry profitability. Crop growth and development models, as digital representations of the interactions between environment, crops, and management, are core tools for achieving intelligent decision-making in facility production. This paper provides a comprehensive review of the advancements in intelligent management models and systems for horticultural crop growth and development. It introduces the developmental stages of horticultural crop growth models and the integration of multi-source data, systematically organizing and analyzing the modeling mechanisms of crop growth and development process models centered on developmental stages, photosynthesis and respiration, dry matter accumulation and allocation, and yield and quality formation. Furthermore, it summarizes the current status of expert decision-support system software development and application based on crop models, achieving comprehensive functionalities such as data and document management, model parameter management and optimization, growth process and environmental simulation, management plan design and effect evaluation, and result visualization and decision product dissemination. This illustrates the pathway from theoretical research to practical application of models. Addressing the current challenges related to the universality of mechanisms, multi-source data assimilation, and intelligent decision-making, the paper looks forward to future research directions, aiming to provide theoretical references and technological insights for the future development and system integration of intelligent management models for horticultural crop growth and development. Full article
(This article belongs to the Section Protected Culture)

15 pages, 1728 KB  
Article
Reframing BIM: Toward Epistemic Resilience in Existing-Building Representation
by Ciera Hanson, Xiaotong Liu and Mike Christenson
Infrastructures 2026, 11(2), 40; https://doi.org/10.3390/infrastructures11020040 - 27 Jan 2026
Abstract
Conventional uses of building information modeling (BIM) in existing-building representation tend to prioritize geometric consistency and efficiency, but often at the expense of interpretive depth. This paper challenges BIM’s tendency to promote epistemic closure by proposing a method to foreground relational ambiguity, transforming view reconciliation from a default automated process into a generative act of critical inquiry. The method, implemented in Autodesk Revit, introduces a parametric reference frame within BIM sheets that foregrounds and manipulates reciprocal relationships between orthographic views (e.g., plans and sections) to promote interpretive ambiguity. Through a case study, the paper demonstrates how parameterized view relationships can resist oversimplification and encourage conflicting interpretations. By intentionally sacrificing efficiency for epistemic resilience, the method aims to expand BIM’s role beyond documentation, positioning it as a tool for architectural knowledge production. The paper concludes with implications for software development, pedagogy, and future research at the intersection of critical representation and computational tools. Full article
(This article belongs to the Special Issue Modern Digital Technologies for the Built Environment of the Future)

18 pages, 22560 KB  
Article
Data-Driven Motion Correction Algorithm: Validation in [13N]NH3 Dynamic PET/CT Scans
by Oscar Isaac Mendoza-Ibañez, Riemer H. J. A. Slart, Charles Hayden, Tonantzin Samara Martínez-Lucio, Friso M. van der Zant, Remco J. J. Knol and Sergiy V. Lazarenko
J. Clin. Med. 2026, 15(3), 984; https://doi.org/10.3390/jcm15030984 - 26 Jan 2026
Abstract
Background: Motion is a long-standing problem in cardiac PET/CT. An automated data-driven motion correction (DDMC) algorithm for within-reconstruction motion correction (MC) has been developed and validated in static images from [13N]NH3 and 82Rb PET/CT. This study aims to validate DDMC in dynamic [13N]NH3 PET/CT, and to explore the added value of DDMC in the evaluation of myocardial motion. Methods: Thirty-six PET/CT studies from normal patients and forty-three scans from patients with myocardial ischemia were processed using QPET software without MC (NMC), with manual in-software MC (ISMC), and with DDMC. Differences in the mean values of rest-MBF, stress-MBF, and CFR, as well as differences in effect size (ES) related to the use and type of MC method, were explored. Moreover, motion vectors provided by DDMC were analyzed to evaluate differences in myocardial motion between scan phases and axes, and to elucidate changes in MBF quantification in relation to the motion extent. Results: In both subgroups, repeated measures ANOVA showed that the use of MC significantly increased regional and global stress-MBF and CFR values (p < 0.05), regardless of the MC method. Paired t-test analysis demonstrated a comparable ES between MC tools, despite minor differences in Cx, RCA and global rest-MBF values. High-intensity motion (>6 mm) proved to be present almost exclusively in the Z (cranio-caudal) direction. In the same axis, motion was significantly higher during stress than rest, regardless of patient subgroup. Finally, the Jonckheere trend test showed a significant motion-related trend in stress-MBF values, with lower stress-MBF values observed as motion extent increased. Conclusions: DDMC is feasible in [13N]NH3 dynamic acquisitions and provides MBF/CFR values similar to those obtained with manual ISMC. The use of DDMC reduces post-processing times and observer variability, and allows a more extensive evaluation of motion. MC is highly recommended when using QPET, as motion in the Z-axis during stress scans negatively impacts stress-MBF quantification. Full article
(This article belongs to the Special Issue Recent Advancements in Nuclear Medicine and Radiology: 2nd Edition)

18 pages, 2932 KB  
Article
Quantification of Glycan in Glycoproteins via AUCAgent-Enhanced Analytical Ultracentrifugation
by Xiaojuan Yu, Zhaoxing Wang, Chengshi Zeng, Ruifeng Zhang, Qing Chang, Wendan Chu, Qinghua Ma, Ke Ma, Lan Wang, Chuanfei Yu and Wenqi Li
Pharmaceuticals 2026, 19(2), 210; https://doi.org/10.3390/ph19020210 - 26 Jan 2026
Abstract
Background: As essential biomolecules composed of proteins and carbohydrate moieties, glycoproteins play pivotal roles in numerous biological processes. The glycosylation level plays a crucial role in determining the functionality of glycoproteins. Therefore, the precise quantification of glycan components in proteins holds significant importance for research on and development of polysaccharide–protein-conjugated vaccines. Methods: In this study, a novel glycan quantification approach was developed, leveraging analytical ultracentrifugation (AUC) technology that synergistically utilizes ultraviolet wavelength absorption and interference data to directly determine glycan mass fractions in glycoproteins. Results: This methodology expands the analytical framework for glycoproteins while retaining the intrinsic advantages of AUC, enabling analysis in native states with high reproducibility as indicated by low standard deviation across replicates. Conclusions: The approach was implemented in our proprietary AUC data analysis software called AUCAgent (v1.8.8), providing a new method for glycoprotein quantification and polysaccharide ratio determination in polysaccharide-protein-conjugate vaccines. Full article
(This article belongs to the Section Biopharmaceuticals)

14 pages, 1003 KB  
Article
Use of Patient-Specific 3D Models in Paediatric Surgery: Effect on Communication and Surgical Management
by Cécile O. Muller, Lydia Helbling, Theodoros Xydias, Jeanette Greiner, Valérie Oesch, Henrik Köhler, Tim Ohletz and Jatta Berberat
J. Imaging 2026, 12(2), 56; https://doi.org/10.3390/jimaging12020056 - 26 Jan 2026
Abstract
Children with rare tumours and malformations may benefit from innovative imaging, including patient-specific 3D models that can enhance communication and surgical planning. The primary aim was to evaluate the impact of patient-specific 3D models on communication with families. The secondary aims were to assess their influence on medical management and to establish an efficient post-processing workflow. From 2021 to 2024, we prospectively included patients aged 3 months to 18 years with rare tumours or malformations. Families completed questionnaires before and after the presentation of a 3D model generated from MRI sequences, including peripheral nerve tractography. Treating physicians completed a separate questionnaire before surgical planning. Analyses were performed in R. Among 21 patients, diagnoses included 11 tumours, 8 malformations, 1 trauma, and 1 pancreatic pseudo-cyst. Likert scale responses showed improved family understanding after viewing the 3D model (mean score 3.94 to 4.67) and a high overall evaluation (mean 4.61). Physicians also rated the models positively. An efficient image post-processing workflow was defined. Although manual 3D reconstruction remains time-consuming, these preliminary results show that colourful, patient-specific 3D models substantially improve family communication and support clinical decision-making. They also highlight the need to support the development of MRI-based automated segmentation software using deep neural networks that is clinically approved and usable in routine practice. Full article
(This article belongs to the Special Issue 3D Image Processing: Progress and Challenges)

22 pages, 3681 KB  
Article
The Pelagic Laser Tomographer for the Study of Suspended Particulates
by M. Dale Stokes, David R. Nadeau and James J. Leichter
J. Mar. Sci. Eng. 2026, 14(3), 247; https://doi.org/10.3390/jmse14030247 - 24 Jan 2026
Abstract
An ongoing challenge in pelagic oceanography and limnology is to quantify and understand the distribution of suspended particles and particle aggregates with sufficient temporal and spatial fidelity to resolve their dynamics. These particles include biotic (mesoplankton, organic fragments, fecal pellets, etc.) and abiotic (dusts, precipitates, sediments and flocs, anthropogenic materials, etc.) matter and their aggregates (i.e., marine snow), which form a large part of the total particulate matter > 200 μm in size in the ocean. The transport of organic material from surface waters to the deep-sea floor is of particular interest, as it is recognized as a key factor controlling the global carbon cycle and hence a critical process influencing the sequestration of carbon dioxide from the atmosphere. Here we describe the development of an oceanographic instrument, the Pelagic Laser Tomographer (PLT), that uses high-resolution optical technology, coupled with post-processing analysis, to scan the 3D content of the water column to detect and quantify 3D distributions of small particles. Existing optical instruments typically trade sampling volume for spatial resolution or require large, complex platforms. The PLT addresses this gap by combining high-resolution laser-sheet imaging with large effective sampling volumes in a compact, deployable system. The PLT can generate spatial distributions of small particles (~100 µm and larger) across large water volumes (order 100–1000 m3) during a typical deployment, and allow measurements of particle patchiness over spatial scales down to less than 1 mm. The instrument’s small size (6 kg), high resolution (~100 µm in each 3000 cm2 tomographic image slice), and analysis software provide a tool for pelagic studies that have typically been limited by high cost, data storage, resolution, and mechanical constraints, which usually necessitate bulky instrumentation, infrequent deployment, and a large research vessel. Full article
(This article belongs to the Section Ocean Engineering)

22 pages, 3757 KB  
Article
Ensemble Machine Learning for Operational Water Quality Monitoring Using Weighted Model Fusion for pH Forecasting
by Wenwen Chen, Yinzi Shao, Zhicheng Xu, Zhou Bing, Shuhe Cui, Zhenxiang Dai, Shuai Yin, Yuewen Gao and Lili Liu
Sustainability 2026, 18(3), 1200; https://doi.org/10.3390/su18031200 - 24 Jan 2026
Abstract
Water quality monitoring faces increasing challenges due to accelerating industrialization and urbanization, demanding accurate, real-time, and reliable prediction technologies. This study presents a novel ensemble learning framework integrating Gaussian Process Regression, Support Vector Regression, and Random Forest algorithms for high-precision water quality pH prediction. The research utilized a comprehensive spatiotemporal dataset comprising 11 water quality parameters from 37 monitoring stations across Georgia, USA, spanning 705 days from January 2016 to January 2018. The ensemble model employed a dynamic weight allocation strategy based on cross-validation error performance, assigning optimal weights of 34.27% to Random Forest, 33.26% to Support Vector Regression, and 32.47% to Gaussian Process Regression. The integrated approach achieved superior predictive performance, with a mean absolute error of 0.0062 and a coefficient of determination of 0.8533, outperforming individual base learners across multiple evaluation metrics. Statistical significance testing using Wilcoxon signed-rank tests with a Bonferroni correction confirmed that the ensemble significantly outperforms all individual models (p < 0.001). Comparison with state-of-the-art models (LightGBM, XGBoost, TabNet) demonstrated competitive or superior ensemble performance. Comprehensive ablation experiments revealed that Random Forest removal causes the largest performance degradation (+4.43% MAE increase). Feature importance analysis revealed the dissolved oxygen maximum and conductance mean as the most influential predictors, contributing 22.1% and 17.5%, respectively. Cross-validation results demonstrated robust model stability with a mean absolute error of 0.0053 ± 0.0002, while bootstrap confidence intervals confirmed narrow uncertainty bounds of 0.0060 to 0.0066. Spatiotemporal analysis identified station-specific performance variations ranging from 0.0036 to 0.0150 MAE. High-error stations (12, 29, 33) were analyzed to identify distinguishing characteristics, including higher pH variability and potential upstream pollution influences. An integrated software platform was developed, featuring an intuitive interface, real-time prediction, and comprehensive visualization tools for environmental monitoring applications. Full article
(This article belongs to the Section Sustainable Water Management)
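
As an aside to the abstract above, the following minimal Python sketch illustrates one common way to implement error-based weight allocation and weighted fusion of base learners. The authors' exact weighting rule is not given in the abstract, so inverse cross-validation MAE is assumed here, and the error values are hypothetical, chosen only so the resulting weights roughly match the reported split.

```python
import numpy as np

def fusion_weights(cv_errors):
    """Turn per-model cross-validation errors into normalized fusion weights (inverse-error rule, assumed)."""
    inv = 1.0 / np.asarray(cv_errors, dtype=float)
    return inv / inv.sum()

def ensemble_predict(predictions, weights):
    """Weighted average of base-learner predictions (rows: models, columns: samples)."""
    return np.average(np.asarray(predictions), axis=0, weights=weights)

# Hypothetical CV errors for RF, SVR, and GPR; not taken from the paper.
weights = fusion_weights([0.00500, 0.00515, 0.00528])
print(weights)  # approximately [0.343, 0.333, 0.325], close to the reported 34.27% / 33.26% / 32.47%
```
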
23 pages, 5057 KB  
Article
DropSense: A Novel Imaging Software for the Analysis of Spray Parameters on Water-Sensitive Papers
by Ömer Barış Özlüoymak, Medet İtmeç and Alper Soysal
Appl. Sci. 2026, 16(3), 1197; https://doi.org/10.3390/app16031197 - 23 Jan 2026
Abstract
Measuring the spray parameters and providing feedback on the quality of the spraying is critical to ensuring that the spraying material reaches the appropriate region. A novel software tool entitled DropSense was developed to determine spray parameters quickly and accurately, and it was compared against DepositScan, ImageJ 1.54d and Image-Pro 10. Water-sensitive papers (WSP) were used to determine spray parameters such as deposit coverage, total deposits counted, DV10, DV50, DV90, density, deposit area and relative span values. Upon execution of the developed software, these parameters were displayed on the computer screen and then saved in an Excel spreadsheet file at the end of the image analysis. A conveyor belt system with three different belt speeds (4, 5 and 6 km h−1) and four nozzle types (AI11002, TXR8002, XR11002, TTJ6011002) were used for carrying out the spray experiments. The novel software was developed in the LabVIEW programming language. The WSP image analysis results for these spray parameters were statistically compared across the software packages. The results showed that the DropSense software had superior speed and ease of use in comparison to the other software for the image analysis of WSPs. The novel software showed mostly similar or more reliable performance compared to the existing software. The core technical innovation of DropSense lies in its integration of advanced morphological operations, which enable the accurate separation and quantification of overlapping droplet stains on WSPs. In addition, it performed fully automated processing of WSP images and significantly reduced analysis time compared to commonly used WSP image analysis software. Full article
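
For readers unfamiliar with the droplet-separation step mentioned above, the sketch below shows one common distance-transform-plus-watershed approach to splitting touching stains in a thresholded water-sensitive-paper image. DropSense itself is written in LabVIEW and its actual morphological pipeline is not published in this abstract, so this is only a generic illustration of the technique.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_droplets(binary_img: np.ndarray) -> np.ndarray:
    """Label droplet stains in a boolean (thresholded) WSP image, splitting touching ones."""
    distance = ndi.distance_transform_edt(binary_img)            # distance to background
    coords = peak_local_max(distance, min_distance=3,
                            labels=ndi.label(binary_img)[0])     # one peak per droplet core
    markers = np.zeros(binary_img.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)     # seed markers at the peaks
    return watershed(-distance, markers, mask=binary_img)        # one integer label per droplet
```
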

22 pages, 1433 KB  
Article
An Engineering-Based Methodology to Assess Alternative Options for Reusing Decommissioned Offshore Platforms
by Annachiara Martini, Raffaella Gerboni, Anna Chiara Uggenti, Claudia Vivalda, Emanuela Bruno, Francesca Verga, Giorgio Giglio and Andrea Carpignano
J. Mar. Sci. Eng. 2026, 14(3), 239; https://doi.org/10.3390/jmse14030239 - 23 Jan 2026
Abstract
In the current context of the energy transition, the reuse of offshore oil and gas (O&G) structures that have reached the end of their operational life presents new engineering challenges. Many projects aim to adapt existing facilities for a range of alternative uses. This paper outlines guidelines for identifying the most suitable conversion options aligned with the goals of the ongoing energy transition, focusing on the Italian offshore area. The study promotes the reuse—instead of partial or full removal—of existing offshore platforms originally built for the exploitation of hydrocarbon reservoirs. From an engineering perspective, the project describes the development of guidelines based on an innovative methodology to identify new uses for both offshore oil and gas platforms and the depleted reservoirs, with a focus on safety and environmental impact. The guidelines identify the most suitable and effective conversion option for the platform–reservoir system under consideration. To ensure a realistic approach, the developed methodology allows one to identify the preferable conversion option even when some pieces of information are missing or incomplete, as often happens in the early stages of a feasibility study. The screening process provides an associated level of uncertainty related to the degree of data incompleteness. The outcome is a complete evaluation procedure divided into five phases: definition of criteria; assignment of an importance scale to determine how critical each criterion is; connection of indices and weights to each criterion; and analysis of the relationships between them. The guidelines are implemented in a software tool that supports and simplifies the decision-making process. The results are very promising. The developed methodology and the related guidelines applied to a case study have proven to be an effective decision-support tool for analysts. The study shows that it is possible to identify the most suitable conversion option from a technical, engineering, and operational point of view while also considering its environmental impact and safety implications. Full article
(This article belongs to the Section Ocean Engineering)
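
A minimal sketch of how a weighted multi-criteria screening step with an uncertainty level for missing data might look in the spirit of the methodology above. The criteria, weights, and uncertainty measure below are assumptions chosen for illustration and are not taken from the paper.

```python
def score_option(criteria_scores: dict, weights: dict):
    """Weighted score over known criteria, plus a crude uncertainty = weight share of missing criteria."""
    known = {c: s for c, s in criteria_scores.items() if s is not None}
    weight_known = sum(weights[c] for c in known)          # assumes at least one criterion is known
    score = sum(weights[c] * s for c, s in known.items()) / weight_known
    uncertainty = 1.0 - weight_known / sum(weights.values())
    return score, uncertainty

# Hypothetical criteria and weights; "cost" is missing, as in an early feasibility study.
weights = {"safety": 0.4, "environment": 0.3, "cost": 0.2, "technical": 0.1}
scores = {"safety": 0.8, "environment": 0.6, "cost": None, "technical": 0.7}
print(score_option(scores, weights))  # (0.7125, 0.2): score over known criteria, 20% of weight unknown
```
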

34 pages, 10715 KB  
Article
Features of the Data Collection and Transmission Technology in an Intelligent Thermal Conditioning System for Engines and Vehicles Operating on Thermal Energy Storage Technology Based on a Digital Twin
by Igor Gritsuk and Justas Žaglinskis
Machines 2026, 14(1), 130; https://doi.org/10.3390/machines14010130 - 22 Jan 2026
Abstract
This article examines an integrated approach to data acquisition and transmission within an intelligent thermal conditioning system for engines and vehicles that operates using thermal energy storage and the digital twin concept. The system is characterized by its use of multiple primary energy sources to power internal subsystems and maintain optimal engine and vehicle temperature conditions. Building on a formalized conceptual model of the intelligent thermal conditioning system, the study identifies key technological features required for implementing complex operational processes, as well as the stages necessary for applying the proposed approach during the design and modernization phases throughout the system’s life cycle. A core block diagram of the system’s digital twin is presented, developed using mathematical models that describe support and monitoring processes under real operating conditions. Additionally, an architectural framework for organizing data collection and transmission is proposed, highlighting the integration of digital twin technologies into the thermal conditioning workflow. The article also introduces methods for adaptive data formation, transfer, and processing, supported by a specialized onboard software-diagnostic complex that enables structured information management. The practical implementation of the proposed solutions has the potential to enhance the energy efficiency of thermal conditioning processes and improve the reliability of vehicles employing thermal energy storage technologies. Full article
(This article belongs to the Special Issue Data-Driven Fault Diagnosis for Machines and Systems, 2nd Edition)

27 pages, 9070 KB  
Article
Research on the Prediction of Pressure, Temperature, and Hydrate Inhibitor Addition Amount After Surface Mining Throttling
by Dake Peng, Yuxin Wu, Yiyun Wang, Hong Wang, Junji Wei, Guojing Fu, Wei Luo and Jihan Wang
Processes 2026, 14(2), 376; https://doi.org/10.3390/pr14020376 - 21 Jan 2026
Abstract
During the trial mining process, ground horizontal pipes are prone to generating hydrates due to pressure and temperature changes, leading to ice blockage. Hydrate inhibitors are usually added on-site to prevent freezing blockage. However, existing addition methods have limitations, including poor real-time performance, insufficient accuracy in the addition amount, and dependence on manual adjustment. In view of this, this paper aims to develop models to predict the throttling pressure and temperature for horizontal ground pipes, and to indicate the amount of ethylene glycol needed to prevent freezing blockage, thereby laying the foundation for accurate, real-time prediction of fluid pressure and temperature and for controlling the addition amount. By integrating data-driven technologies and mechanism models, this study developed intelligent prediction systems for ground horizontal pipe throttling pressure and temperature and for the ethylene glycol addition amount required to suppress freeze blockage. First, a three-phase throttling mechanism model for oil, gas, and water is established using the energy conservation equation to accurately predict the pressure and temperature at the throttling points along the process. At the same time, HYSYS software is used to simulate various operating conditions and to fit the ethylene glycol addition amount prediction model. Finally, edge computing equipment is integrated to enable real-time data collection, prediction, and dynamic adjustment and optimization. The field measurement data from Well A showed that the model’s prediction errors for pressure and temperature before and after throttling are less than 6%, and the prediction error for the ethylene glycol addition amount is less than 5%, which provides key technical support for safe and efficient operation of the trial mining process as well as for cost reduction and efficiency improvement. Full article
(This article belongs to the Section Process Control and Monitoring)
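
As a point of reference for the mechanism model mentioned above: for adiabatic throttling with negligible kinetic-energy change, the steady-flow energy balance reduces to an enthalpy balance across the choke. The paper's actual formulation is not given in the abstract; the relation below is the textbook form, with the mixture enthalpy assumed to be the mass-fraction-weighted sum of the oil, gas, and water phase enthalpies.

```latex
% Hedged sketch, not taken from the paper: isenthalpic throttling of the three-phase mixture.
h_{\mathrm{mix}}(P_1, T_1) = h_{\mathrm{mix}}(P_2, T_2),
\qquad h_{\mathrm{mix}} = x_o\,h_o + x_g\,h_g + x_w\,h_w
```
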

24 pages, 1137 KB  
Article
Detecting TLS Protocol Anomalies Through Network Monitoring and Compliance Tools
by Diana Gratiela Berbecaru and Marco De Santo
Future Internet 2026, 18(1), 62; https://doi.org/10.3390/fi18010062 - 21 Jan 2026
Abstract
The Transport Layer Security (TLS) protocol is widely used nowadays to create secure communications over TCP/IP networks. Its purpose is to ensure confidentiality, authentication, and data integrity for messages exchanged between two endpoints. In order to facilitate its integration into widely used applications, the protocol is typically implemented through libraries, such as OpenSSL, BoringSSL, LibreSSL, WolfSSL, NSS, or mbedTLS. These libraries encompass functions that execute the specialized TLS handshake required for channel establishment, as well as the construction and processing of TLS records, and the procedures for closing the secure channel. However, these software libraries may contain vulnerabilities or errors that could potentially jeopardize the security of the TLS channel. To identify flaws or deviations from established standards within the implemented TLS code, a specialized tool known as TLS-Anvil can be utilized. This tool also verifies the compliance of TLS libraries with the specifications outlined in the Request for Comments documents published by the IETF. TLS-Anvil conducts numerous tests with a client/server configuration utilizing a specified TLS library and subsequently generates a report that details the number of successful tests. In this work, we exploit the results obtained from a selected subset of TLS-Anvil tests to generate rules used for anomaly detection in Suricata, a well-known signature-based Intrusion Detection System. During the tests, TLS-Anvil generates .pcap capture files that report all the messages exchanged. Such files can be subsequently analyzed with Wireshark, allowing for a detailed examination of the messages exchanged during the tests and a thorough understanding of their structure on a byte-by-byte basis. Through the analysis of the TLS handshake messages produced during testing, we develop customized Suricata rules aimed at detecting TLS anomalies that result from flawed implementations within the intercepted traffic. Furthermore, we describe the specific test environment established for the purpose of deriving and validating certain Suricata rules intended to identify anomalies in nodes utilizing a version of the OpenSSL library that does not conform to the TLS specification. The rules that delineate TLS deviations or potential attacks may subsequently be integrated into a threat detection platform supporting Suricata. This integration will enhance the capability to identify TLS anomalies arising from code that fails to adhere to the established specifications. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
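
As a purely hypothetical illustration of the kind of signature the paper derives (the authors' actual rules are not reproduced in the abstract), the snippet below appends a simple rule to a Suricata rules file; it flags a TLS record header that advertises the legacy SSLv3 version.

```python
# Hypothetical example only; rule content, sid, and file name are assumptions.
RULE = (
    'alert tcp any any -> any 443 '
    '(msg:"Hypothetical legacy SSLv3 record header"; '
    'flow:to_server,established; content:"|16 03 00|"; depth:3; '
    'sid:1000001; rev:1;)'
)

with open("local.rules", "a") as f:
    f.write(RULE + "\n")  # append to a rules file that Suricata is configured to load
```
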

19 pages, 495 KB  
Article
Mitigating Prompt Dependency in Large Language Models: A Retrieval-Augmented Framework for Intelligent Code Assistance
by Saja Abufarha, Ahmed Al Marouf, Jon George Rokne and Reda Alhajj
Software 2026, 5(1), 4; https://doi.org/10.3390/software5010004 - 21 Jan 2026
Abstract
Background: The implementation of Large Language Models (LLMs) in software engineering has provided new and improved approaches to code synthesis, testing, and refactoring. However, even with these new approaches, the practical efficacy of LLMs is restricted due to their reliance on user-given prompts. These prompts can vary considerably in quality and specificity, which results in inconsistent or suboptimal outputs from the LLM application. Methods: This research therefore aims to alleviate these issues by developing an LLM-based code assistance prototype with a framework based on Retrieval-Augmented Generation (RAG) that automates the prompt-generation process and improves the outputs of LLMs using contextually relevant external knowledge. Results: The tool aims to reduce dependence on the manual preparation of prompts and enhance accessibility and usability for developers of all experience levels. The tool achieved a Code Correctness Score (CCS) of 162.0 and an Average Code Correctness (ACC) score of 98.8% in the refactoring task. These results compare with a CCS of 139.0 and an ACC of 85.3% for the test-generation task. Conclusions: This research contributes to the growing list of Artificial Intelligence (AI)-powered development tools and offers new opportunities for boosting the productivity of developers. Full article
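
A minimal, self-contained sketch of the retrieval-augmented prompting idea described above. The paper's actual retrieval pipeline, embedding model, and prompt template are not specified in the abstract, so simple token overlap stands in for semantic retrieval and all names are illustrative.

```python
def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the k corpus snippets sharing the most tokens with the query (toy stand-in for embeddings)."""
    q_tokens = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: len(q_tokens & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(user_request: str, corpus: list[str]) -> str:
    """Augment a terse user request with retrieved context before calling an LLM."""
    context = "\n---\n".join(retrieve(user_request, corpus))
    return f"Context:\n{context}\n\nTask:\n{user_request}\n"

# Hypothetical project snippets standing in for the external knowledge base.
snippets = [
    "def parse_config(path): ...  # loads YAML configuration",
    "Unit tests use pytest fixtures defined in conftest.py",
    "The HTTP client retries failed requests with exponential backoff",
]
print(build_prompt("add a retry limit to the http client", snippets))
```
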

18 pages, 3461 KB  
Article
Real Time IoT Low-Cost Air Quality Monitoring System
by Silvian-Marian Petrică, Ioana Făgărășan, Nicoleta Arghira and Iulian Munteanu
Sustainability 2026, 18(2), 1074; https://doi.org/10.3390/su18021074 - 21 Jan 2026
Abstract
This paper proposes a complete solution implementing a low-cost, energy-independent, network-connected, and scalable environmental air parameter monitoring system. It features a remote sensing module that provides environmental data to a cloud-based server and a software application for real-time and historical data processing, standardized air quality index computations, and comprehensive visualization of the evolution of environmental parameters. A fully operational prototype was built around a low-cost micro-controller connected to low-cost air parameter sensors and a GSM modem, powered by a stand-alone renewable energy-based power supply. The associated software platform has been developed using Microsoft Power Platform technologies. The collected data is transmitted from sensors to a remote server via the GSM modem using custom-built JSON structures. From there, data is extracted and forwarded to a database accessible to users through a dedicated application. The overall accuracy of the air quality monitoring system has been thoroughly validated both in a controlled indoor environment and against a trusted outdoor air quality reference station. The proposed air parameter monitoring solution paves the way for future research actions, such as the classification of polluted sites or prediction of air parameter variations at the site of interest. Full article
(This article belongs to the Section Air, Climate Change and Sustainability)
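
The abstract mentions custom-built JSON structures for the sensor-to-server link but does not give their schema; the snippet below is an assumed, illustrative payload only, with every field name a placeholder.

```python
import json

# Hypothetical sensor-to-server payload; field names are assumptions, not the paper's schema.
reading = {
    "station_id": "node-01",
    "timestamp": "2026-01-21T10:15:00Z",
    "pm2_5_ugm3": 12.4,
    "pm10_ugm3": 20.1,
    "temperature_c": 7.8,
    "humidity_pct": 61.0,
}
payload = json.dumps(reading)  # serialized and sent over the GSM link to the cloud endpoint
print(payload)
```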