
Search Results (114)

Search Parameters:
Keywords = unreliable users

25 pages, 924 KB  
Article
Barriers to Changing Travel Modes: A Case Study of Reykjavík, Iceland
by Johanna Raudsepp, Chloé Ruiz, Victor Schlencker and Jukka Heinonen
Urban Sci. 2026, 10(3), 131; https://doi.org/10.3390/urbansci10030131 - 1 Mar 2026
Viewed by 552
Abstract
Transportation remains one of the sectors with the highest GHG emissions in urban areas, forming around a third of household footprints in affluent countries like the Nordics and being the main source of particulate matter emissions in urban areas around the world. This study focuses on the Reykjavík Capital Area in Iceland, which is known for its car-centricity and where modal shift remains a major challenge. The study examines barriers to modal shift to understand why Reykjavík residents are reluctant to change their transport modes away from private cars. The study uses softGIS survey data from 1801 respondents, gathered in 2025. The results show that mobility remains car-dominated, with even regular public and active-mode users owning a car for running errands. The main barriers to switching to public or active modes include long travel distances, long travel times, an unreliable public transport system, and difficulties running errands. Slight differences emerged between native and non-native residents’ barriers, with the latter being more likely to be impacted by price and connectivity issues. The study further recognizes the potential impact of climate awareness and education, as people with a stronger belief in individual impact on climate were less likely to find these aspects to be a barrier. Full article
(This article belongs to the Section Urban Mobility and Transportation)

21 pages, 20486 KB  
Article
Semantic–Physical Sensor Fusion for Safe Physical Human–Robot Interaction in Dual-Arm Rehabilitation
by Disha Zhu, Xuefeng Wang and Shaomei Shang
Sensors 2026, 26(5), 1510; https://doi.org/10.3390/s26051510 - 27 Feb 2026
Viewed by 340
Abstract
Safe physical human–robot interaction (pHRI) in rehabilitation requires reliable perception and low-latency decision making under heterogeneous and unreliable sensor inputs. This paper presents a multimodal sensor-fusion-based safety framework that integrates physical state estimation, semantic information fusion, and an edge-deployed large language model (LLM) for real-time pHRI safety control. A dynamics-based virtual sensing method is introduced to estimate internal joint torques from external force–torque measurements, achieving a normalized mean absolute error of 18.5% in real-world experiments. An asynchronous semantic state pool with a time-to-live mechanism is designed to fuse visual, force, posture, and human semantic cues while maintaining robustness to sensor delays and dropouts. Based on structured multimodal tokens, an instruction-tuned edge LLM outputs discrete safety decisions that are further mapped to continuous compliant control parameters. The framework is trained using a hybrid dataset consisting of limited real-world samples and LLM-augmented synthetic data, and evaluated on unseen real and mixed-condition scenarios. Experimental results show reliable detection of safety-critical events with a low emergency misdetection rate, while maintaining an end-to-end decision latency of approximately 223 ms on edge hardware. Real-world experiments on a rehabilitation robot demonstrate effective responses to impacts, user instability, and visual occlusions, indicating the practical applicability of the proposed approach for real-time pHRI safety monitoring. Full article
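The time-to-live mechanism described in this abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: the class name, modality keys, and TTL values are all hypothetical.

```python
import time

class SemanticStatePool:
    """Illustrative fused-state pool: each modality's latest cue is kept
    only while its time-to-live (TTL) has not expired, so stale readings
    from delayed or dropped sensors are ignored at fusion time."""

    def __init__(self, ttl_seconds):
        self.ttl = dict(ttl_seconds)  # per-modality TTL, e.g. {"vision": 0.5}
        self.entries = {}             # modality -> (value, timestamp)

    def update(self, modality, value, now=None):
        self.entries[modality] = (value, now if now is not None else time.monotonic())

    def snapshot(self, now=None):
        """Return only the cues that are still fresh."""
        now = now if now is not None else time.monotonic()
        return {m: v for m, (v, t) in self.entries.items()
                if now - t <= self.ttl.get(m, 0.0)}

pool = SemanticStatePool({"vision": 0.5, "force": 0.1})
pool.update("vision", "user_upright", now=0.0)
pool.update("force", 12.3, now=0.0)
print(pool.snapshot(now=0.3))  # the force reading (TTL 0.1 s) has expired
```

A downstream safety decision would then operate only on the surviving fresh cues, which is what makes the fusion robust to sensor delays and dropouts.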
(This article belongs to the Section Biomedical Sensors)

28 pages, 5136 KB  
Article
Stage-Aware Reconstruction of Typhoon Inflow for Offshore Wind Turbines Using WRF and TurbSim
by Jundong Wang, Liye Zhao, Lei Xue, Qianqian Li and Yu Xue
J. Mar. Sci. Eng. 2026, 14(5), 438; https://doi.org/10.3390/jmse14050438 - 26 Feb 2026
Viewed by 292
Abstract
Accurate typhoon inflow characterization is essential for offshore wind turbine safety in typhoon-prone regions. This study presents a physics-informed WRF–TurbSim framework that reconstructs rotor-relevant, stage-aware inflow fields for Typhoon In-Fa (2021) by mapping mesoscale stability and turbulence diagnostics into a User-Defined von Kármán model. Spectral and coherence checks confirm consistency with the imposed constraints and show pronounced regime dependence: low-frequency coherence decay remains near IEC neutral behavior, whereas high-frequency decay weakens substantially during the stable eye stage. The results suggest that neutral coherence assumptions may be unreliable in strongly stable typhoon regimes, motivating stage-aware inflow characterization for engineering applications. Full article
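The "IEC neutral behavior" used here as a baseline is commonly written as the IEC 61400-1 exponential coherence model. A sketch, assuming the standard constants a = 12 and b = 0.12 (parameter names are our own):

```python
import math

def iec_coherence(f, r, v_hub, l_c, a=12.0, b=0.12):
    """IEC 61400-1 exponential coherence model: coherence between two
    points separated by r [m] at frequency f [Hz], for hub-height mean
    wind speed v_hub [m/s] and coherence scale parameter l_c [m]."""
    return math.exp(-a * math.sqrt((f * r / v_hub) ** 2 + (b * r / l_c) ** 2))

# Coherence decays with frequency and separation; stage-aware inflow
# models would replace the fixed decay with regime-dependent values.
print(iec_coherence(f=0.1, r=10.0, v_hub=12.0, l_c=340.2))
```

The abstract's finding is that the decay rate implied by this neutral form underestimates how quickly high-frequency coherence weakens in the stable eye stage.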

37 pages, 501 KB  
Article
Comparative Analysis of Attribute-Based Encryption Schemes for Special Internet of Things Applications
by Łukasz Pióro, Krzysztof Kanciak and Zbigniew Zieliński
Electronics 2026, 15(3), 697; https://doi.org/10.3390/electronics15030697 - 5 Feb 2026
Viewed by 462
Abstract
Attribute-based encryption (ABE) is an advanced public key encryption mechanism that enables the precise control of access to encrypted data based on attributes assigned to users and data. Attribute-based access control (ABAC), which is built on ABE, is crucial in providing dynamic, fine-grained, and context-aware security management in modern Internet of Things (IoT) applications. ABAC controls access based on attributes associated with users, devices, resources, and environmental conditions rather than fixed roles, making it highly adaptable to the complex and heterogeneous nature of IoT ecosystems. ABE can significantly improve the security and manageability of modern military IoT systems. Nevertheless, its practical implementation requires obtaining a range of performance data and assessing the additional overhead, particularly regarding data transmission efficiency. This paper provides a comparative analysis of the performance of two cryptographic schemes for attribute-based encryption in the context of special Internet of Things (IoT) applications. This applies to special environments, both military and civilian, where infrastructure is unreliable and dynamic and decisions must be made locally and in near-real time. From a security perspective, there is a need for strong authentication, precise access control, and a zero-trust approach at the network edge as well. The CIRCL scheme, based on traditional pairing-based ABE (CP-ABE), is compared with the newer Covercrypt scheme, a hybrid key encapsulation mechanism with access control (KEMAC) that provides quantum resistance. The main goal is to determine which scheme scales better and meets the performance requirements for two different scenarios: large corporate networks (where scalability is key) and tactical edge networks (where minimal bandwidth and post-quantum security are paramount). 
The benchmark results are used to compare the operating costs in detail, such as the key generation time, message encryption and decryption times, public key size, and ciphertext overhead, showing that Covercrypt provides a reduction in ciphertext overhead in tactical scenarios, while CIRCL offers faster decryption throughput in large-scale enterprise environments. It is concluded that the optimal choice depends on the specific constraints of the operating environment. Full article
(This article belongs to the Special Issue Computer Networking Security and Privacy)

22 pages, 444 KB  
Article
Domain Knowledge-Enhanced LLMs for Fraud and Concept Drift Detection
by Ali Şenol, Garima Agrawal and Huan Liu
Electronics 2026, 15(3), 534; https://doi.org/10.3390/electronics15030534 - 26 Jan 2026
Cited by 2 | Viewed by 621
Abstract
Deceptive and evolving conversations on online platforms threaten trust, security, and user safety, particularly when concept drift obscures malicious intent. Large Language Models (LLMs) offer strong natural language reasoning but remain unreliable in risk-sensitive scenarios due to contextual ambiguity and hallucinations. This article introduces a domain knowledge-enhanced Dual-LLM framework that integrates structured cues with pretrained models to improve fraud detection and drift classification. The proposed approach achieves 98% accuracy on benchmark datasets, significantly outperforming zero-shot LLMs and traditional classifiers. The results highlight how domain-grounded prompts enhance both accuracy and interpretability, offering a trustworthy path for applying LLMs in safety-critical applications. Beyond advancing the state of the art in fraud detection, this work has the potential to benefit domains such as cybersecurity, e-commerce, financial fraud prevention, and online content moderation. Full article
(This article belongs to the Special Issue New Trends in Representation Learning)

20 pages, 3662 KB  
Article
Enhancing Signal Processing Capability with Tabu Search Algorithm Utilization for Rate-4/5 Modulation Coded Bit-Patterned Magnetic Recording
by Mutita Mattayakan, Chanon Warisarn, Jaejin Lee and Kittipon Kankhunthod
Appl. Sci. 2025, 15(24), 12944; https://doi.org/10.3390/app152412944 - 8 Dec 2025
Viewed by 349
Abstract
To meet the growing demand for higher storage capacities, bit-patterned magnetic recording (BPMR) has emerged as a leading solution for achieving ultra-high user densities (UDs). However, BPMR systems are significantly impacted by two-dimensional (2D) interferences, specifically inter-symbol interference (ISI) and inter-track interference (ITI), which can degrade the quality of the readback signal. This paper introduces a rate-4/5 constructive ITI (CITI) modulation scheme, combined with a Tabu search (TS)-based error correction algorithm, to address the limitations of conventional CITI modulation codes. In the original encoding scheme, some codewords still contain forbidden patterns within their borders. The TS algorithm enhances the performance of the outermost tracks by refining unreliable bits identified through a distance-based reliability metric, which differs from earlier TS-based detectors that were directly used for multi-track detection. A proposed soft-information adjuster is then used to correct the poor reliability of soft information, resulting in improved soft-information reliability and decoding performance. A modified TS detector is also proposed, where the single-bit criterion for selecting the number of input bits is adopted, to improve neighbor selection and better align with the signal characteristics of the inner tracks. Simulation results show that the proposed system can achieve up to 2.7 dB and 4.0 dB improvements in bit error rate (BER) at a UD of 2.4 terabits per square inch, compared to conventional uncoded and coded systems, respectively, while also reducing computational complexity. Furthermore, the results imply that when recording systems must operate under fluctuations in bit-island size and position, our proposed system can provide superior performance. Full article
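The general shape of Tabu-search bit refinement can be sketched in a few lines. This is a generic toy, not the paper's detector: the cost function, the unreliable-position list, and the toy codeword are all hypothetical stand-ins for the distance-based reliability metric described above.

```python
def tabu_refine(bits, cost, unreliable, tenure=3, iters=50):
    """Illustrative Tabu search: each iteration greedily flips one of the
    unreliable bit positions, keeping recently flipped positions tabu for
    `tenure` iterations so the search can escape local minima."""
    best, cur = list(bits), list(bits)
    best_cost = cost(best)
    tabu = {}  # position -> iteration until which the move is forbidden
    for it in range(iters):
        candidates = [p for p in unreliable if tabu.get(p, -1) < it]
        if not candidates:
            break
        # choose the non-tabu flip with the lowest resulting cost
        p = min(candidates, key=lambda q: cost(cur[:q] + [1 - cur[q]] + cur[q + 1:]))
        cur[p] ^= 1
        tabu[p] = it + tenure
        if cost(cur) < best_cost:
            best, best_cost = list(cur), cost(cur)
    return best

# Toy demo: recover a target word by refining flagged positions
target = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 1, 1, 0, 0, 0, 1, 0]  # bits 1 and 3 are wrong
hamming = lambda b: sum(x != y for x, y in zip(b, target))
print(tabu_refine(received, hamming, unreliable=[1, 3, 5]))
```

In the real system the cost would come from a channel-distance metric rather than a known target, but the tabu bookkeeping is the same.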
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

13 pages, 2545 KB  
Article
PixelCut: A Unified Solution for Zero-Configuration 16S rRNA Trimming via Computer Vision
by Dongin Kim, Woo Jin Kim, Hyun-Myung Woo and Hyundoo Jeong
Curr. Issues Mol. Biol. 2025, 47(12), 968; https://doi.org/10.3390/cimb47120968 - 21 Nov 2025
Cited by 1 | Viewed by 705
Abstract
16S rRNA amplicon sequencing has been an effective method for profiling microbial taxonomy in microbiome research, as it offers lower per-sample costs and higher sample throughput than shotgun metagenomics. Although 16S rRNA sequencing offers clear advantages over shotgun sequencing, it depends on precise trimming of low-quality bases at the 3′ ends of reads. Given the widespread use of 16S rRNA amplicon sequencing, there is an increasing demand for analysis tools that can identify errors in the 3′ region of reads and remove erroneous bases. While various algorithms for predicting trim locations are widely employed, most are command-line standalone tools, which pose challenges for users with limited computational background or resources. Furthermore, in the absence of biological or experimental priors such as amplicon size, trim position predictions may be unreliable. Here, we introduce PixelCut, a fully automated trim-position prediction framework that requires no hyperparameters or prior biological information for accurate prediction. Unlike most available algorithms that operate on raw FASTQ data, PixelCut analyzes the per-base quality report generated by FastQC to infer trimming positions. Based on the recommended quality score threshold from the quality report, PixelCut inspects the quality scores across bases and automatically determines the recommended trim position using character recognition techniques based on computer vision. We have also developed a user-friendly web application to make the method accessible to those without programming expertise, while offering a command-line version for advanced users. Through comprehensive computer simulations, we show that PixelCut produces taxonomic profiling results that are consistent with those from popular trim-location prediction algorithms. Full article
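The underlying trimming decision that PixelCut automates can be illustrated with a simple threshold scan over per-base quality scores. This sketch is a stand-in, not PixelCut's computer-vision pipeline; the threshold and window size are hypothetical defaults.

```python
def trim_position(per_base_quality, threshold=20, window=4):
    """Toy 3'-end trimming rule: return the first cycle at which the mean
    quality of a sliding window drops below the threshold; bases from
    that position onward would be cut."""
    n = len(per_base_quality)
    for i in range(n - window + 1):
        if sum(per_base_quality[i:i + window]) / window < threshold:
            return i
    return n  # no trimming needed

# Quality typically degrades toward the 3' end of a read
quals = [35, 35, 34, 33, 30, 28, 24, 19, 15, 12]
print(trim_position(quals))
```

PixelCut's contribution is inferring this position automatically from the FastQC per-base quality report, with no user-supplied threshold or amplicon-size prior.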
(This article belongs to the Special Issue Challenges and Advances in Bioinformatics and Computational Biology)

23 pages, 989 KB  
Article
Aquila: Efficient In-Kernel System Call Telemetry for Cloud-Native Environments
by Juyong Shin, Jisu Kim and Jaehyun Nam
Sensors 2025, 25(21), 6511; https://doi.org/10.3390/s25216511 - 22 Oct 2025
Viewed by 1574
Abstract
System call telemetry is essential for understanding runtime behavior in cloud-native infrastructures, but existing eBPF-based monitors suffer from high per-event overhead, unreliable delivery under load, and limited context for correlating multi-step activities. These issues reduce scalability, create blind spots in telemetry streams, and complicate the analysis of complex workload behaviors. This work presents Aquila, a lightweight telemetry framework that emphasizes efficiency, reliability, and semantic fidelity. Aquila employs a dual-path kernel pipeline that separates fixed-size metadata from variable-length attributes, reducing serialization costs and enabling high-throughput event processing. It introduces priority-aware buffering and explicit drop detection to retain loss-sensitive events while providing visibility into overload conditions. In the user space, kernel traces are enriched with Kubernetes metadata, mapping low-level system calls to pods, containers, and namespaces. Evaluation under representative workloads shows that Aquila improves scalability, reduces event loss, and enhances the semantic completeness of system call telemetry compared with existing approaches. Full article
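Priority-aware buffering with explicit drop detection can be sketched as follows. This is an illustrative user-space analogue, not Aquila's in-kernel eBPF pipeline; class and event names are hypothetical.

```python
from collections import deque

class PriorityBuffer:
    """Sketch of priority-aware buffering with drop accounting: when the
    buffer is full, the oldest low-priority event is evicted to admit a
    high-priority one, and every drop is counted so consumers can detect
    overload instead of silently losing events."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queues = {"high": deque(), "low": deque()}
        self.dropped = {"high": 0, "low": 0}

    def _size(self):
        return len(self.queues["high"]) + len(self.queues["low"])

    def push(self, event, priority="low"):
        if self._size() >= self.capacity:
            if priority == "high" and self.queues["low"]:
                self.queues["low"].popleft()  # evict oldest low-priority event
                self.dropped["low"] += 1
            else:
                self.dropped[priority] += 1   # drop the incoming event
                return False
        self.queues[priority].append(event)
        return True

buf = PriorityBuffer(capacity=2)
buf.push("read()", "low")
buf.push("write()", "low")
buf.push("execve()", "high")  # evicts the oldest low-priority event
print(buf.dropped)            # {'high': 0, 'low': 1}
```

The drop counters are the "explicit drop detection" idea: loss-sensitive (high-priority) events survive overload, and the loss that does occur is visible rather than a silent blind spot.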
(This article belongs to the Section Internet of Things)

17 pages, 414 KB  
Article
DQMAF—Data Quality Modeling and Assessment Framework
by Razan Al-Toq and Abdulaziz Almaslukh
Information 2025, 16(10), 911; https://doi.org/10.3390/info16100911 - 17 Oct 2025
Viewed by 1666
Abstract
In today’s digital ecosystem, where millions of users interact with diverse online services and generate vast amounts of textual, transactional, and behavioral data, ensuring the trustworthiness of this information has become a critical challenge. Low-quality data—manifesting as incompleteness, inconsistency, duplication, or noise—not only undermines analytics and machine learning models but also exposes unsuspecting users to unreliable services, compromised authentication mechanisms, and biased decision-making processes. Traditional data quality assessment methods, largely based on manual inspection or rigid rule-based validation, cannot cope with the scale, heterogeneity, and velocity of modern data streams. To address this gap, we propose DQMAF (Data Quality Modeling and Assessment Framework), a generalized machine learning–driven approach that systematically profiles, evaluates, and classifies data quality to protect end-users and enhance the reliability of Internet services. DQMAF introduces an automated profiling mechanism that measures multiple dimensions of data quality—completeness, consistency, accuracy, and structural conformity—and aggregates them into interpretable quality scores. Records are then categorized into high, medium, and low quality, enabling downstream systems to filter or adapt their behavior accordingly. A distinctive strength of DQMAF lies in integrating profiling with supervised machine learning models, producing scalable and reusable quality assessments applicable across domains such as social media, healthcare, IoT, and e-commerce. The framework incorporates modular preprocessing, feature engineering, and classification components using Decision Trees, Random Forest, XGBoost, AdaBoost, and CatBoost to balance performance and interpretability. We validate DQMAF on a publicly available Airbnb dataset, showing its effectiveness in detecting and classifying data issues with high accuracy. 
The results highlight its scalability and adaptability for real-world big data pipelines, supporting user protection, document and text-based classification, and proactive data governance while improving trust in analytics and AI-driven applications. Full article
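The profiling-then-bucketing step can be sketched in miniature. This is an illustration in the spirit of DQMAF, not its implementation: the schema, weights, and score thresholds are hypothetical.

```python
def quality_profile(record, schema):
    """Toy per-record profiling: measure completeness (required fields
    present) and structural conformity (present fields have the expected
    type), aggregate into a score, and bucket the record."""
    fields = list(schema)
    present = [f for f in fields if record.get(f) not in (None, "")]
    completeness = len(present) / len(fields)
    conforming = [f for f in present if isinstance(record[f], schema[f])]
    conformity = len(conforming) / len(present) if present else 0.0
    score = 0.5 * completeness + 0.5 * conformity     # weights are illustrative
    label = "high" if score >= 0.8 else "medium" if score >= 0.5 else "low"
    return {"completeness": completeness, "conformity": conformity,
            "score": score, "label": label}

schema = {"name": str, "price": float, "reviews": int}
print(quality_profile({"name": "Loft", "price": 120.0, "reviews": 8}, schema))
print(quality_profile({"name": "Loft", "price": "n/a"}, schema))
```

In the full framework these interpretable dimension scores become features for the supervised classifiers (Random Forest, XGBoost, etc.) rather than being thresholded by hand.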
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)

19 pages, 684 KB  
Article
A Wi-Fi Fingerprinting Indoor Localization Framework Using Feature-Level Augmentation via Variational Graph Auto-Encoder
by Dongdeok Kim, Jae-Hyeon Park and Young-Joo Suh
Electronics 2025, 14(14), 2807; https://doi.org/10.3390/electronics14142807 - 12 Jul 2025
Cited by 3 | Viewed by 3325
Abstract
Wi-Fi fingerprinting is a widely adopted technique for indoor localization in location-based services (LBS) due to its cost-effectiveness and ease of deployment using existing infrastructure. However, the performance of these systems often suffers due to missing received signal strength indicator (RSSI) measurements, which can arise from complex indoor structures, device limitations, or user mobility, leading to incomplete and unreliable fingerprint data. To address this critical issue, we propose Feature-level Augmentation for Localization (FALoc), a novel framework that enhances Wi-Fi fingerprinting-based localization through targeted feature-level data augmentation. FALoc uniquely models the observation probabilities of RSSI signals by constructing a bipartite graph between reference points and access points, which is then processed by a variational graph auto-encoder (VGAE). Based on these learned probabilities, FALoc intelligently imputes likely missing RSSI values or removes unreliable ones, effectively enriching the training data. We evaluated FALoc using an MLP (Multi-Layer Perceptron)-based localization model on the UJIIndoorLoc and UTSIndoorLoc datasets. The experimental results demonstrate that FALoc significantly improves localization accuracy, achieving mean localization errors of 7.137 m on UJIIndoorLoc and 7.138 m on UTSIndoorLoc, which represent improvements of approximately 12.9% and 8.6% over the respective MLP baselines (8.191 m and 7.808 m), highlighting the efficacy of our approach in handling missing data. Full article
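The impute-or-remove decision at the heart of FALoc can be illustrated with a much simpler stand-in: the paper learns observation probabilities with a VGAE over a reference-point/access-point bipartite graph, whereas this sketch just uses the empirical observation rate. All values and the sentinel are hypothetical.

```python
def impute_rssi(fingerprints, missing=-110, min_obs_rate=0.3):
    """For each AP (column): if it is observed often enough at this
    reference point, fill its missing RSSI values with the mean of the
    observed readings; otherwise leave them missing as unreliable."""
    n_aps = len(fingerprints[0])
    out = [row[:] for row in fingerprints]
    for ap in range(n_aps):
        seen = [row[ap] for row in fingerprints if row[ap] != missing]
        obs_rate = len(seen) / len(fingerprints)
        if obs_rate >= min_obs_rate and seen:
            mean = sum(seen) / len(seen)
            for row in out:
                if row[ap] == missing:
                    row[ap] = mean
    return out

# Rows = scans at one reference point, columns = APs; -110 marks "not seen"
scans = [[-60, -110, -110],
         [-62, -80,  -110],
         [-61, -110, -110],
         [-63, -82,  -110]]
print(impute_rssi(scans))
```

Replacing the empirical rate with a learned, graph-structured observation probability is what lets FALoc distinguish "genuinely out of range" from "randomly dropped" far more reliably than this heuristic.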
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)

33 pages, 3077 KB  
Article
Perspective-Based Microblog Summarization
by Chih-Yuan Li, Soon Ae Chun and James Geller
Information 2025, 16(4), 285; https://doi.org/10.3390/info16040285 - 1 Apr 2025
Cited by 2 | Viewed by 2193
Abstract
Social media allows people to express and share a variety of experiences, opinions, beliefs, interpretations, or viewpoints on a single topic. Summarizing a collection of social media posts (microblogs) on one topic may be challenging and can result in an incoherent summary due to multiple perspectives from different users. We introduce a novel approach to microblog summarization, the Multiple-View Summarization Framework (MVSF), designed to efficiently generate multiple summaries from the same social media dataset depending on chosen perspectives and deliver personalized and fine-grained summaries. The MVSF leverages component-of-perspective computing, which can recognize the perspectives expressed in microblogs, such as sentiments, political orientations, or unreliable opinions (fake news). The perspective computing can filter social media data to summarize them according to specific user-selected perspectives. For the summarization methods, our framework implements three extractive summarization methods: Entity-based, Social Signal-based, and Triple-based. We conduct comparative evaluations of MVSF summarizations against state-of-the-art summarization models, including BertSum, SBert, T5, and Bart-Large-CNN, by using a gold-standard BBC news dataset and Rouge scores. Furthermore, we utilize a dataset of 18,047 tweets about COVID-19 vaccines to demonstrate the applications of MVSF. Our contributions include the innovative approach of using user perspectives in summarization methods as a unified framework, capable of generating multiple summaries that reflect different perspectives, in contrast to prior approaches of generating one-size-fits-all summaries for one dataset. The practical implication of MVSF is that it offers users diverse perspectives from social media data. Our prototype web application is also implemented using ChatGPT to show the feasibility of our approach. Full article
(This article belongs to the Special Issue Text Mining: Challenges, Algorithms, Tools and Applications)

16 pages, 630 KB  
Article
A Study on Performance Improvement of Maritime Wireless Communication Using Dynamic Power Control with Tethered Balloons
by Tao Fang, Jun-han Wang, Jaesang Cha, Incheol Jeong and Chang-Jun Ahn
Electronics 2025, 14(7), 1277; https://doi.org/10.3390/electronics14071277 - 24 Mar 2025
Cited by 5 | Viewed by 1105
Abstract
In recent years, the demand for maritime wireless communication has been increasing, particularly in areas such as ship operations management, marine resource utilization, and safety assurance. However, due to the difficulty of deploying base stations (BSs), maritime communication still faces challenges in terms of limited coverage and unreliable communication quality. As the number of users on ships and offshore platforms increases, along with the growing demand for mobile communication at sea, conventional terrestrial base stations struggle to provide stable connectivity. Therefore, existing maritime communication primarily relies on satellite communication and long-range Wi-Fi. However, these solutions still have limitations in terms of cost, stability, and communication efficiency. Satellite communication solutions, such as Starlink and Iridium, provide global coverage and high reliability, making them essential for deep-sea and offshore communication. However, these systems have high operational costs and limited bandwidth per user, making them impractical for cost-sensitive nearshore communication. Additionally, geostationary satellites suffer from high latency, while low Earth orbit (LEO) satellite networks require specialized and expensive terminals, increasing hardware costs and limiting compatibility with existing maritime communication systems. On the other hand, 5G-based maritime communication offers high data rates and low latency, but its infrastructure deployment is demanding, requiring offshore base stations, relay networks, and high-frequency mmWave (millimeter-wave) technology. The high costs of deployment and maintenance restrict the feasibility of 5G networks for large-scale nearshore environments. Furthermore, in dynamic maritime environments, maintaining stable backhaul connections presents a significant challenge.
To address these issues, this paper proposes a low-cost nearshore wireless communication solution utilizing tethered balloons as coastal base stations. Unlike satellite communication, which relies on expensive global infrastructure, or 5G networks, which require extensive offshore base station deployment, the proposed method provides a more economical and flexible nearshore communication alternative. The tethered balloon is physically connected to the coast, ensuring stable power supply and data backhaul while providing wide-area coverage to support communication for ships and offshore platforms. Compared to short-range communication solutions, this method reduces operational costs while significantly improving communication efficiency, making it suitable for scenarios where global satellite coverage is unnecessary and 5G infrastructure is impractical. Additionally, conventional uniform power allocation or channel-gain-based amplification methods often fail to meet the communication demands of dynamic maritime environments. This paper introduces a nonlinear dynamic power allocation method based on channel gain information to maximize downlink communication efficiency. Simulation results demonstrate that, compared to conventional methods, the proposed approach significantly improves downlink communication performance, verifying its feasibility in achieving efficient and stable communication in nearshore environments. Full article
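One way to read "nonlinear dynamic power allocation based on channel gain information" is an inverse-gain weighting, sketched below. This is an assumption for illustration, not the paper's exact mapping: the exponent alpha and all values are hypothetical.

```python
def allocate_power(gains, total_power, alpha=1.5):
    """Illustrative nonlinear allocation: weight each user by the inverse
    of its channel gain raised to alpha, so badly faded users receive
    disproportionately more power, then normalize to the power budget."""
    weights = [g ** (-alpha) for g in gains]
    total_weight = sum(weights)
    return [total_power * w / total_weight for w in weights]

# Three downlink users; the second has the weakest channel
gains = [1.0, 0.25, 0.5]
powers = allocate_power(gains, total_power=10.0)
print([round(p, 3) for p in powers])
```

With alpha = 1 this reduces to plain channel-inversion; alpha > 1 makes the compensation nonlinear, which is the kind of adjustment the abstract contrasts with uniform or linear gain-based schemes.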

9 pages, 4167 KB  
Proceeding Paper
Toward Context-Aware GNSS Positioning: A Preliminary Analysis
by Giovanni Cappello, Antonio Maratea, Ciro Gioia, Antonio Angrisano, Silvio Del Pizzo and Salvatore Gaglione
Eng. Proc. 2025, 88(1), 14; https://doi.org/10.3390/engproc2025088014 - 21 Mar 2025
Viewed by 982
Abstract
The vast majority of GNSS users move in urban areas, where the signal conditions are highly unstable and multipath or gross errors make GNSS navigation unreliable or plainly unfeasible. In this study, features from real GNSS data collected by different grades of receivers have been compared to find candidate statistical indicators of the context that allow the automatic recognition of open sky or obstructed environments. The features considered are all pre-PVT and snapshot-based and hence suitable for real-time applications. They are namely the number of visible satellites, the dilution of precision, multipath linear combination with dual-frequency measurements, and the C/N0 difference between each couple of satellites in the same epoch at the same frequency. All measurements have been gathered both in open sky and in obstructed scenarios. The evidences suggest multipath linear combination and the C/N0 difference between couples of satellites as the most promising baselines for an environment classifier based on Machine Learning. Full article
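The C/N0-difference feature is straightforward to compute per epoch. A minimal sketch (satellite IDs and values are hypothetical):

```python
from itertools import combinations

def cn0_differences(cn0_by_sat):
    """For every pair of satellites tracked in the same epoch at the same
    frequency, take the absolute C/N0 difference [dB-Hz]; a wide spread
    hints at obstruction or multipath rather than open sky."""
    return {(a, b): abs(cn0_by_sat[a] - cn0_by_sat[b])
            for a, b in combinations(sorted(cn0_by_sat), 2)}

# One epoch: two strong satellites and one attenuated one
epoch = {"G01": 45.0, "G07": 44.0, "G13": 32.0}
diffs = cn0_differences(epoch)
print(max(diffs.values()))  # the spread the classifier would use as a feature
```

Because it needs only a single epoch of tracking-loop output (no position fix), this feature is pre-PVT and snapshot-based, matching the real-time constraint stated in the abstract.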
(This article belongs to the Proceedings of European Navigation Conference 2024)

24 pages, 3280 KB  
Article
Comparative Analysis on Modelling Approaches for the Simulation of Fatigue Disbonding with Cohesive Zone Models
by Johan Birnie, Maria Pia Falaschetti and Enrico Troiani
Aerospace 2025, 12(2), 139; https://doi.org/10.3390/aerospace12020139 - 13 Feb 2025
Cited by 3 | Viewed by 1918
Abstract
Adhesively bonded joints are essential in the aeronautical industry, offering benefits such as weight reduction and enhanced sustainability. However, certifying these joints is challenging due to unreliable methods for assessing their strength, and the development of predictive models for fatigue-driven disbonding remains an ongoing effort. This manuscript presents the implementation and validation of a cohesive zone model for studying high-cycle fatigue disbonding under Mode I and Mixed-Mode loading. The model was integrated into the commercial finite element analysis software Abaqus using a user-defined material subroutine (UMAT). Two modelling approaches were investigated: one replacing the adhesive with a cohesive layer, and the other incorporating a cohesive layer at the adhesive’s mid-plane while modelling its entire thickness, using both 2D and 3D techniques. Validation was conducted against experimental data from the literature examining the influence of adhesive thickness on fatigue behaviour in DCB and CLS tests. The findings confirm that the model accurately predicts fatigue disbonding across all cases examined. Additionally, the analysis reveals that modelling the adhesive thickness plays a critical role in the simulation outcomes: variations in adhesive thickness can significantly alter the crack growth behaviour, highlighting the importance of carefully considering this parameter in future assessments and applications. Full article
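The cohesive zone formulation itself is not reproduced in the abstract. A common ingredient of such fatigue-disbonding models is a Paris-type growth law linking the disbond growth rate da/dN to the energy release rate range ΔG. The minimal numerical sketch below integrates such a law over the disbond length; the coefficients C and m and the ΔG(a) profile are purely illustrative assumptions, not values from the paper:

```python
def cycles_to_disbond(a0, af, delta_g_of_a, C=1e-3, m=4.0, n_steps=1000):
    """Estimate fatigue cycles for a disbond to grow from a0 to af.

    Numerically integrates a Paris-type law da/dN = C * (dG(a))**m
    using the midpoint rule, where delta_g_of_a(a) returns the energy
    release rate range at disbond length a.
    """
    da = (af - a0) / n_steps
    cycles = 0.0
    for i in range(n_steps):
        a = a0 + (i + 0.5) * da  # midpoint of the i-th growth increment
        cycles += da / (C * delta_g_of_a(a) ** m)
    return cycles
```

With a constant ΔG of 1.0 and the default C = 1e-3, m = 4, growing a disbond by one unit length takes 1000 cycles; in a DCB-like specimen ΔG would instead vary with disbond length, which is exactly what the cohesive layer resolves in the paper's finite element setting.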

26 pages, 4641 KB  
Article
Enhancing Safety in U.S. Coal Mines Through a Rib Support Recommendation Tool
by Alper Kirmaci, Dakshith Ruvin Wijesinghe, Dogukan Guner, Kutay E. Karadeniz, Cameron Mitchell and Taghi Sherizadeh
Geosciences 2025, 15(1), 17; https://doi.org/10.3390/geosciences15010017 - 8 Jan 2025
Cited by 1 | Viewed by 1414
Abstract
Despite ongoing efforts to enhance coal rib stability, the underground coal mining sector continues to face incidents of rib failure, leading to injuries and fatalities. The development and validation of effective rib support systems are crucial for mitigating these risks. Unfortunately, a standardized design methodology that accommodates the diverse geological conditions of U.S. coal mines is missing. Current practices are often based on trial-and-error or outdated methods, yielding unreliable outcomes. This research aims to fill this gap by creating a comprehensive methodology for designing rib support systems suitable for U.S. underground mines. It encompasses in situ pull-out tests of coal rib bolts, numerical model validations, and parametric studies on variables affecting rib stability. A significant achievement of this study is the creation of the rib support recommendation tool (RSR), a user-friendly application that offers site-specific rib support advice. This tool leverages the results from parametric studies and improved Coal Pillar Rib Rating (CPRR) system values to recommend effective rib support. Validated by field data, the RSR tool promises to significantly improve mining safety and efficiency by providing a systematic and reliable method for rib support design, with ongoing efforts to further validate its effectiveness. Full article
(This article belongs to the Section Geomechanics)
