
Search Results (257)

Search Parameters:
Keywords = truthful reporting

27 pages, 1324 KB  
Review
Artificial Intelligence Architectures in Oral Rehabilitation: A Focused Review of Deep Learning Models for Implant Planning, Prosthodontic Design, and Peri-Implant Diagnosis
by Hossam Dawa, Carlos Aroso, Ana Sofia Vinhas, José Manuel Mendes and Arthur Rodriguez Gonzalez Cortes
Appl. Sci. 2026, 16(8), 3739; https://doi.org/10.3390/app16083739 - 10 Apr 2026
Abstract
Deep learning is increasingly integrated into oral rehabilitation workflows, particularly in implant planning, prosthodontic design automation, and peri-implant diagnosis. However, reported performance is heterogeneous and difficult to compare across tasks, modalities, and validation designs. The goal of this study was to critically analyze deep learning architecture families applied to oral rehabilitation and to provide task-driven selection guidance supported by an evidence table reporting dataset characteristics, validation strategy, and performance metrics. A focused narrative review was conducted using transparent, database-specific search criteria (final n = 10 included studies), emphasizing implant planning (cone–beam computed tomography [CBCT]-based segmentation), prosthodontic design (intraoral scan [IOS]/mesh inputs), and peri-implant diagnosis (periapical/panoramic radiographs). Evidence certainty for each clinical task was assessed using GRADE-informed ratings (High/Moderate/Low/Very Low). Extracted variables included clinical task, imaging modality, dataset size, architecture, validation strategy (internal vs. internal + external), split level, ground truth protocol, and performance metrics. A structured computational and hardware feasibility analysis was conducted for each architecture family to support real-world deployment planning. Encoder–decoder networks (U-Net/nnU-Net) dominate CBCT segmentation for implant planning, while detection architectures (Faster R-CNN, YOLO) support implant localization and peri-implant assessment on radiographs. Generative models (3D GANs, transformer-based point-to-mesh networks) enable crown design from three-dimensional scans. Hybrid CNN–Transformer architectures show promise for multimodal CBCT–IOS fusion, though direct evidence from the included studies remains limited to a single study. External validation remains uncommon yet essential given the risk of domain shift. 
In conclusion, architecture selection should be anchored to task geometry (2D vs. 3D), artifact burden, and required clinical output type. Reporting standards should prioritize dataset transparency, validation rigor, multi-center external testing, and uncertainty-aware outputs. Full article

28 pages, 2852 KB  
Article
Defect Monitoring of Complex Geometries Through Machine Learning in LPBF Metal Additive Manufacturing
by Marcin Magolon, Jan Boer and Mohamed Elbestawi
J. Manuf. Mater. Process. 2026, 10(4), 127; https://doi.org/10.3390/jmmp10040127 - 9 Apr 2026
Abstract
Laser powder bed fusion (LPBF) can fabricate intricate metal components but is prone to defects, such as porosity and cracks, that degrade performance. We present an in situ monitoring framework that fuses structure-borne acoustic emission (AE) and coaxial two-color pyrometry acquired synchronously at 1 MHz. Modality-specific encoders are pretrained separately, their latent representations are exported, and a lightweight feature-level fusion classifier with two binary heads predicts crack-like and porosity-like indications. Evaluation uses a held-out grouped experiment/build-machine-part split with independent Archimedes density and micro-CT ground truth. On the held-out test set, the fused model achieved F1 = 0.974 for crack-like detection and F1 = 0.987 for porosity-like detection, with AUROC = 0.998 and 0.993, respectively. Recall was 1.00 for both heads, corresponding to false-positive rates of 11.18% for crack-like and 0.945% for porosity-like indications. These results support synchronized AE-pyrometry fusion as a promising high-sensitivity in situ screening approach for LPBF. A later matched within-framework ablation campaign was also performed under stricter checkpoint-screening rules to compare AE + PY + Aux, AE + PY, AE-only, and PY-only variants under a common grouped-split protocol. Together, these results support multimodal monitoring while highlighting the need for explicit coupon/geometry-stratified reporting and for separately architecture-optimized unimodal baselines. Full article
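The F1 and false-positive figures reported above follow directly from confusion-matrix counts; a minimal sketch, with hypothetical counts (the paper's raw confusion matrices are not given here). Note that recall = 1.00 corresponds to zero false negatives:

```python
def f1_score(tp, fp, fn):
    # F1 is the harmonic mean of precision and recall.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def false_positive_rate(fp, tn):
    # Fraction of true-negative samples incorrectly flagged.
    return fp / (fp + tn)

# Hypothetical counts; fn = 0 reproduces the paper's recall of 1.00.
print(round(f1_score(tp=90, fp=10, fn=0), 3))        # 0.947
print(round(false_positive_rate(fp=10, tn=890), 4))  # 0.0111
```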

29 pages, 6283 KB  
Article
Modularity-Driven Keyword Co-Occurrence Network for Mining Statistical Associations in Construction Safety Accidents
by Shu Liu, Weidong Yan, Jian Ma, Guoqi Liu and Rui Zhang
Buildings 2026, 16(7), 1461; https://doi.org/10.3390/buildings16071461 - 7 Apr 2026
Viewed by 154
Abstract
To address the limitations of traditional construction safety accident analysis, which relies on manually defined causal relationships, requires extensive data annotation, and struggles to identify latent risks from Chinese unstructured texts, this study proposes an unsupervised and data-driven framework, termed CESA-Miner, for mining statistical association patterns among construction safety accidents. The proposed framework adopts a modularity-driven keyword optimization strategy to automatically identify a stable set of risk-related features. Based on this, an accident risk weighted co-occurrence network is constructed, where statistical associations are represented through keyword co-occurrence patterns and network community structures. Community detection algorithms are then applied to identify accident clusters and their underlying relationships. Using a dataset of 1368 official construction accident reports, the results show that the network modularity increases from 0.173 to 0.683, indicating significantly improved structural quality and community separability. In the absence of explicit ground truth, structural quality is evaluated using network modularity as a proxy metric. Compared with conventional clustering-based and embedding-based approaches, the proposed method yields a more structurally distinct network community organization and offers a complementary structure-aware perspective for characterizing accident relationships. The framework enables large-scale intelligent analysis of accident texts without requiring manual annotation, providing data-driven support for latent risk identification and statistical pattern analysis in construction safety. Full article
(This article belongs to the Special Issue AI in Construction: Automation, Optimization, and Safety)
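Modularity, the structural-quality proxy cited above (0.173 → 0.683), can be computed directly from an edge list and a community partition via Newman's formula Q = Σ_c [e_c/m − (d_c/2m)²]. A pure-Python sketch on a toy graph; the node labels and communities are illustrative, not from the paper's keyword network:

```python
def modularity(edges, communities):
    """Newman modularity for an undirected, unweighted graph:
    Q = sum over communities of [internal_edges/m - (degree_sum/(2m))^2]."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    comm_of = {n: i for i, c in enumerate(communities) for n in c}
    q = 0.0
    for i, comm in enumerate(communities):
        internal = sum(1 for u, v in edges
                       if comm_of[u] == i and comm_of[v] == i)
        d = sum(degree[n] for n in comm)
        q += internal / m - (d / (2 * m)) ** 2
    return q

# Two tight keyword clusters joined by a single bridge edge (toy data).
edges = [(0, 1), (2, 3), (1, 2)]
print(round(modularity(edges, [{0, 1}, {2, 3}]), 4))  # 0.1667
```

A partition that cuts many edges scores near zero, which is why the jump to 0.683 indicates well-separated communities.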

22 pages, 4917 KB  
Technical Note
Reducing Latency in Digital Twins: A Framework for Near-Real-Time Progress and Quality Reporting
by Zvonko Sigmund, Ivica Završki, Ivan Marović and Kristijan Vilibić
Buildings 2026, 16(7), 1448; https://doi.org/10.3390/buildings16071448 - 6 Apr 2026
Viewed by 321
Abstract
While Digital Twins offer transformative potential, their efficacy for real-time control is constrained by the slow data acquisition and the high computational intensity required to process raw datasets like point clouds. This paper identifies these critical bottlenecks—specifically the latency between data capture and actionable insight—and proposes a refined theoretical framework for near-real-time automated progress monitoring and quality reporting. Building on the findings of the NORMENG project and informing the subsequent AutoGreenTraC project, this research synthesizes state-of-the-art advancements in reality capture, including LIDAR, SfM-MVS, and 360-degree vision. The study highlights a fundamental divergence in stakeholder requirements: the need for millimeter-level precision in quality control versus the demand for high-velocity documentation for progress monitoring. A key innovation presented is the shift toward neural rendering techniques to bypass the computational delays of traditional photogrammetry and enable immediate on-site visualization. By structuring a tiered processing hierarchy that combines lightweight edge analysis for immediate safety and progress monitoring with asynchronous high-fidelity Digital Twin updates, the framework aims to establish a single source of truth. Full article

33 pages, 6049 KB  
Article
Blockchain-Based Mixed-Node Auction Mechanism
by Xu Liu and Junwu Zhu
Electronics 2026, 15(7), 1516; https://doi.org/10.3390/electronics15071516 - 4 Apr 2026
Viewed by 212
Abstract
Blockchain-based auctions often utilize smart contracts to automate auction rules, with much research focusing on enhancing privacy and fairness through cryptographic techniques. However, the authenticity of external data input into these systems is frequently overlooked. In particular, rational nodes may manipulate bidding data by submitting false types to maximize their utility, compromising market fairness and the reliability of auction outcomes. The aim of this study is to propose an alternative blockchain-based auction mechanism to incentivize nodes to report types honestly. We propose the Mixed-Node Advertising Auction (MNAA) mechanism for digital advertising auctions on blockchain systems. MNAA integrates quasi-linear and value maximization utility models to design allocation and pricing rules that eliminate nodes’ incentives to misreport their types, ensuring the authenticity of data submitted to the auction. To enhance efficiency, MNAA employs state channel technology and off-chain smart contracts, reducing main chain interactions. Theoretical analysis confirms that MNAA incentivizes truthful behavior and ensures security and correctness. Simulation results show that MNAA outperforms Generalized Second Price (GSP), Mixed Bidders with Private Classes (MPR), and Vickrey–Clarke–Groves (VCG) auctions in terms of liquid social welfare (LSW), publisher revenue, and allocation efficiency, while also improving the transaction throughput and showing good performance in terms of transaction costs and latency. Full article
(This article belongs to the Special Issue Novel Methods Applied to Security and Privacy Problems, Volume II)
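MNAA's exact allocation and pricing rules are not reproduced in the abstract. For orientation, the single-item Vickrey (second-price) rule underlying the VCG baseline is the classic example of a truthful mechanism: the highest bidder wins but pays the second-highest bid, so reporting one's true value is a dominant strategy. A minimal sketch:

```python
def vickrey_auction(bids):
    """Single-item second-price (Vickrey) auction.
    Returns (winner, price): the top bidder wins, paying the
    second-highest bid, which removes any incentive to misreport."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

# Illustrative node bids (not from the paper's simulations).
print(vickrey_auction({"a": 10, "b": 7, "c": 4}))  # ('a', 7)
```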

16 pages, 1185 KB  
Article
Leveraging Large Language Models for Automated Extraction of Abdominal Aortic Aneurysm Features from Radiology Reports
by Praneel Mukherjee, Ryan C. Lee, Roham Hadidchi, Sonya Henry, Michael Coard, Matthew Davis, Yossef Rubinov, Ha Nguyen-Luong, Leah Katz and Tim Q. Duong
Diagnostics 2026, 16(7), 1083; https://doi.org/10.3390/diagnostics16071083 - 3 Apr 2026
Viewed by 240
Abstract
Background/Objectives. Abdominal computed tomography (CT) radiology reports contain critical information for abdominal aortic aneurysm (AAA) management, including aneurysm presence, size, rupture status, and prior repair. However, this information is often embedded within lengthy, heterogeneous reports, making manual extraction inefficient. We evaluated the performance of multiple large language models (LLMs) for automated extraction of AAA-related findings from radiology reports. Methods. We retrospectively analyzed 500 abdominal CT reports mentioning AAA from an urban academic health system (2020–2024). Ground truth labels were established by manual review. Four open-source LLMs (Qwen2.5-7B-Instruct, Llama3-Med42-8B, GPT-OSS-20B, and MedGemma-27B-text-it) were evaluated for extraction of aneurysm presence, size, morphology, rupture status, impending rupture, and prior aortic repair. Model outputs were compared with ground truth using exact-match accuracy, and inter-model agreement was assessed using Fleiss’ kappa. Reasoning traces were examined to characterize correct and incorrect model behavior. Results. Accuracy for identifying AAA presence ranged from 0.90 to 0.95 (κ = 0.851), and prior aortic repair from 0.90 to 0.97 (κ = 0.793). Accuracy for aneurysm size ranged from 0.67 to 0.88 (κ = 0.340), with low κ’s due to class imbalance or dimension misselection. Rupture and impending rupture were identified with accuracies exceeding 0.90 across models, though agreement was lower (κ = 0.485 and 0.589), reflecting low event prevalence. Larger models (GPT-OSS-20B, MedGemma-27B) generally outperformed smaller models. Reasoning analysis revealed strengths in measurement prioritization but recurrent errors, including dimension misselection, over-inference of prior repair, and conservative classification of rupture-related findings. Conclusions. 
LLMs can accurately extract clinically relevant AAA information from radiology reports with interpretable reasoning, with larger and medically trained models outperforming smaller or general-purpose models. Performance varies by task and model, underscoring the need for careful validation and human-in-the-loop deployment in clinical settings. Full article
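The inter-model agreement statistic used in this study, Fleiss' kappa, generalizes Cohen's kappa to more than two raters by comparing observed per-item agreement against agreement expected from the category frequencies. A compact sketch with hypothetical label counts (not the study's data): four models each labelling four reports into two classes:

```python
def fleiss_kappa(table):
    """Fleiss' kappa. table[i][j] counts raters assigning item i to
    category j; every row sums to the number of raters n."""
    N = len(table)
    n = sum(table[0])
    # Mean observed per-item agreement.
    p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in table) / N
    # Chance agreement from overall category proportions.
    totals = [sum(row[j] for row in table) for j in range(len(table[0]))]
    p_e = sum((t / (N * n)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical: 4 models, 4 reports, 2 classes (e.g., AAA present/absent).
ratings = [[4, 0], [4, 0], [2, 2], [0, 4]]
print(round(fleiss_kappa(ratings), 3))  # 0.644
```

As the abstract notes, kappa can be depressed by class imbalance even when raw accuracy is high, since rare categories contribute little to chance-corrected agreement.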

23 pages, 6950 KB  
Article
Under-Canopy Archaeological Mapping Using LiDAR Data and AI Methods
by Gabriele Mazzacca and Fabio Remondino
Heritage 2026, 9(4), 134; https://doi.org/10.3390/heritage9040134 - 27 Mar 2026
Viewed by 362
Abstract
Airborne laser scanning (ALS) and UAV-mounted LiDAR sensors have become well-established tools for identifying and mapping archaeological features across varying scales and contexts. Numerous algorithms have been developed over the years for generating Digital Terrain or Features Models (DTMs/DFMs), which provide an accurate representation of the ground or structures’ surface, serving as the foundation for subsequent archaeological analyses. In this study, we report the developed multi-level multi-resolution (MLMR) methodology, based on machine/deep learning methods, for DFM generation through point cloud semantic segmentation. The work also compares different approaches and the impact of the resolution on their performance. To this end, each approach’s performance is evaluated with a series of quantitative and qualitative analyses, with an eye on hardware limitations and time constraints. Three test sites from Mediterranean and Alpine environments, with manually annotated ground truth data, are used for the evaluation of each methodological approach. Full article

28 pages, 3056 KB  
Article
A Claim-Conditioned Framework for Assessing Emotion Expression Reliability in LLM-Generated Text
by Ahmet Remzi Özcan
Mathematics 2026, 14(7), 1110; https://doi.org/10.3390/math14071110 - 26 Mar 2026
Viewed by 335
Abstract
Reliable evaluation of emotional expression in large language model (LLM) outputs remains methodologically under-specified, particularly for long-form generation where label-only correctness provides limited evidence of affective reliability. A claim-conditioned framework is introduced for cross-model comparison under matched elicitation conditions, with TEAS (Text Emotion Adherence Score) as its core continuous metric. Defined in a shared prototype space induced by a frozen reference encoder, TEAS combines affective separability with entropy-aware uncertainty, enabling reliability assessment beyond discrete agreement within a fixed evaluator. Evaluation is conducted on a controlled synthetic corpus under a ground-truth-free, claim-conditioned protocol across four widely used LLM families (Gemini, GPT, Grok, and Mistral). In addition to overall comparative ordering, auxiliary diagnostic measures are reported to localize failure modes and support interpretation of model behavior, together with Holm-corrected pairwise comparisons, sequence-level drift analysis, and local hyperparameter sensitivity analysis. Empirical results show stable endpoint separation, aggregation-sensitive differences among close models, measurable sequence-level degradation, and stable relative orderings under tested local parameter variations. Overall, the study provides an interpretable and statistically grounded protocol for assessing emotion-expression reliability in LLM-generated text within a fixed reference space rather than as a human gold measure of emotional truth. Full article
(This article belongs to the Special Issue Mathematical Foundations in NLP: Applications and Challenges)

28 pages, 4780 KB  
Article
Retrieval over Response: Large Language Model-Augmented Decision Strategies for Hierarchical Wildfire Risk Evaluation
by Yuheng Cheng, Yuchen Lin, Yanwei Wu, Lida Huang, Tao Chen, Wenguo Weng and Xiaole Zhang
Fire 2026, 9(4), 143; https://doi.org/10.3390/fire9040143 - 26 Mar 2026
Viewed by 622
Abstract
The Analytic Hierarchy Process (AHP) is widely used in Multi-Criteria Decision Analysis (MCDA), yet its strong reliance on expert judgment constrains its scalability and may introduce variability in weighting outcomes, particularly in high-stakes applications such as wildfire risk assessment. In this study, we investigate how Large Language Models (LLMs) can function as decision-support agents in an AHP-style hierarchical evaluation task derived from validated wildfire literature. Based on this structure, four representative LLM-assisted strategies are examined: Direct LLM Scoring (DLS), Multi-Model Debate Scoring (MDS), Full-Document Prompting (FDP), and Indicator-Guided Prompting (IGP). To evaluate their effectiveness, we benchmark LLM-generated rankings against expert-defined ground truth across 16 sub-criteria. Using the mean correlation coefficient R as the key evaluation metric, with reported values expressed as mean ± standard deviation across models: DLS shows no correlation with expert rankings (R = 0.009 ± 0.070), MDS yields marginal gains (R = 0.181), and FDP remains unstable (R = 0.081 ± 0.189). By contrast, IGP, which incorporates retrieval-informed structured prompting, shows the highest agreement with the expert reference among the four compared strategies (R = 0.598 ± 0.065), suggesting that structured contextual guidance may improve the performance of LLM-assisted weighting within the evaluated benchmark. This study suggests that, within the evaluated wildfire benchmark and the tested set of hosted LLMs, LLMs may serve as useful decision-support tools in MCDA tasks when guided by structured inputs or coordinated through multi-agent mechanisms. The proposed framework provides an interpretable basis for exploring LLM-assisted risk evaluation in the present wildfire benchmark, while further validation is needed before extending it to other environmental or safety-critical contexts. Full article
(This article belongs to the Special Issue Fire Risk Management and Emergency Prevention)
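The abstract does not specify which correlation coefficient R denotes. Assuming a rank correlation between LLM-derived and expert orderings of the sub-criteria, a Spearman sketch (illustrative weight vectors, no tie handling):

```python
def spearman_r(x, y):
    """Spearman rank correlation without tie handling:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, 1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    n = len(x)
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical sub-criterion weights: expert-defined vs. LLM-generated.
expert = [0.9, 0.7, 0.5, 0.3]
llm = [0.8, 0.9, 0.4, 0.2]
print(spearman_r(expert, llm))  # 0.8
```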

21 pages, 1305 KB  
Article
Spatial Encoding with Amplitude Modulation in Serial Flow Cytometry
by Eric W. Esch, Matthew DiSalvo, Megan A. Catterton, Paul N. Patrone and Gregory A. Cooksey
Sensors 2026, 26(5), 1697; https://doi.org/10.3390/s26051697 - 7 Mar 2026
Viewed by 405
Abstract
Serial flow cytometry was recently introduced as a method that can estimate measurement uncertainty (i.e., imprecision, the coefficient of variation of repeated measurements of individual particles) independent from population characteristics. Replication of light sources and detectors at multiple sites along a flow cytometer’s microchannel requires more equipment and can complicate detector synchronization. Here, we introduce amplitude modulation to encode each region of a serial cytometer with a unique carrier frequency, which enables demultiplexing of the combined signal incident on a single photodetector by fast Fourier transform (FFT) peak magnitude. To facilitate validation of detection, matching, and uncertainty quantification of fluorescence signals, we designed a microfluidic amplitude modulation (AM) serial flow cytometer that has ground truth detectors on individual regions (serial cytometry) in parallel with the combined channel detection for AM demultiplexing. With this report, we present metrics for event detection and dynamic range, prevalence and processing of overlapping detections, region-decoding accuracy, process yield, and uncertainty quantification on a brightness ladder of calibration microspheres. Despite being operated with reduced light intensities, the AM cytometer was capable of high-fidelity performance in comparison to conventional serial cytometry. For events above the detection limit, over 97% were analyzed. Both conventional and AM serial cytometers achieved median imprecisions in the range of 0.53% to 2.1% after outlier removal, which was well below the inherent intensity distribution of any of the microsphere subpopulations. Overall, AM cytometry supports uncertainty quantification and temporal analyses of serial cytometry data with a reduced number of photodetectors, which offers simplification of chip design with multiple measurement regions and wide-field detectors. Full article
(This article belongs to the Section Biomedical Sensors)
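The carrier-frequency demultiplexing idea can be illustrated with a single-bin DFT: probing the combined photodetector signal at each region's carrier frequency recovers that region's amplitude. The carrier frequencies and sample rate below are illustrative, not the instrument's 1 MHz configuration:

```python
import math

def tone_magnitude(signal, freq, fs):
    """Single-bin DFT: amplitude of `signal` at carrier `freq` (Hz),
    sampled at `fs` Hz. With bin-aligned carriers this demultiplexes
    the combined channel without computing a full FFT."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs)
             for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

fs = 10_000
t = [i / fs for i in range(1000)]
# Two regions, each tagged with its own carrier; amplitudes 1.0 and 0.5.
combined = [1.0 * math.sin(2 * math.pi * 500 * ti)
            + 0.5 * math.sin(2 * math.pi * 1500 * ti) for ti in t]
print(round(tone_magnitude(combined, 500, fs), 2))   # 1.0
print(round(tone_magnitude(combined, 1500, fs), 2))  # 0.5
```

Because both carriers sit on exact DFT bins over the window, they are orthogonal and each region's amplitude is recovered without crosstalk.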

36 pages, 7077 KB  
Article
Zero-Shot Vertebral Instance Segmentation on DICOM Spine Radiographs Using Promptable Segment Anything Models
by Alexander Sieradzki, Kamil Koszela, Szymon Koszykowski, Jakub Bednarek and Jarosław Kurek
J. Clin. Med. 2026, 15(5), 2042; https://doi.org/10.3390/jcm15052042 - 7 Mar 2026
Viewed by 411
Abstract
Background: Accurate vertebral instance segmentation on full-spine radiographs is essential for spinal parameter assessment, but supervised methods require costly instance-level annotations and may be sensitive to domain shift. Methods: We investigated whether promptable segmentation foundation models can generalize zero-shot to raw DICOM spine radiographs without task-specific training. We evaluated SAM-ViT-Huge, SAM2-Hiera-Large, and MedSAM-ViT-Base on 144 full-spine radiographs with 1309 annotated vertebral masks using a standardized pipeline for DICOM decoding, intensity normalization, automatic prompt generation, and instance-level evaluation. For each prompt, models produced three candidate masks. Performance was reported under an oracle protocol selecting the candidate with the highest IoU against ground truth and a model-score protocol selecting the candidate with the highest predicted IoU. Metrics included IoU, Dice, precision, recall, ASSD, and HD95. Results: The best configuration was SAM-ViT-Huge with rectangle prompting, reaching a mean IoU/Dice of 0.782/0.870 under oracle selection and 0.737/0.837 under model-score selection. SAM2-Hiera-Large with rectangle prompting followed (0.744/0.848 oracle; 0.699/0.815 model-score), ahead of MedSAM-ViT-Base (0.599/0.737 oracle; 0.387/0.499 model-score). Point prompting yielded consistently low overlap (IoU 0.224–0.319; Dice 0.276–0.414) despite high recall, indicating systematic over-segmentation and large boundary errors. Conclusions: Zero-shot vertebral instance segmentation on raw DICOM spine radiographs is feasible with promptable foundation models when prompts sufficiently constrain target extent. Rectangle prompting is clearly more effective than point prompting in this setting. Full article
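The IoU and Dice metrics reported above are set overlaps between predicted and ground-truth masks, related by Dice = 2·IoU/(1+IoU). A minimal sketch on toy pixel coordinate sets:

```python
def iou_dice(pred, gt):
    """IoU and Dice for two binary masks represented as sets of
    (row, col) pixel coordinates."""
    inter = len(pred & gt)
    iou = inter / len(pred | gt)
    dice = 2 * inter / (len(pred) + len(gt))
    return iou, dice

# Toy masks sharing 2 of their 3 pixels each.
pred = {(0, 0), (0, 1), (1, 0)}
gt = {(0, 1), (1, 0), (1, 1)}
iou, dice = iou_dice(pred, gt)
print(round(iou, 3), round(dice, 3))  # 0.5 0.667
```

The over-segmentation pattern noted for point prompting shows up here as high recall (ground-truth pixels covered) alongside low IoU, since the union is inflated by spurious predicted pixels.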

17 pages, 356 KB  
Article
“A Lie Can Run Around the World Before the Truth Has Got Its Boots on”: Exploring the Portrayal of Journalism in Terry Pratchett’s Fantasy Novel ‘The Truth’
by Carl Knauf
Journal. Media 2026, 7(1), 52; https://doi.org/10.3390/journalmedia7010052 - 5 Mar 2026
Viewed by 365
Abstract
The image of the journalist in popular culture has increasingly added value to metajournalistic discourse. These portrayals have the power to influence the audience’s perception of real-world journalists and the industry. However, most research analyzes portrayals in film and television. Using Terry Pratchett’s fantasy novel “The Truth,” this study explored how journalism, the media industry, and the journalist are portrayed in fantasy literature. Through a textual analysis of the novel, it was found that the work was a celebratory portrayal of journalism that shared a variety of themes found in film and television portrayals. Though its ethics were challenged throughout the novel, the Ankh-Morpork Times was devoted to the truth, served the watchdog role, and practiced social responsibility. Additionally, the novel’s historical rendition of the penny press highlighted the competitiveness of the media industry, how the public interest was challenged by political and corporate influence, and offered a portrayal of naïve news consumers. Lastly, it was found that William de Worde portrayed an ethical journalist and followed the common investigative journalist trope, but his character strayed from the usual editor, publisher, and male reporter tropes found in film and television. This study also suggests the possibility of looking at negative portrayals of journalism in fiction as a series of critical incidents in which journalism has difficulty fully repairing its paradigm. Full article
13 pages, 1312 KB  
Article
The First ¹H NMR Total Assignment and a Quantum-Mechanically Driven Full Spin Analysis of the Steroid Hormone Equilenin
by Vidak Raičević, Niko S. Radulović, Katarina Urumović, Nebojša Kladar and Branislava Srđenović Čonić
Magnetochemistry 2026, 12(3), 32; https://doi.org/10.3390/magnetochemistry12030032 - 5 Mar 2026
Viewed by 465
Abstract
Equilenin is an equine estrogen constituting the basis of a highly prescribed pharmaceutical preparation. Although routine ¹H and ¹³C NMR data for it have been reported, complete assignments and a full analysis of the proton spin system have not been established. In the present study, equilenin was examined by solution NMR in deuterochloroform, employing conventional spectral analysis in conjunction with quantum-mechanical techniques to achieve a ¹H iterative full spin analysis (HiFSA). The resulting model reproduces the experimental spectrum with high fidelity and permits the determination of true chemical shifts and scalar coupling constants for this complex spin system. In addition, the ¹³C NMR spectrum was fully assigned using a combination of one- and two-dimensional experiments. The obtained data constitute a robust spectroscopic reference set for equilenin and demonstrate the analytical value of the Cosmic Truth software for resolving spin systems in steroids. The results provide a valuable source of data for researchers seeking to implement NMR-based assays relevant to analytical, regulatory, and forensic applications. Full article

23 pages, 5494 KB  
Article
A Hybrid-Frequency Sampling Tactile Sensing System Based on a Flexible Piezoresistive Sensor Array: Design and Dynamic Loading Validation
by Zhenxing Wang and Xuan Dou
Sensors 2026, 26(5), 1559; https://doi.org/10.3390/s26051559 - 2 Mar 2026
Viewed by 405
Abstract
A hybrid-frequency sampling tactile sensing system based on a flexible piezoresistive sensor array is presented for reliable, real-time tactile perception under dynamic loading conditions. While recent studies have developed multi-channel tactile arrays, most systems remain limited by time-dependent drift in channel responses, inconsistent dynamic behavior, or insufficient temporal resolution under simultaneous loading. In this work, a system-level design integrating a flexible piezoresistive sensor array with a real-time data acquisition module is developed, incorporating a hybrid-frequency sampling strategy that reduces system complexity while preserving reliable dynamic response in key sensing channels. Register-Transfer Level (RTL) simulation verified that the hardware scheduler rigorously executed the deterministic scanning logic, demonstrating a strict one-to-one correspondence with the physical hardware signals. The array consists of 34 piezoresistive sensing nodes embedded in an elastomeric substrate. Under the implemented hybrid-frequency sampling scheme, the system achieves an overall effective acquisition bandwidth of approximately 36.9 kHz while maintaining repeatability better than 4.9% and robust mechanical durability under cyclic bending deformation. Dynamic loading validation was performed on a self-developed pressure-comparison platform that measures the normal contact force applied to the tactile surface, providing ground-truth data to verify that the voltages acquired by the proposed system accurately correspond to the actual applied force. Quantitative analysis shows a strong linear correlation (R² ≈ 0.98) between the e-skin outputs and the reference forces. The recorded responses exhibit clear intensity-dependent trends and good temporal correspondence among sensing nodes, successfully distinguishing tactile stimuli such as gentle tapping, moderate pressing, and firm contact. The system also captures dynamic tactile responses during finger stroking, showing characteristic multi-unit activation patterns under spatiotemporally varying contact conditions. Compared with previously reported tactile systems, which typically operate below 100 Hz, the proposed design achieves roughly a 10× enhancement in effective sampling capability while significantly reducing system complexity through hybrid-frequency sampling, thereby supporting reliable dynamic tactile sensing in multi-unit arrays. These results demonstrate that the proposed system provides a practical and scalable hardware platform for dynamic tactile sensing in robotics, human–machine interaction, and wearable tactile systems.
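The bandwidth trade-off behind hybrid-frequency sampling can be made concrete with a toy scheduler: a few key channels are sampled in every scan frame, while the remaining channels share the leftover frame slots in round-robin fashion. The slot counts, rates, and scheduling policy below are illustrative assumptions, not the paper's actual RTL scheduler.

```python
import math

def hybrid_schedule(n_nodes, key_nodes, frame_rate_hz, slots_per_frame):
    """Per-node effective sampling rates (Hz) under a hybrid-frequency scan.

    key_nodes get one slot in every frame; the remaining nodes are
    round-robined through the leftover slots, so each is revisited once
    every ceil(n_others / spare_slots) frames.
    """
    others = [i for i in range(n_nodes) if i not in key_nodes]
    spare = slots_per_frame - len(key_nodes)
    if spare <= 0:
        raise ValueError("not enough slots per frame for the key nodes")
    rates = {i: float(frame_rate_hz) for i in key_nodes}
    period = math.ceil(len(others) / spare)  # frames between visits
    for i in others:
        rates[i] = frame_rate_hz / period
    return rates
```

The aggregate acquisition rate equals roughly frame_rate × slots_per_frame regardless of how the slots are split, so concentrating slots on key channels buys their high temporal resolution at the cost of slower background scanning rather than extra hardware.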
(This article belongs to the Special Issue Advanced Flexible Electronics for Sensing Application)
15 pages, 890 KB  
Article
Incremental Recall: An Efficient Method for Estimating Egocentric Network Density
by Chad A. Davis and Caimiao Liu
Computation 2026, 14(3), 59; https://doi.org/10.3390/computation14030059 - 2 Mar 2026
Abstract
Accurate estimation of network density is central to egocentric social network analysis, yet existing survey-based methods require researchers to balance accuracy against participant burden and systematic recall bias. Traditional approaches, such as fixed-list name generators, tend to overrepresent salient ties. Although the more recent random sampling method yields better accuracy, it relies on exhaustive free recall, which can be cognitively demanding and impractical for researchers. In this study, we introduce and evaluate an alternative approach—incremental recall—that structures alter nomination across relationship categories to improve coverage of differing tie strengths while reducing respondent burden. Using a large-scale Monte Carlo simulation encompassing over 9 million egocentric networks, we compare incremental recall against traditional fixed-list recall and random sampling across a wide range of network sizes, compositions, and recall bias assumptions. Results show that the incremental recall method consistently outperforms traditional fixed-list recall and performs comparably to or better than random sampling under unbiased and moderately biased recall conditions. Performance advantages persist even when respondents are unable to provide the full number of alters specified by design. We further validate these findings using empirical egocentric network data from 103 participants. Treating observed networks as proxy ground truths, empirical results closely mirror the simulation patterns, confirming the robustness of incremental recall under real-world reporting conditions. These findings demonstrate that incremental recall addresses a central practical challenge in egocentric social network research: balancing feasibility and accuracy in density estimation. The proposed method maintains strong performance while substantially reducing respondent burden and simplifying administration for applied studies. 
For researchers conducting large-scale surveys in which network density is one of several measures, incremental recall provides a practical, validated alternative to exhaustive recall that remains robust to realistic reporting biases.
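The survey trade-off the study evaluates can be made concrete with a toy estimator: instead of eliciting all n(n−1)/2 alter–alter ties from a respondent, ask about a random sample of pairs and use the observed tie fraction as the density estimate. The sketch below implements that pair-sampling baseline, not the incremental-recall instrument itself; the function name and parameters are illustrative.

```python
import itertools
import random

def density_from_sampled_pairs(alters, knows, k, rng):
    """Estimate egocentric network density from k randomly sampled alter pairs.

    alters : list of alter identifiers named by the respondent
    knows  : callable (a, b) -> bool, whether the respondent reports a tie
    k      : number of pairs to ask about (capped at the total pair count)
    rng    : random.Random instance, for reproducible sampling
    """
    pairs = list(itertools.combinations(alters, 2))
    asked = rng.sample(pairs, min(k, len(pairs)))
    # Density = fraction of queried pairs reported as connected.
    return sum(1 for a, b in asked if knows(a, b)) / len(asked)
```

When k covers every pair this reduces to the exact density; smaller k trades estimator variance for respondent burden, which is precisely the axis along which the simulation compares fixed-list, random-sampling, and incremental-recall designs.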