Search Results (41)

Search Parameters:
Keywords = abstract artifact

19 pages, 5507 KB  
Article
RoboDeploy: A Metamodel-Driven Framework for Automated Multi-Host Docker Deployment of ROS 2 Systems in IoRT Environments
by Miguel Ángel Barcelona, Laura García-Borgoñón, Pablo Torner and Ariadna Belén Ruiz
Software 2026, 5(1), 1; https://doi.org/10.3390/software5010001 - 19 Dec 2025
Viewed by 350
Abstract
Robotic systems increasingly operate in complex and distributed environments, where software deployment and orchestration pose major challenges. This paper presents a model-driven approach that automates the containerized deployment of robotic systems in Internet of Robotic Things (IoRT) environments. Our solution integrates Model-Driven Engineering (MDE) with containerization technologies to improve scalability, reproducibility, and maintainability. A dedicated metamodel introduces high-level abstractions for describing deployment architectures, repositories, and container configurations. A web-based tool enables collaborative model editing, while an external deployment automator generates validated Docker and Compose artifacts to support seamless multi-host orchestration. We validated the approach through real-world experiments, which show that the method effectively automates deployment workflows, ensures consistency across development and production environments, and significantly reduces configuration effort. These results demonstrate that model-driven automation can bridge the gap between Software Engineering (SE) and robotics, enabling Software-Defined Robotics (SDR) and supporting scalable IoRT applications.
(This article belongs to the Topic Software Engineering and Applications)
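
To make the model-to-artifact step concrete, here is a minimal Python sketch of the kind of generation a deployment automator performs. The model schema (`deployment_model`) and the rendering function are invented for illustration; they are not the authors' actual metamodel or tooling.

```python
# Hypothetical sketch: render one Compose service per modeled ROS 2 node.
# The model below is an illustrative stand-in for the paper's metamodel.
deployment_model = {
    "host": "robot-edge-01",
    "ros_distro": "humble",
    "nodes": [
        {"name": "lidar_driver", "package": "velodyne_driver"},
        {"name": "nav_stack", "package": "nav2_bringup"},
    ],
}

def generate_compose(model: dict) -> str:
    """Turn the high-level deployment model into a docker-compose artifact."""
    lines = ["services:"]
    for node in model["nodes"]:
        lines += [
            f"  {node['name']}:",
            f"    image: ros:{model['ros_distro']}",
            f"    command: ros2 run {node['package']} {node['name']}",
            "    network_mode: host",  # simple choice for DDS discovery
            "    restart: unless-stopped",
        ]
    return "\n".join(lines)

print(generate_compose(deployment_model))
```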

23 pages, 1537 KB  
Review
Perspectives on Safety for Autonomous Vehicles
by Rahul Razdan, Raivo Sell, M. Ilhan Akbas and Mahesh Menase
Electronics 2025, 14(22), 4500; https://doi.org/10.3390/electronics14224500 - 18 Nov 2025
Viewed by 1199
Abstract
Autonomy is enabled by the close connection of traditional mechanical systems with information technology. Historically, both communities have built norms for validation and verification (V&V), but with very different properties for safety and associated legal liability. Thus, combining the two in the context of autonomy has exposed unresolved challenges for V&V, and without a clear V&V structure, demonstrating safety is very difficult. Today, both traditional mechanical safety and information technology rely heavily on process-oriented mechanisms to demonstrate safety. In contrast, a third community, the semiconductor industry, has achieved remarkable success by inserting design artifacts that enable formally defined mathematical abstractions. These abstractions, combined with associated software tooling (Electronic Design Automation), provide critical properties for scaling the V&V task and effectively make an inductive argument for system correctness from well-defined component compositions. This article reviews the current methods in the mechanical and IT spaces and the current limitations of cyber-physical V&V, identifies open research questions, and proposes three directions for progress inspired by semiconductors: (i) guardian-based safety architectures, (ii) functional decompositions that preserve physical constraints, and (iii) abstraction mechanisms that enable scalable virtual testing. These perspectives highlight how principles from semiconductor V&V can inform a more rigorous and scalable safety framework for autonomous systems.
(This article belongs to the Special Issue Automated Driving Systems: Latest Advances and Prospects)
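
Direction (i) can be pictured with a toy runtime guardian: an independent monitor that defers to the primary planner unless a formally checkable envelope is violated. The time-gap envelope and the numbers below are invented for illustration, not the authors' proposal.

```python
# Sketch of a guardian-based safety architecture: the guardian only overrides
# the planner's command when an (invented) time-gap envelope is at risk.
def guardian(speed_mps: float, gap_m: float, cmd_accel: float) -> float:
    MIN_TIME_GAP_S = 2.0                    # illustrative safety envelope
    time_gap = gap_m / max(speed_mps, 0.1)  # seconds to close the gap ahead
    if time_gap < MIN_TIME_GAP_S and cmd_accel > 0:
        return -3.0                         # guardian takes over: brake
    return cmd_accel                        # envelope holds: defer to planner

print(guardian(speed_mps=20, gap_m=25, cmd_accel=1.5))  # -3.0 (gap too small)
print(guardian(speed_mps=20, gap_m=80, cmd_accel=1.5))  # 1.5 (pass through)
```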

39 pages, 5351 KB  
Review
Non-Invasive Techniques for fECG Analysis in Fetal Heart Monitoring: A Systematic Review
by Sanghamitra Subhadarsini Dash and Malaya Kumar Nath
Signals 2025, 6(4), 61; https://doi.org/10.3390/signals6040061 - 4 Nov 2025
Viewed by 2258
Abstract
An electrocardiogram (ECG) is a vital diagnostic tool that provides crucial insights into the heart rate, cardiac positioning, origin of electrical potentials, propagation of depolarization waves, and the identification of rhythm and conduction irregularities. Analysis of ECG is essential, especially during pregnancy, where monitoring fetal health is critical. Fetal electrocardiography (fECG) has emerged as a significant modality for evaluating the developmental status and well-being of the fetal heart throughout gestation, facilitating early detection of congenital heart diseases (CHDs) and other cardiac abnormalities. Typically, fECG signals are acquired non-invasively through electrodes placed on the maternal abdomen, which reduces risk and enhances user convenience. However, these signals are often contaminated by various sources, including the maternal electrocardiogram (mECG), electromagnetic interference from power lines, baseline drift, motion artifacts, uterine contractions, and high-frequency noise. Such disturbances impair signal fidelity and threaten diagnostic accuracy. This scoping review, adhering to PRISMA-ScR guidelines, aims to highlight the methods for signal acquisition, existing databases for validation, and a range of algorithms proposed by researchers for improving the quality of fECG. A comprehensive examination of 157,000 uniquely identified publications from Google Scholar, PubMed, and Web of Science has resulted in the selection of 6210 records through a systematic screening of titles, abstracts, and keywords. Subsequently, 141 full-text articles were considered eligible for inclusion in this study (from 1950 to 2026). By critically evaluating established techniques in the current literature, a strategy is proposed for analyzing fECG and calculating heart rate variability (HRV) for identifying fetal heart-related abnormalities. Advances in these methodologies could significantly aid in the diagnosis of fetal heart diseases, supporting timely clinical interventions and prevention.
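
To make the noise-removal problem concrete, here is a minimal preprocessing sketch covering two of the listed disturbances, powerline interference (notch filter) and baseline drift plus high-frequency noise (bandpass), using SciPy. The cutoffs are illustrative assumptions, and mECG cancellation, the harder problem, would require further steps such as template subtraction.

```python
# Hedged preprocessing sketch for an abdominal recording; parameters invented.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 1000                                   # Hz, assumed sampling rate

def preprocess(signal: np.ndarray) -> np.ndarray:
    b, a = iirnotch(w0=50, Q=30, fs=fs)     # remove 50 Hz powerline hum
    signal = filtfilt(b, a, signal)
    b, a = butter(4, [1, 100], btype="bandpass", fs=fs)  # drift + HF noise
    return filtfilt(b, a, signal)

noisy = np.random.randn(10 * fs)            # stand-in for a real recording
print(preprocess(noisy).shape)
```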

24 pages, 2881 KB  
Article
Wear Leveling in SSDs Considered Harmful: A Case for Capacity Variance
by Ziyang Jiao and Biyuan Yang
Electronics 2025, 14(21), 4169; https://doi.org/10.3390/electronics14214169 - 25 Oct 2025
Viewed by 1532
Abstract
The trend of decreasing endurance of flash memory makes the overall lifetime of SSDs more sensitive to the effects of wear leveling. Under these circumstances, we observe that existing wear-leveling techniques exhibit anomalous behavior under workloads without clear access skew or under dynamic access patterns and produce high write amplification, as high as 5.4×, negating their intended benefits. We argue that wear leveling is an artifact of maintaining the fixed-capacity abstraction of a storage device, and it becomes unnecessary if the exported capacity of the SSD is allowed to reduce gracefully. We show that this idea of capacity variance extends the lifetime of the SSD, allowing up to 2.94× more writes under real workloads.
(This article belongs to the Special Issue Advances in Semiconductor Devices and Applications)
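
The capacity-variance idea can be sketched in a few lines: rather than spending extra writes to equalize wear, retire exhausted blocks and let the exported capacity shrink. This is a toy model for intuition only, not the paper's FTL.

```python
# Toy capacity-variant device: a block that hits its erase limit is retired
# and exported capacity shrinks; no wear-leveling data migration ever occurs.
ERASE_LIMIT = 1000  # illustrative per-block endurance

class CapacityVariantSSD:
    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks
        self.retired = set()

    @property
    def exported_blocks(self) -> int:
        return len(self.erase_counts) - len(self.retired)

    def erase(self, block: int) -> None:
        if block in self.retired:
            raise ValueError("block already retired")
        self.erase_counts[block] += 1
        if self.erase_counts[block] >= ERASE_LIMIT:
            self.retired.add(block)       # capacity gracefully reduces

ssd = CapacityVariantSSD(num_blocks=4)
for _ in range(ERASE_LIMIT):
    ssd.erase(0)                          # heavily skewed workload
print(ssd.exported_blocks)                # 3: no migration writes were spent
```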

22 pages, 3708 KB  
Article
Faithful Narratives from Complex Conceptual Models: Should Modelers or Large Language Models Simplify Causal Maps?
by Tyler J. Gandee and Philippe J. Giabbanelli
Mach. Learn. Knowl. Extr. 2025, 7(4), 116; https://doi.org/10.3390/make7040116 - 7 Oct 2025
Viewed by 955
Abstract
(1) Background: Comprehensive conceptual models can result in complex artifacts, consisting of many concepts that interact through multiple mechanisms. This complexity can be acceptable and even expected when generating rich models, for instance to support ensuing analyses that find central concepts or decompose models into parts that can be managed by different actors. However, complexity can become a barrier when the conceptual model is used directly by individuals. A ‘transparent’ model can support learning among stakeholders (e.g., in group model building) and it can motivate the adoption of specific interventions (i.e., using a model as evidence base). Although advances in graph-to-text generation with Large Language Models (LLMs) have made it possible to transform conceptual models into textual reports consisting of coherent and faithful paragraphs, turning a large conceptual model into a very lengthy report would only displace the challenge. (2) Methods: We experimentally examine the implications of two possible approaches: asking the text generator to simplify the model, either via abstractive (LLMs) or extractive summarization, or simplifying the model through graph algorithms and then generating the complete text. (3) Results: We find that the two approaches have similar scores on text-based evaluation metrics including readability and overlap scores (ROUGE, BLEU, Meteor), but faithfulness can be lower when the text generator decides on what is an interesting fact and is tasked with creating a story. These automated metrics capture textual properties, but they do not assess actual user comprehension, which would require an experimental study with human readers. (4) Conclusions: Our results suggest that graph algorithms may be preferable to support modelers in scientific translations from models to text while minimizing hallucinations.
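
The "simplify the model with graph algorithms, then generate the full text" option can be sketched as below. The toy causal map, the betweenness-centrality criterion, and the cutoff are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch: keep only the most central concepts of a causal map before
# handing it to a text generator. Uses networkx; the map is invented.
import networkx as nx

causal_map = nx.DiGraph()
causal_map.add_edges_from([
    ("stress", "overeating"), ("overeating", "weight gain"),
    ("weight gain", "stress"), ("advertising", "overeating"),
    ("exercise", "weight gain"),
])

def simplify(graph: nx.DiGraph, keep: int) -> nx.DiGraph:
    """Return the subgraph induced by the `keep` most central concepts."""
    centrality = nx.betweenness_centrality(graph)
    top = sorted(centrality, key=centrality.get, reverse=True)[:keep]
    return graph.subgraph(top).copy()

small = simplify(causal_map, keep=3)
print(sorted(small.nodes))  # the central feedback loop survives the cut
```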

29 pages, 10807 KB  
Article
From Abstraction to Realization: A Diagrammatic BIM Framework for Conceptual Design in Architectural Education
by Nancy Alassaf
Sustainability 2025, 17(19), 8853; https://doi.org/10.3390/su17198853 - 3 Oct 2025
Viewed by 1458
Abstract
The conceptual design phase in architecture establishes the foundation for subsequent design decisions and influences up to 80% of a building’s lifecycle environmental impact. While Building Information Modeling (BIM) demonstrates transformative potential for sustainable design, its application during conceptual design remains constrained by perceived technical complexity and limited support for abstract thinking. This research examines how BIM tools can facilitate conceptual design through diagrammatic reasoning, thereby bridging technical capabilities with creative exploration. A mixed-methods approach was employed to develop and validate a Diagrammatic BIM (D-BIM) framework. It integrates diagrammatic reasoning, parametric modeling, and performance evaluation within BIM environments. The framework defines three core relationships—dissection, articulation, and actualization—which enable transitions from abstract concepts to detailed architectural forms in Revit’s modeling environments. Using Richard Meier’s architectural language as a structured test case, a 14-week quasi-experimental study with 19 third-year architecture students assessed the framework’s effectiveness through pre- and post-surveys, observations, and artifact analysis. Statistical analysis revealed significant improvements (p < 0.05) with moderate to large effect sizes across all measures, including systematic design thinking, diagram utilization, and academic self-efficacy. Students demonstrated enhanced design iteration, abstraction-to-realization transitions, and performance-informed decision-making through quantitative and qualitative assessments during early design stages. However, the study’s limitations include a small, single-institution sample, the absence of a control group, a focus on a single architectural language, and the exploratory integration of environmental analysis tools. Findings indicate that the framework repositions BIM as a cognitive design environment that supports creative ideation while integrating structured design logic and performance analysis. The study advances Education for Sustainable Development (ESD) by embedding critical, systems-based, and problem-solving competencies, demonstrating BIM’s role in sustainability-focused early design. This research provides preliminary evidence that conceptual design and BIM are compatible when supported with diagrammatic reasoning, offering a foundation for integrating competency-based digital pedagogy that bridges creative and technical dimensions of architectural design.
(This article belongs to the Special Issue Advances in Engineering Education and Sustainable Development)

25 pages, 562 KB  
Article
VeriFlow: A Framework for the Static Verification of Web Application Access Control via Policy-Graph Consistency
by Tao Zhang, Fuzhong Hao, Yunfan Wang, Bo Zhang and Guangwei Xie
Electronics 2025, 14(18), 3742; https://doi.org/10.3390/electronics14183742 - 22 Sep 2025
Viewed by 1368
Abstract
The evolution of industrial automation toward Industry 3.0 and 4.0 has driven the emergence of Industrial Edge-Cloud Platforms, which increasingly depend on web interfaces for managing and monitoring critical operational technology. This convergence introduces significant security risks, particularly from Broken Access Control (BAC)—a vulnerability consistently ranked as the top web application risk by the Open Web Application Security Project (OWASP). BAC flaws in industrial contexts can lead not only to data breaches but also to disruptions of physical processes. To address this urgent need for robust web-layer defense, this paper presents VeriFlow, a static verification framework for access control in web applications. VeriFlow reformulates access control verification as a consistency problem between two core artifacts: (1) a Formal Access Control Policy (P), which declaratively defines intended permissions, and (2) a Navigational Graph, which models all user-driven UI state transitions. By annotating the graph with policy P, VeriFlow verifies a novel Path-Permission Safety property, ensuring that no sequence of legitimate UI interactions can lead a user from an authorized state to an unauthorized one. A key technical contribution is a static analysis method capable of extracting navigational graphs directly from the JavaScript bundles of Single-Page Applications (SPAs), circumventing the limitations of traditional dynamic crawlers. In empirical evaluations, VeriFlow outperformed baseline tools in vulnerability detection, demonstrating its potential to deliver strong security guarantees that are provable within its abstracted navigational model. By formally checking policy-graph consistency, it systematically addresses a class of vulnerabilities often missed by dynamic tools, though its effectiveness is subject to the model-reality gap inherent in static analysis.
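
The Path-Permission Safety property can be illustrated with a small reachability check over a policy-annotated navigational graph. Everything below (the graph, the policy, the role) is invented for illustration; the paper's verification operates on graphs extracted from SPA JavaScript bundles.

```python
# Sketch of a path-permission check: walk the navigational graph and report
# any UI state a role can reach through legitimate clicks without permission.
from collections import deque

# state -> states reachable by one UI interaction (invented example)
nav_graph = {
    "login": ["dashboard"],
    "dashboard": ["reports", "admin_panel"],
    "reports": ["export"],
    "admin_panel": ["user_mgmt"],
    "export": [], "user_mgmt": [],
}
# policy P: permission required to render each state
policy = {"login": "anon", "dashboard": "user", "reports": "user",
          "export": "user", "admin_panel": "admin", "user_mgmt": "admin"}

def unsafe_states(role_perms: set, start: str = "login") -> list:
    """States reachable via the UI whose required permission is missing."""
    seen, queue, violations = {start}, deque([start]), []
    while queue:
        state = queue.popleft()
        if policy[state] not in role_perms | {"anon"}:
            violations.append(state)  # the path leading here is the bug
            continue
        for nxt in nav_graph[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return violations

print(unsafe_states({"user"}))  # ['admin_panel']: UI leaks an admin route
```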

28 pages, 1010 KB  
Article
Figurative Imagery and Religious Discourse in Al-Mufaḍḍaliyyāt
by Ula Aweida
Religions 2025, 16(9), 1165; https://doi.org/10.3390/rel16091165 - 10 Sep 2025
Viewed by 2053
Abstract
This study examines the al-Mufaḍḍaliyyāt anthology as a foundational corpus wherein pre-Islamic and early Islamic Arabic poetry emerged not only as a cultural artifact but as a generative locus for theological reflection. Through a close reading of selected poems and nuanced engagement with its figurative language, specifically metaphor, personification, and symbolic narrative, the research situates poetry as a mode of epistemic inquiry that articulates religious meaning alongside Qurʾānic revelation. Drawing on ʿAbd al-Qāhir al-Jurjānī’s theory of semantic structure and metaphor, in dialogue with Paul Ricoeur’s conception of metaphor as imaginative cognition, the study proposes that poetic discourse operates as a site of “imaginative theology”, i.e., a space wherein the abstract is rendered sensorially legible and metaphysical concepts are dramatized in affective and embodied terms. The analysis reveals how key Qurʾānic themes, including divine will, mortality, and ethical restraint, are anticipated, echoed, and reconfigured through poetic imagery. Thus, al-Mufaḍḍaliyyāt is not merely a literary corpus vis-à-vis Islamic scripture but also functions as an active interlocutor in the formation of early Islamic moral and theological imagination. This interdisciplinary inquiry contributes to broader discussions on the interpenetration of poetics and theology as well as on the cognitive capacities of literature to shape religious consciousness.
23 pages, 534 KB  
Article
LLM-Powered, Expert-Refined Causal Loop Diagramming via Pipeline Algebra
by Kirk Reinholtz, Kamran Eftekhari Shahroudi and Svetlana Lawrence
Systems 2025, 13(9), 784; https://doi.org/10.3390/systems13090784 - 7 Sep 2025
Viewed by 2238
Abstract
Building a causal-loop diagram (CLD) is central to system-dynamics modeling but demands domain insight, the mastery of CLD notation, and the ability to juggle AI, mathematical, and execution tools. Pipeline Algebra (PA) reduces that burden by treating each step—LLM prompting, symbolic or numeric computation, algorithmic transforms, and cloud execution—as a typed, idempotent operator in one algebraic expression. Operators are intrinsically idempotent (implemented through memoization), so every intermediate result is re-used verbatim, yielding bit-level reproducibility even when individual components are stochastic. Unlike DAG (directed acyclic graph) frameworks such as Airflow or Snakemake, which force analysts to wire heterogeneous APIs together with glue code, PA’s compact notation lets them think in the problem space, rather than in workflow plumbing—echoing Iverson’s dictum that “notation is a tool of thought.” We demonstrated PA on a peer-reviewed study of novel-energy commercialization. Starting only from the article’s abstract, an AI-extracted problem statement, and an AI-assisted web search, PA produced an initial CLD. A senior system-dynamics practitioner identified two shortcomings: missing best-practice patterns and lingering dependence on the problem statement. A one-hour rewrite that embedded best-practice rules, used iterative prompting, and removed the problem statement yielded a diagram that conformed to accepted conventions and better captured the system. The results suggest that earlier gaps were implementation artifacts, not flaws in PA’s design; quantitative validation will be the subject of future work.
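
The two core mechanics the abstract describes, algebraic composition of typed operators and memoization-backed idempotence, can be sketched in a few lines of Python. The `Op` class and the toy pipeline are invented for illustration and are not PA's actual notation.

```python
# Sketch: operators compose with | into one expression, and memoization makes
# each step effectively idempotent, so reruns reuse cached results verbatim.
from functools import lru_cache

class Op:
    """Wrap a function so ops compose algebraically and cache their results."""
    def __init__(self, fn):
        self.fn = lru_cache(maxsize=None)(fn)
    def __or__(self, other: "Op") -> "Op":
        return Op(lambda x: other.fn(self.fn(x)))
    def __call__(self, x):
        return self.fn(x)

extract_claims = Op(lambda text: tuple(s.strip() for s in text.split(".") if s.strip()))
count_claims = Op(len)

pipeline = extract_claims | count_claims
print(pipeline("Demand rises. Prices follow. Supply reacts."))  # 3
print(pipeline("Demand rises. Prices follow. Supply reacts."))  # cache hit
```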

17 pages, 2559 KB  
Systematic Review
Optical Coherence Tomography Angiography (OCTA) Characteristics of Acute Retinal Arterial Occlusion: A Systematic Review
by Saud Aljohani
Healthcare 2025, 13(16), 2056; https://doi.org/10.3390/healthcare13162056 - 20 Aug 2025
Viewed by 2881
Abstract
Purpose: To systematically review the evidence regarding the characteristics of Optical Coherence Tomography Angiography (OCTA) in acute retinal arterial occlusion (RAO), with a particular focus on vascular alterations across the superficial and deep capillary plexuses, choroid, and peripapillary regions. Methods: A comprehensive literature search was performed across PubMed, Web of Science, Scopus, EMBASE, Google Scholar, and the Cochrane Database up to April 2025. The search terms included “Optical coherence tomography angiography,” “OCTA,” “Retinal arterial occlusion,” “Central retinal artery occlusion,” and “Branch retinal artery occlusion.” Studies were included if they evaluated the role of OCTA in diagnosing or assessing acute RAO. Case reports, conference abstracts, and non-English articles were excluded. Two reviewers independently conducted the study selection and data extraction. The methodological quality of the included studies was assessed using the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) tool. Results: The initial search yielded 457 articles, from which 10 studies were ultimately included in the final analysis after a rigorous screening process excluding duplicates, non-English publications, and ineligible articles based on title, abstract, or full-text review. The included studies consistently demonstrated that OCTA is a valuable, noninvasive modality for evaluating microvascular changes in RAO. Key OCTA findings in acute RAO include significant perfusion deficits and reduced vessel density in both the superficial capillary plexus (SCP) and deep capillary plexus (DCP). Several studies noted more pronounced involvement of the SCP compared to the DCP. OCTA parameters, such as vessel density in the macular region, have been found to correlate with visual acuity, suggesting a prognostic value. While findings regarding the foveal avascular zone (FAZ) were mixed, the peripapillary area frequently showed reduced vessel density. Conclusion: Acute RAO is an ocular emergency that causes microvascular ischemic changes detectable by OCTA. This review establishes OCTA as a significant noninvasive tool for diagnosing, monitoring, and prognosticating RAO. It effectively visualizes perfusion deficits that correlate with clinical outcomes. However, limitations such as susceptibility to motion artifacts, segmentation errors, and the lack of standardized normative data must be considered. Future standardization of OCTA protocols and analysis is essential to enhance its clinical application in managing RAO.

53 pages, 915 KB  
Review
Neural Correlates of Huntington’s Disease Based on Electroencephalography (EEG): A Mechanistic Review and Discussion of Excitation and Inhibition (E/I) Imbalance
by James Chmiel, Jarosław Nadobnik, Szymon Smerdel and Mirela Niedzielska
J. Clin. Med. 2025, 14(14), 5010; https://doi.org/10.3390/jcm14145010 - 15 Jul 2025
Cited by 4 | Viewed by 2605
Abstract
Introduction: Huntington’s disease (HD) disrupts cortico-striato-thalamocortical circuits decades before clinical onset. Electroencephalography (EEG) offers millisecond temporal resolution, low cost, and broad accessibility, yet its mechanistic and biomarker potential in HD remains underexplored. We conducted a mechanistic review to synthesize half a century of EEG findings, identify reproducible electrophysiological signatures, and outline translational next steps. Methods: Two independent reviewers searched PubMed, Scopus, Google Scholar, ResearchGate, and the Cochrane Library (January 1970–April 2025) using the terms “EEG” OR “electroencephalography” AND “Huntington’s disease”. Clinical trials published in English that reported raw EEG (not ERP-only) in human HD gene carriers were eligible. Abstract/title screening, full-text appraisal, and cross-reference mining yielded 22 studies (~700 HD recordings, ~600 controls). We extracted sample characteristics, acquisition protocols, spectral/connectivity metrics, and neuroclinical correlations. Results: Across diverse platforms, a consistent spectral trajectory emerged: (i) presymptomatic carriers show a focal 7–9 Hz (low-alpha) power loss that scales with CAG repeat length; (ii) early-manifest patients exhibit widespread alpha attenuation, delta–theta excess, and a flattened anterior-posterior gradient; (iii) advanced disease is characterized by global slow-wave dominance and low-voltage tracings. Source-resolved studies reveal early alpha hypocoherence and progressive delta/high-beta hypersynchrony, microstate shifts (A/B ↑, C/D ↓), and rising omega complexity. These electrophysiological changes correlate with motor burden, cognitive slowing, sleep fragmentation, and neurovascular uncoupling, and achieve 80–90% diagnostic accuracy in shallow machine-learning pipelines. Conclusions: EEG offers a coherent, stage-sensitive window on HD pathophysiology—from early thalamocortical disinhibition to late network fragmentation—and fulfills key biomarker criteria. Translation now depends on large, longitudinal, multi-center cohorts with harmonized high-density protocols, rigorous artifact control, and linkage to clinical milestones. Such infrastructure will enable the qualification of alpha-band restoration, delta-band hypersynchrony, and neurovascular coupling as pharmacodynamic readouts, fostering precision monitoring and network-targeted therapy in Huntington’s disease.
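
As a concrete example of the earliest marker, the 7–9 Hz (low-alpha) power loss, relative band power can be estimated from a recording with a Welch periodogram. The synthetic signal and band edges below are illustrative, not taken from any of the reviewed studies.

```python
# Sketch: relative 7-9 Hz power from a (synthetic) EEG trace via Welch PSD.
import numpy as np
from scipy.signal import welch

fs = 250                                    # Hz, assumed sampling rate
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(lo: float, hi: float) -> float:
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()                  # uniform bins, so a sum suffices

print(f"relative 7-9 Hz power: {band_power(7, 9) / band_power(1, 45):.2f}")
```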

12 pages, 755 KB  
Systematic Review
Dual-Energy Computed Tomography, a New Metal Artifact Reduction Technique for Total Hip Arthroplasty: Is There a Light in the Darkness?
by Andrea Coppola, Luigi Tessitore, Chiara Macina, Filippo Piacentino, Federico Fontana, Andrea Pautasso, Velio Ascenti, Roberto Minici, Domenico Laganà, Tommasa Catania, Giorgio Ascenti, Massimo Venturini and Fabio D’Angelo
J. Clin. Med. 2025, 14(7), 2258; https://doi.org/10.3390/jcm14072258 - 26 Mar 2025
Viewed by 1652
Abstract
Background/Objectives: To evaluate dual-energy computed tomography (DECT) in comparison with conventional CT for periprosthetic bone and surrounding soft tissues in total hip arthroplasty (THA). Methods: Two authors independently screened titles and abstracts for eligibility, discussing any disagreements with a third author for final decisions. The articles were categorized into two main groups: those focusing on periprosthetic bone and those on blood vessels or pelvic organs. Results: A total of 37 articles were selected to be included in this systematic review. Conclusions: Our systematic review reveals significant variability in the use of DECT for periprosthetic bone and soft tissue imaging, due to differences in equipment, protocols, and clinical settings. While many studies indicate that virtual monochromatic imaging (VMI), especially when combined with metal artifact reduction (MAR), improves image quality, there is no consensus on optimal energy levels. Future research should focus on large-scale, multicenter studies with standardized protocols to compare reconstruction techniques, energy levels, and combined MAR-VMI use.
(This article belongs to the Section Nuclear Medicine & Radiology)

31 pages, 860 KB  
Systematic Review
Radiofrequency Echographic Multi Spectrometry—A Novel Tool in the Diagnosis of Osteoporosis and Prediction of Fragility Fractures: A Systematic Review
by Elena Icătoiu, Andreea-Iulia Vlădulescu-Trandafir, Laura-Maria Groșeanu, Florian Berghea, Claudia-Oana Cobilinschi, Claudia-Gabriela Potcovaru, Andra-Rodica Bălănescu and Violeta-Claudia Bojincă
Diagnostics 2025, 15(5), 555; https://doi.org/10.3390/diagnostics15050555 - 25 Feb 2025
Cited by 8 | Viewed by 2711
Abstract
Background/Objectives: Given the significant economic and social burden of osteoporosis, there is growing interest in developing an efficient alternative to the traditional dual-energy X-ray absorptiometry (DXA). Radiofrequency Echographic Multi Spectrometry (REMS) is an innovative, non-ionizing imaging technique that recently emerged as a viable tool to diagnose osteoporosis and estimate the fragility fracture risk. Nevertheless, its clinical use is still limited due to its novelty and continuing uncertainty about its long-term performance. Methods: In order to evaluate the accuracy of the REMS, a systematic review of the English-language literature was conducted. Three databases were searched for relevant publications from 1 January 2015 until 1 December 2024 using the keyword combinations “(radiofrequency echographic multi spectrometry OR REMS) AND (dual-energy X-ray absorptiometry OR DXA)”. The initial search yielded 602 candidate articles. After screening the titles and abstracts following the eligibility criteria, 17 publications remained for full-text evaluation. Results: The reviewed studies demonstrated strong diagnostic agreement between REMS and DXA. Additionally, REMS showed enhanced diagnostic capabilities in cases where lumbar bone mineral density measurements by DXA were impaired by artifacts such as vertebral fractures, deformities, osteoarthritis, or vascular calcifications. REMS exhibited excellent intra-operator repeatability and precision, comparable to or exceeding the reported performance of DXA. The fragility score (FS), a parameter reflecting bone quality and structural integrity, effectively discriminated between fractured and non-fractured patients. Moreover, REMS proved to be a radiation-free option for bone health monitoring in radiation-sensitive populations or patients requiring frequent imaging to assess fracture risk. Conclusions: This study underscores the robustness of REMS as a reliable method for diagnosing and monitoring osteoporosis and evaluating bone fragility via the FS. It also identifies critical knowledge gaps and emphasizes the need for further prospective studies to validate and expand the clinical applications of REMS across diverse patient populations.
(This article belongs to the Collection Biomedical Optics: From Technologies to Applications)

16 pages, 632 KB  
Article
Distributed Software Build Assurance for Software Supply Chain Integrity
by Ken Lew, Arijet Sarker, Simeon Wuthier, Jinoh Kim, Jonghyun Kim and Sang-Yoon Chang
Appl. Sci. 2024, 14(20), 9262; https://doi.org/10.3390/app14209262 - 11 Oct 2024
Viewed by 2723
Abstract
Computing and networking are increasingly implemented in software. We design and build a software build assurance scheme that detects injections or modifications at the various steps of the software supply chain, including source code, compiling, and distribution. Building on reproducible builds and the software bill of materials (SBOM), our work is distinguished from previous research by assuring multiple software artifacts across the software supply chain. Reproducible builds, in particular, enable our scheme, as it requires the software materials/artifacts to be consistent across machines with the same operating system/specifications. Furthermore, we use blockchain to deliver the proof reference, which enables our scheme to be distributed so that the assurance beneficiary and verifier are the same, i.e., the node downloading the software verifies its own materials, artifacts, and outputs. Blockchain also significantly improves the assurance efficiency. We first describe and explain our scheme using abstraction and then implement it to assure Ethereum as the target software, providing a concrete proof-of-concept implementation, validation, and experimental analyses. Thanks to the use of blockchain, our scheme achieves significant performance gains over relying on a centralized server (e.g., verification is two to three orders of magnitude quicker) and adds only small overheads (e.g., generating and verifying a proof takes approximately one second, which is two orders of magnitude less than the software download or build processes).
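
At its core, the verification step reduces to comparing digests of locally built artifacts against a shared proof reference, which is possible because reproducible builds make outputs byte-identical across same-spec machines. The sketch below simulates this with a dict standing in for the blockchain-delivered reference; paths and data are invented.

```python
# Sketch: a downloading node verifies its *own* reproducible-build artifacts
# against a published digest reference (here a plain dict, not a blockchain).
import hashlib, os, tempfile

def artifact_digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Simulate a reproducible build output and its published reference digest.
build_dir = tempfile.mkdtemp()
artifact = os.path.join(build_dir, "geth.bin")
with open(artifact, "wb") as f:
    f.write(b"deterministic build output")

reference = {artifact: hashlib.sha256(b"deterministic build output").hexdigest()}

def verify(reference: dict) -> bool:
    """The verifier and beneficiary are the same node; no central server."""
    return all(artifact_digest(path) == digest for path, digest in reference.items())

print(verify(reference))  # True; any injected modification flips this
```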

11 pages, 507 KB  
Article
The Impact of a Computing Curriculum Accessible to Students with ASD on the Development of Computing Artifacts
by Abdu Arslanyilmaz, Margaret L. Briley, Gregory V. Boerio, Katie Petridis, Ramlah Ilyas and Feng Yu
Knowledge 2024, 4(1), 85-95; https://doi.org/10.3390/knowledge4010005 - 5 Mar 2024
Cited by 1 | Viewed by 2147
Abstract
There has been no study examining the effectiveness of an accessible computing curriculum for students with autism spectrum disorder (ASD) on their learning of computational thinking concepts (CTCs): flow control, data representation, abstraction, user interactivity, synchronization, parallelism, and logic. This study aims to investigate the effects of an accessible computing curriculum for students with ASD on their learning of CTCs as measured by the scores of 312 computing artifacts developed by two groups of students with ASD. Conducted among 21 seventh-grade students with ASD (10 in the experimental group and 11 in the control), this study involved collecting data on the computing projects of these students over 24 instructional sessions. Group classification was considered the independent variable, and computing project scores were set as the dependent variables. The results showed that the original curriculum was statistically significantly more effective than the accessible one for students’ learning of logic when all seven CTCs were examined as a single construct. Both curriculums were statistically significantly effective in progressively improving students’ learning of data representation, abstraction, synchronization, parallelism, and all CTCs as a single construct when examining the gradual increase in their computing artifact scores over the 24 sessions. Both curriculums were statistically significantly effective in increasing the scores of synchronization and all CTCs as a single construct when the correlations between CTCs and sessions for individual groups were analyzed. The findings underscore that students with ASD can effectively learn computing skills through accessible or standard curriculums, provided that adjustments are made during delivery.
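
The session-by-session correlation analysis the abstract describes can be illustrated with a rank correlation between session number and artifact score. The data below are fabricated solely to show the computation.

```python
# Toy illustration: does a group's artifact score rise across 24 sessions?
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
sessions = np.arange(1, 25)
scores = 50 + 1.2 * sessions + rng.normal(0, 4, size=24)  # upward trend

rho, p = spearmanr(sessions, scores)
print(f"rho={rho:.2f}, p={p:.4f}")  # a significant positive trend expected
```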
