Search Results (36)

Search Parameters:
Keywords = clean audit

38 pages, 7214 KB  
Article
Quantitative Mapping of Conceptual Hierarchies and Data-Driven Taxonomies of Japanese Architectural Concepts: A 28-Term Testbed
by Gledis Gjata and Satoshi Yamada
Architecture 2026, 6(2), 62; https://doi.org/10.3390/architecture6020062 - 13 Apr 2026
Viewed by 232
Abstract
Discourse on Japanese architecture relies on qualitative interpretation to link abstract concepts such as “ma” and “mu”, used here as illustrative examples of the conceptual register, with physical spaces, such as engawa, yet lacks quantitative, data-driven validation. This study addresses this gap by testing two primary hypotheses: (1) whether abstract Japanese architectural terms form a distinct, computationally recoverable conceptual layer, and (2) whether the corresponding concrete architectural devices cohere into a unified physical mesh rather than being fragmented into unrelated subclusters. We investigate this using a Natural Language Processing (NLP) framework centred on a fine-tuned BERT model, utilising an exhaustive Adjusted Rand Index (ARI) enumeration search over two-way partitions on a target vocabulary of 28 terms. Furthermore, a “definitional audit” compares a FULL corpus against a CLEAN corpus, stripped of explicit glossary-like sentences, to mitigate “shortcut learning”, allowing sensitivity at the conceptual–physical boundary to be inspected. Both hypotheses are supported. A stable two-block structure appears across all evaluations, comprising a compact conceptual pocket {aware, ma, mu, wabi, sabi, and wabi_sabi} and a larger physical mesh integrating vocabulary for room, garden, and shrine. Interface structure concentrates in a narrow boundary corridor, most consistently along the engawa–shakkei linkage, with en acting as the principal physical-side interface hub under sparsified network views. In the definitional audit (FULL versus CLEAN), ikezuishi is the only recurrently unstable item, shifting sides under small, defensible changes in corpus cleaning and Japanese-aware sentence segmentation, which is best read as a sensitivity signal rather than a substantive change in macro-structure. Removing glossary-like definitions slightly tightens dispersion while preserving the backbone split, which supports definitional audits as a practical robustness check for distributional studies of architectural vocabularies. Full article
(This article belongs to the Special Issue Architecture in the Digital Age)
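
The clustering evaluation described in this abstract reduces to comparing a two-way partition of the 28-term vocabulary against a gold-standard conceptual/physical labelling with the Adjusted Rand Index. A minimal sketch of that scoring step follows; it is not the authors' full enumeration search or their fine-tuned BERT pipeline, and the embeddings plus most term labels are placeholders.

```python
# Sketch: score a two-way k-means split of term embeddings against a
# gold conceptual/physical labelling with the Adjusted Rand Index (ARI).
# Embeddings are random placeholders standing in for fine-tuned BERT vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

terms = ["aware", "ma", "mu", "wabi", "sabi", "wabi_sabi",          # conceptual pocket (from the abstract)
         "engawa", "shakkei", "en", "shoji", "tatami", "tokonoma"]   # physical terms (illustrative)
gold = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]                          # 1 = conceptual, 0 = physical

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(terms), 768))   # placeholder for per-term BERT embeddings

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
print("ARI vs. gold split:", adjusted_rand_score(gold, pred))  # 1.0 = perfect recovery, ~0 = chance
```

With random placeholder embeddings the printed ARI will hover near zero; in the paper this score is reported separately for the FULL and CLEAN corpora.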
42 pages, 4153 KB  
Article
Hierarchical Reconciliation of Fifty-One Years of Highway–Rail Grade Crossing Data with Verified Multistage Inference
by Raj Bridgelall
Algorithms 2026, 19(4), 282; https://doi.org/10.3390/a19040282 - 3 Apr 2026
Viewed by 239
Abstract
Highway–rail grade crossing (HRGC) safety research relies on federal incident and inventory datasets that span multiple decades. However, inconsistencies in geographic identifiers and incomplete reconstruction of crossing denominators can distort exposure-based rate metrics. This study develops, documents, and validates a transparent nine-stage reconciliation pipeline applied to 51 years (1975–2025) of national HRGC incident data from the Federal Railroad Administration Form 57 and Form 71 datasets. The hierarchical pipeline integrated deterministic alignment and multistage inference methods to produce an audited, geographically consistent dataset. The study formalizes four longitudinal county-level cumulative exposure indices that characterize spatiotemporal patterns of incident concentration relative to static population and infrastructure denominators. These metrics include accumulated incidents per million population (AIPM), accumulated incidents per crossing (AIPC), crossings per million population (CPM), and crossings per 100 square miles (CPHSM). All four metrics exhibited pronounced right-skewness: AIPM, CPM, and CPHSM approximated exponential forms, and AIPC approximated a log-normal form. Statistical tests detected statistically significant tail deviations in three metrics; CPM did not reject the exponential fit at conventional significance levels. Spatial analysis shows coherent regional concentration in incident rates in the Central Plains and lower Mississippi corridors. The national time series exhibits a late-1970s plateau, sustained exponential decline beginning around 1980, and stabilization but persistent incident rates after 2001. Population-normalized AIPM remained statistically indistinguishable between the reconciled and record-dropped datasets; however, crossing-based metrics changed materially when reconstructing denominators from the reconciled crossing universe. Statistical comparisons confirmed that incident-only denominators introduced substantial measurement bias in local risk assessment. State-level rank reversals persisted even when omnibus distributional tests failed to reject equality. By formalizing multistage data cleaning and quantifying its analytical impact over an unprecedented longitudinal horizon, this study establishes denominator integrity and geographic reconciliation as prerequisites for valid HRGC exposure assessment and provides a framework for future predictive modeling. Full article
(This article belongs to the Special Issue Transportation and Traffic Engineering)
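
The four county-level exposure indices named in this abstract are simple ratios of cumulative incidents and crossings to static denominators. A rough sketch of how they could be computed with pandas is shown below; the column names and sample values are hypothetical, not the FRA Form 57/71 schema.

```python
# Sketch: county-level cumulative exposure indices named in the abstract.
# Column names and values are hypothetical, not the FRA datasets.
import pandas as pd

counties = pd.DataFrame({
    "county_fips": ["38017", "22071"],
    "cum_incidents": [412, 987],       # accumulated incidents, 1975-2025
    "crossings": [350, 610],           # reconciled crossing universe
    "population": [190_000, 380_000],
    "area_sq_mi": [1_800, 350],
})

counties["AIPM"]  = counties["cum_incidents"] / (counties["population"] / 1e6)  # incidents per million population
counties["AIPC"]  = counties["cum_incidents"] / counties["crossings"]           # incidents per crossing
counties["CPM"]   = counties["crossings"] / (counties["population"] / 1e6)      # crossings per million population
counties["CPHSM"] = counties["crossings"] / (counties["area_sq_mi"] / 100)      # crossings per 100 square miles
print(counties[["county_fips", "AIPM", "AIPC", "CPM", "CPHSM"]])
```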
33 pages, 8140 KB  
Article
Diagnosing Shortcut Learning in CNN-Based Photovoltaic Fault Recognition from RGB Images: A Multi-Method Explainability Audit
by Bogdan Marian Diaconu
AI 2026, 7(3), 94; https://doi.org/10.3390/ai7030094 - 4 Mar 2026
Viewed by 563
Abstract
Convolutional neural networks (CNNs) can achieve high accuracy in photovoltaic (PV) fault recognition from RGB imagery, yet their decisions may rely on shortcut cues induced by heterogeneous backgrounds, viewpoints, and class imbalance. This work presents a multi-method explainability audit on the Kaggle PV Panel Defect Dataset (six classes), comparing five architectures (Baseline CNN, VGG16, ResNet50, InceptionV3, EfficientNetB0). Explanations are obtained with LIME superpixel surrogates (reported together with kernel-weighted surrogate fidelity), occlusion sensitivity (quantified via IoU@Top10% against consistent proxy masks, Shannon entropy, and Hoyer sparsity), and Integrated Gradients evaluated by deletion–insertion faithfulness and a Faithfulness Gap. While ResNet50 yields the best predictive performance, EfficientNetB0 shows the most consistent faithfulness evidence and stable panel-centered attributions. The analysis highlights class-dependent vulnerability to context cues, especially for the Clean and damaged classes, and supports using quantitative explainability diagnostics during model selection and dataset curation to mitigate shortcuts in vision-based PV monitoring. Full article
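
Several of the audit metrics named in this abstract are straightforward to compute once an attribution map and a panel proxy mask are available. A small sketch of IoU@Top10%, Shannon entropy, and Hoyer sparsity for a single saliency map follows; the map and mask are synthetic placeholders, not the paper's evaluation code.

```python
# Sketch: three audit metrics from the abstract, computed on one synthetic
# saliency map against a synthetic panel proxy mask.
import numpy as np

rng = np.random.default_rng(0)
saliency = rng.random((224, 224))             # placeholder attribution / occlusion map
panel_mask = np.zeros((224, 224), dtype=bool)
panel_mask[60:170, 50:180] = True             # placeholder proxy mask of the PV panel

# IoU@Top10%: overlap of the top-10% most salient pixels with the panel mask.
k = int(0.10 * saliency.size)
top_idx = np.argsort(saliency, axis=None)[-k:]
top_mask = np.zeros(saliency.size, dtype=bool)
top_mask[top_idx] = True
top_mask = top_mask.reshape(saliency.shape)
iou = (top_mask & panel_mask).sum() / (top_mask | panel_mask).sum()

# Shannon entropy of the normalised map (high = diffuse, low = concentrated).
p = saliency.ravel() / saliency.sum()
entropy = -(p * np.log(p + 1e-12)).sum()

# Hoyer sparsity: 1 for a single active pixel, 0 for a perfectly uniform map.
x = saliency.ravel()
n = x.size
hoyer = (np.sqrt(n) - np.linalg.norm(x, 1) / np.linalg.norm(x, 2)) / (np.sqrt(n) - 1)

print(f"IoU@Top10% = {iou:.3f}  entropy = {entropy:.2f}  Hoyer = {hoyer:.3f}")
```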
18 pages, 608 KB  
Article
TDI-SF: Trustworthy Dynamic Inference via Uncertainty-Gated Retrieval and Similarity-Gated Strict Fallback
by Yiyi Xu, Siyuan Li, Zhouxiang Yu, Jiahao Hu and Pengfei Liu
Appl. Sci. 2026, 16(4), 2023; https://doi.org/10.3390/app16042023 - 18 Feb 2026
Viewed by 273
Abstract
Retrieval-time augmentation can correct hard test samples but may also introduce harmful interference when retrieved neighbors are unreliable. We propose TDI-SF (trustworthy dynamic inference via similarity-gated strict fallback), a safety-oriented dynamic inference strategy that intervenes only when needed and falls back to a frozen baseline when retrieval quality is insufficient. Uncertainty-gated selective retrieval triggers on a hard subset, defined by high entropy or low margin predictions (q=0.3), and similarity-gated fusion weights neighbor evidence by maximum similarity with a strict fallback threshold (alpha-mode=maxsim, min_maxsim). We evaluate on ImageNet-100 (ResNet-50) and CICIDS2017 (MLP) and report overall accuracy, hard-subset accuracy, calibration, negative flips, and risk–coverage behavior alongside efficiency. Comprehensive evaluation under both clean and degraded retrieval conditions demonstrates the value of each component. On ImageNet-100, TDI-SF improves hard-subset accuracy by 0.92% and overall accuracy by 0.30%, applying retrieval to only 32.6% of samples with 1.38 ms overhead per triggered sample. On CICIDS2017, the same mechanism yields +1.30% hard-subset gains with only 0.43 ms/hard overhead. These results show a simple, auditable recipe for safer retrieval-augmented inference across heterogeneous domains. Full article
(This article belongs to the Special Issue Latest Research on Computer Vision and Its Application)
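
The gating logic in this abstract can be expressed in a few lines: flag a test sample as "hard" when its baseline prediction has high entropy or low margin (quantile q = 0.3 of the batch), apply retrieval only to those samples, and fall back to the frozen baseline whenever the best neighbor similarity is below a strict threshold. A rough numpy sketch under those assumptions follows; the threshold values, the fusion rule, and all variable names are illustrative, not the paper's implementation.

```python
# Sketch of uncertainty-gated retrieval with a similarity-gated strict fallback.
# Arrays and thresholds are illustrative; the fusion rule is an assumption.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def tdi_sf(baseline_logits, neighbor_probs, neighbor_maxsim, q=0.3, min_maxsim=0.5):
    """baseline_logits: (N, C); neighbor_probs: (N, C) aggregated neighbor evidence;
    neighbor_maxsim: (N,) best retrieval similarity per sample."""
    p = softmax(baseline_logits)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    top2 = np.sort(p, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]

    # Uncertainty gate: "hard" = high entropy OR low margin (quantile q).
    hard = (entropy >= np.quantile(entropy, 1 - q)) | (margin <= np.quantile(margin, q))

    # Similarity gate: fuse only when the best neighbor is similar enough,
    # otherwise fall back strictly to the frozen baseline prediction.
    fuse = hard & (neighbor_maxsim >= min_maxsim)
    w = neighbor_maxsim[:, None]                       # similarity-weighted fusion (assumed form)
    fused = np.where(fuse[:, None], (1 - w) * p + w * neighbor_probs, p)
    return fused, hard, fuse

# Tiny usage example with random inputs.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 5))
nprobs = softmax(rng.normal(size=(8, 5)))
maxsim = rng.random(8)
probs, hard_mask, fused_mask = tdi_sf(logits, nprobs, maxsim)
print("hard samples:", hard_mask.sum(), "| fused with retrieval:", fused_mask.sum())
```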
55 pages, 2886 KB  
Article
Hybrid AI and LLM-Enabled Agent-Based Real-Time Decision Support Architecture for Industrial Batch Processes: A Clean-in-Place Case Study
by Apolinar González-Potes, Diego Martínez-Castro, Carlos M. Paredes, Alberto Ochoa-Brust, Luis J. Mena, Rafael Martínez-Peláez, Vanessa G. Félix and Ramón A. Félix-Cuadras
AI 2026, 7(2), 51; https://doi.org/10.3390/ai7020051 - 1 Feb 2026
Cited by 1 | Viewed by 3146
Abstract
A hybrid AI and LLM-enabled architecture is presented for real-time decision support in industrial batch processes, where supervision still relies heavily on human operators and ad hoc SCADA logic. Unlike algorithmic contributions proposing novel AI methods, this work addresses the practical integration and deployment challenges arising when applying existing AI techniques to safety-critical industrial environments with legacy PLC/SCADA infrastructure and real-time constraints. The framework combines deterministic rule-based agents, fuzzy and statistical enrichment, and large language models (LLMs) to support monitoring, diagnostic interpretation, preventive maintenance planning, and operator interaction with minimal manual intervention. High-frequency sensor streams are collected into rolling buffers per active process instance; deterministic agents compute enriched variables, discrete supervisory states, and rule-based alarms, while an LLM-driven analytics agent answers free-form operator queries over the same enriched datasets through a conversational interface. The architecture is instantiated and deployed in the Clean-in-Place (CIP) system of an industrial beverage plant and evaluated following a case study design aimed at demonstrating architectural feasibility and diagnostic behavior under realistic operating regimes rather than statistical generalization. Three representative multi-stage CIP executions—purposively selected from 24 runs monitored during a six-month deployment—span nominal baseline, preventive-warning, and diagnostic-alert conditions. The study quantifies stage-specification compliance, state-to-specification consistency, and temporal stability of supervisory states, and performs spot-check audits of numerical consistency between language-based summaries and enriched logs. Results in the evaluated CIP deployment show high time within specification in sanitizing stages (100% compliance across the evaluated runs), coherent and mostly stable supervisory states in variable alkaline conditions (state-specification consistency Γs0.98), and data-grounded conversational diagnostics in real time (median numerical error below 3% in audited samples), without altering the existing CIP control logic. These findings suggest that the architecture can be transferred to other industrial cleaning and batch operations by reconfiguring process-specific rules and ontologies, though empirical validation in other process types remains future work. The contribution lies in demonstrating how to bridge the gap between AI theory and industrial practice through careful system architecture, data transformation pipelines, and integration patterns that enable reliable AI-enhanced decision support in production environments, offering a practical path toward AI-assisted process supervision with explainable conversational interfaces that support preventive maintenance decision-making and equipment health monitoring. Full article
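
The deterministic monitoring layer described in this abstract amounts to maintaining a rolling buffer of recent sensor readings per active process instance and evaluating simple rules over enriched statistics. A toy sketch of that pattern follows; the tag names, stage, and specification limits are hypothetical, not the plant's actual CIP specification.

```python
# Sketch: rolling buffer per process instance with a rule-based alarm,
# illustrating the deterministic agent layer. Tags and limits are hypothetical.
from collections import deque, defaultdict
from statistics import mean

BUFFER_LEN = 120                      # e.g. last 120 samples of a high-frequency stream
buffers = defaultdict(lambda: deque(maxlen=BUFFER_LEN))

# Hypothetical stage specification: acceptable conductivity band during the alkaline stage.
SPEC = {"alkaline": {"conductivity_mS_cm": (35.0, 55.0)}}

def ingest(instance_id, stage, tag, value):
    """Append a reading, compute an enriched rolling mean, and emit a rule-based alarm."""
    buf = buffers[(instance_id, tag)]
    buf.append(value)
    rolling_mean = mean(buf)
    lo, hi = SPEC.get(stage, {}).get(tag, (float("-inf"), float("inf")))
    state = "IN_SPEC" if lo <= rolling_mean <= hi else "OUT_OF_SPEC"
    if state == "OUT_OF_SPEC":
        print(f"[ALARM] {instance_id} {stage} {tag}: rolling mean {rolling_mean:.1f} outside [{lo}, {hi}]")
    return rolling_mean, state

# Example: feed a drifting conductivity signal into one CIP run.
for i in range(10):
    ingest("CIP-run-42", "alkaline", "conductivity_mS_cm", 50.0 + i * 1.5)
```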
50 pages, 3579 KB  
Article
Safety-Aware Multi-Agent Deep Reinforcement Learning for Adaptive Fault-Tolerant Control in Sensor-Lean Industrial Systems: Validation in Beverage CIP
by Apolinar González-Potes, Ramón A. Félix-Cuadras, Luis J. Mena, Vanessa G. Félix, Rafael Martínez-Peláez, Rodolfo Ostos, Pablo Velarde-Alvarado and Alberto Ochoa-Brust
Technologies 2026, 14(1), 44; https://doi.org/10.3390/technologies14010044 - 7 Jan 2026
Viewed by 1196
Abstract
Fault-tolerant control in safety-critical industrial systems demands adaptive responses to equipment degradation, parameter drift, and sensor failures while maintaining strict operational constraints. Traditional model-based controllers struggle under these conditions, requiring extensive retuning and dense instrumentation. Recent safe multi-agent reinforcement learning (MARL) frameworks with control barrier functions (CBFs) achieve real-time constraint satisfaction in robotics and power systems, yet assume comprehensive state observability—incompatible with sensor-hostile industrial environments where instrumentation degradation and contamination risks dominate design constraints. This work presents a safety-aware multi-agent deep reinforcement learning framework for adaptive fault-tolerant control in sensor-lean industrial environments, achieving formal safety through learned implicit barriers under partial observability. The framework integrates four synergistic mechanisms: (1) multi-layer safety architecture combining constrained action projection, prioritized experience replay, conservative training margins, and curriculum-embedded verification achieving zero constraint violations; (2) multi-agent coordination via decentralized execution with learned complementary policies. Additional components include (3) curriculum-driven sim-to-real transfer through progressive four-stage learning achieving 85–92% performance retention without fine-tuning; (4) offline extended Kalman filter validation enabling 70% instrumentation reduction (91–96% reconstruction accuracy) for regulatory auditing without real-time estimation dependencies. Validated through sustained deployment in commercial beverage manufacturing clean-in-place (CIP) systems—a representative safety-critical testbed with hard flow constraints (≥1.5 L/s), harsh chemical environments, and zero-tolerance contamination requirements—the framework demonstrates superior control precision (coefficient of variation: 2.9–5.3% versus 10% industrial standard) across three hydraulic configurations spanning complexity range 2.1–8.2/10. Comprehensive validation comprising 37+ controlled stress-test campaigns and hundreds of production cycles (accumulated over 6 months) confirms zero safety violations, high reproducibility (CV variation < 0.3% across replicates), predictable complexity–performance scaling (R2=0.89), and zero-retuning cross-topology transferability. The system has operated autonomously in active production for over 6 months, establishing reproducible methodology for safe MARL deployment in partially-observable, sensor-hostile manufacturing environments where analytical CBF approaches are structurally infeasible. Full article
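
One of the safety mechanisms listed in this abstract, constrained action projection, can be illustrated with the hard flow constraint it quotes (flow ≥ 1.5 L/s): a proposed control action is projected back into the feasible set before it reaches the plant. The toy sketch below assumes a linear pump model, gain, and actuator bounds that do not come from the paper.

```python
# Sketch: constrained action projection for a hard flow constraint (>= 1.5 L/s).
# The pump model, gain, and bounds are assumptions for illustration only.
import numpy as np

FLOW_MIN = 1.5          # hard constraint from the abstract, in L/s
PUMP_GAIN = 0.04        # assumed: flow [L/s] per % pump speed
SPEED_BOUNDS = (0.0, 100.0)

def project_action(proposed_speed_pct: float) -> float:
    """Clip the agent's proposed pump speed so the predicted flow never
    drops below FLOW_MIN and the actuator stays within its bounds."""
    min_speed_for_flow = FLOW_MIN / PUMP_GAIN            # speed needed to guarantee >= 1.5 L/s
    lo = max(SPEED_BOUNDS[0], min_speed_for_flow)
    return float(np.clip(proposed_speed_pct, lo, SPEED_BOUNDS[1]))

for a in [20.0, 37.5, 80.0]:
    safe = project_action(a)
    print(f"proposed {a:5.1f}% -> safe {safe:5.1f}% (predicted flow {safe * PUMP_GAIN:.2f} L/s)")
```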
16 pages, 257 KB  
Article
The Polish (Un)Sustainability Paradox: A Critical Analysis of High SDG Rankings and Low Administrative Effectiveness
by Marta du Vall and Marta Majorek
Sustainability 2026, 18(1), 165; https://doi.org/10.3390/su18010165 - 23 Dec 2025
Cited by 1 | Viewed by 750
Abstract
This article analyzes the effectiveness of Poland’s central government administration in implementing the 2030 Agenda for Sustainable Development, addressing the context of high-level strategic declarations versus actual policy outcomes. The study employs a qualitative critical document analysis, conducted as comprehensive desk research. This method involves a comparative analysis of official strategic and policy documents (e.g., “Strategy for Responsible Development”) against the empirical findings of external audits from the Supreme Audit Office (NIK), supplemented by national (GUS) and international statistical data. The analysis reveals a fundamental “implementation gap.” While Poland has successfully created a robust strategic and institutional framework, reflected in high international SDG rankings, this success masks deep deficits and stagnation in key areas, particularly in the environmental dimension. Audits consistently confirm systemic problems with inter-ministerial coordination, ensuring adequate financing, and the lack of reliable evaluation for key programs, such as “Clean Air” or the circular economy roadmap. Considering these findings, the study concludes that operational effectiveness does not match strategic declarations. The analysis identifies systemic weaknesses and recommends urgent, targeted strategic actions to bridge the gap between policy and practice, particularly by strengthening coordination and evaluation mechanisms. Full article
30 pages, 3714 KB  
Article
Reproducibility and Validation of a Computational Framework for Architectural Semantics: A Methodological Study with Japanese Architectural Concepts
by Gledis Gjata and Satoshi Yamada
Buildings 2025, 15(22), 4107; https://doi.org/10.3390/buildings15224107 - 14 Nov 2025
Cited by 1 | Viewed by 1100
Abstract
Architectural discourse is a specialised language whose key terms shift with context, which complicates empirical claims about meaning. This study addresses this problem by testing whether a rigorously audited, reproducible NLP framework can recover a core theoretical distinction in architectural language, specifically the conceptual versus physical split, using Japanese terms as a focused case. The objective is to evaluate contextual embeddings against static baselines under controlled conditions and to release an end-to-end pipeline that others can rerun exactly. We assemble a ~1.98-million-word corpus spanning architecture, history, philosophy, and theology; train Word2Vec (CBOW, Skip-gram) and a fine-tuned BERT on the same sentences; derive embeddings; and cluster terms with k-means and Agglomerative methods. Internal validity is assessed using the Adjusted Rand Index against a phenomenological gold standard split; external validity is correlated with WordSim-353; robustness is examined through a negative-control relabelling and a definitional audit comparing FULL and CLEAN corpora; seeds, versions, and artefacts are pinned for exact reruns in the archived environment; and identity across different hardware is not claimed. The study finds that BERT cleanly recovers the split with ARI 0.852 (FULL) and 0.718 (CLEAN). BERT and CBOW show no seed variation. Both Word2Vec models hover near chance, but Skip-gram shows instability across seeds. We provide a transparent, reusable methodology, with released assets, that enables falsifiable and scalable claims about architectural semantics. Full article
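
The seed-stability claim in this abstract (no seed variation for BERT and CBOW, instability for Skip-gram) corresponds to re-clustering the same embeddings under different random seeds and comparing the resulting partitions, both with each other and with the gold split, via ARI. A compact sketch of that check follows; the embeddings and gold labelling are placeholders, not the released artefacts.

```python
# Sketch: seed-stability check for a two-way k-means split of term embeddings.
# Embeddings and gold labels are placeholders for the released artefacts.
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(28, 300))           # placeholder term vectors
gold = np.array([1] * 6 + [0] * 22)               # placeholder conceptual/physical labelling

partitions = {seed: KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(embeddings)
              for seed in (0, 1, 2, 3, 4)}

ari_vs_gold = {s: adjusted_rand_score(gold, p) for s, p in partitions.items()}
ari_between = [adjusted_rand_score(partitions[a], partitions[b])
               for a, b in combinations(partitions, 2)]

print("ARI vs gold per seed:", {s: round(v, 3) for s, v in ari_vs_gold.items()})
print("min pairwise ARI across seeds:", round(min(ari_between), 3))  # 1.0 = seed-invariant clustering
```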
21 pages, 3037 KB  
Article
Water Security with Social Organization and Forest Care in the Megalopolis of Central Mexico
by Úrsula Oswald-Spring and Fernando Jaramillo-Monroy
Water 2025, 17(22), 3245; https://doi.org/10.3390/w17223245 - 13 Nov 2025
Cited by 1 | Viewed by 1332
Abstract
This article examines the effects of climate change on the 32 million inhabitants of the Megalopolis of Central Mexico (MCM), which is threatened by chaotic urbanization, land-use changes, the deforestation of the Forest of Water by organized crime, unsustainable agriculture, and biodiversity loss. Expensive hydraulic management extracting water from deep aquifers, long pipes exploiting water from neighboring states, and sewage discharged outside the endorheic basin result in expensive pumping costs and air pollution. This mismanagement has increased water scarcity. The overexploitation of aquifers and the pollution by toxic industrial and domestic sewage mixed with rainfall has increased the ground subsidence, damaging urban infrastructure and flooding marginal neighborhoods with toxic sewage. A system approach, satellite data, and participative research methodology were used to explore potential water scarcity and weakened water security for 32 million inhabitants. An alternative nature-based approach involves recovering the Forest of Water (FW) with IWRM, including the management of Natural Protected Areas, the rainfall recharge of aquifers, and cleaning domestic sewage inside the valley where the MCM is found. This involves recovering groundwater, reducing the overexploitation of aquifers, and limiting floods. Citizen participation in treating domestic wastewater with eco-techniques, rainfall collection, and purification filters improves water availability, while the greening of urban areas limits the risk of climate disasters. The government is repairing the broken drinking water supply and drainage systems affected by multiple earthquakes. Adaptation to water scarcity and climate risks requires the recognition of unpaid female domestic activities and the role of indigenous people in protecting the Forest of Water with the involvement of three state authorities. A digital platform for water security, urban planning, citizen audits against water authority corruption, and aquifer recharge through nature-based solutions provided by the System of Natural Protected Areas, Biological and Hydrological Corridors [SAMBA] are improving livelihoods for the MCM’s inhabitants and marginal neighborhoods, with greater equity and safety. Full article
50 pages, 2576 KB  
Perspective
Bridging the AI–Energy Paradox: A Compute-Additionality Covenant for System Adequacy in Energy Transition
by George Kyriakarakos
Sustainability 2025, 17(21), 9444; https://doi.org/10.3390/su17219444 - 24 Oct 2025
Cited by 2 | Viewed by 2762
Abstract
As grids decarbonize and end-use sectors electrify, the rapid penetration of artificial intelligence (AI) and hyperscale data centers reshapes the electrical load profile and power quality requirements. This leads not only to higher consumption but also coincident demand in constrained urban nodes, steeper ramps and tighter power quality constraints. The article investigates to what extent a compute-additionality covenant can reduce resource inadequacy (LOLE) at an acceptable $/kW-yr under realistic grid constraints, tying interconnection/capacity releases to auditable contributions (ELCC-accredited firm-clean MW in-zone or verified PCC-level services such as FFR/VAR/black-start). Using two worked cases (mature market and EMDE context) the way in which tranche-gated interconnection, ELCC accreditation and PCC-level services can hold LOLE at the planning target while delivering auditable FFR/VAR/ride-through performance at acceptable normalized costs is illustrated. Enforcement relies on standards-based telemetry and cybersecurity (IEC 61850/62351/62443) and PCC compliance (e.g., IEEE/IEC). Supply and network-side options are screened with stage-gates and indicative ELCC/PCC contributions. In a representative mature case, adequacy at 0.1 day·yr−1 is maintained at ≈$200 per compute-kW-yr. A covenant term sheet (tranche sizing, benefit–risk sharing, compliance workflow) is developed along an integration roadmap. Taken together, this perspective outlines a governance mechanism that aligns rapid compute growth with system adequacy and decarbonization. Full article
14 pages, 927 KB  
Perspective
Polypharmacy as a Chronic Condition: A Diagnostic Mindset for Safer and Smarter Care
by Waseem Jerjes and Azeem Majeed
J. Clin. Med. 2025, 14(20), 7388; https://doi.org/10.3390/jcm14207388 - 19 Oct 2025
Viewed by 1587
Abstract
Polypharmacy is typically seen as an unavoidable consequence of multimorbidity and aging, with clinicians addressing complex medication lists unsystematically. In this perspective, we argue that polypharmacy should be managed as a chronic condition. Like diabetes or hypertension, for example, the medication burden shows persistence, progression in its absence despite active management, predictable complications (such as falls, delirium, renal injury, functional decline), and a need for structured surveillance. We introduce a pragmatic diagnostic framework that moves beyond pill counts to modality-agnostic, regimen-level risk across prescribed and non-prescribed medicines. Diagnosis rests on prolonged exposure, composite burden indices (e.g., anticholinergic/sedative load), medication-related complications or prescribing cascades, and the need for a planned review. As biologics, gene therapies and long-acting formulations can lower tablet numbers while increasing monitoring, administration, and interaction complexity. We treat polypharmacy as cumulative pharmacodynamic and operational burden. We advocate stage matched care with unique, functional aims—decreasing the harmful burden instead of mass deprescribing—and position a structured medication review as the standard for polypharmacy with support from pharmacists, shared decision making, and safety netted taper plans. The framework fosters patient-centred care, embedding continuity and equity, and outlines a concise outcome set that integrates pharmacometric measures with patient-reported function and treatment burden. At the systems level, the framework enables registries, recall systems, decision support, and audit/feedback mechanisms to shift from sporadic medication list clean-up to a structured, measurable long-term program. Redefining polypharmacy in this way aligns clinical practice, education, and policy with real-world evidence, fostering a cohesive pathway to safer, streamlined, and more patient-centred care in community settings. Full article
(This article belongs to the Section Pharmacology)
16 pages, 254 KB  
Article
Advancing Energy Transition and Climate Accountability in Wisconsin Firms: A Content Analysis of Corporate Sustainability Reporting
by Hadi Veisi
Sustainability 2025, 17(19), 8935; https://doi.org/10.3390/su17198935 - 9 Oct 2025
Cited by 1 | Viewed by 1296
Abstract
Corporate ESG (Environmental, Social, and Governance) reporting is increasingly envisioned as evidence of accountability in the energy transition, yet persistent gaps remain between commitments and practices. This study applied the Global Reporting Initiative (GRI) framework—specifically indicators 302 (Energy) and 305 (Emissions)—to evaluate the credibility, scope, and strategic depth of disclosures by 20 Wisconsin (WI) firms in the energy, manufacturing, food, and service sectors. Guided by accountability and legitimacy theory, a comparative content analysis was conducted, complemented by Spearman correlation to examine associations between firm size and disclosure quality. Results show that while firms consistently report basic metrics such as total energy consumption and Scope 1 emissions, disclosures on Scope 3 emissions, renewable sourcing, and energy-efficiency achievements remain partial and selectively framed. Third-party assurance is inconsistently applied, and methodological transparency—such as external audit and coding protocols—is limited, weakening credibility. A statistically significant negative correlation was observed between annual revenue and disclosure quality, indicating that greater financial capacity does not necessarily translate into greater transparency. These findings highlight methodological and governance shortcomings, including reliance on generic ESG frameworks rather than climate-focused standards such as Task Force on Climate-related Financial Disclosures (TCFD). Integrated reporting approaches are recommended to improve comparability, credibility, and alignment with Wisconsin’s Clean Energy Transition Plan. Full article
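
The firm-size association reported in this abstract is a rank correlation between a size proxy (annual revenue) and a disclosure-quality score. A minimal sketch with scipy follows; the numbers are invented placeholders, not the study's coded GRI 302/305 scores.

```python
# Sketch: Spearman rank correlation between firm revenue and a disclosure-quality score.
# Values are invented placeholders, not the study's coded data.
from scipy.stats import spearmanr

revenue_musd       = [120, 450, 980, 2300, 5400, 8900, 15000, 31000]   # annual revenue, million USD
disclosure_quality = [3.1, 3.4, 2.9, 2.6, 2.8, 2.2, 2.4, 1.9]          # coded disclosure-quality score

rho, p_value = spearmanr(revenue_musd, disclosure_quality)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")   # a negative rho mirrors the reported direction
```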
26 pages, 4288 KB  
Article
Risk-Informed Dual-Threshold Screening for SPT-Based Liquefaction: A Probability-Calibrated Random Forest Approach
by Hani S. Alharbi
Buildings 2025, 15(17), 3206; https://doi.org/10.3390/buildings15173206 - 5 Sep 2025
Viewed by 1306
Abstract
Soil liquefaction poses a significant risk to foundations during earthquakes, prompting the need for simple, risk-aware screening tools that go beyond single deterministic boundaries. This study creates a probability-calibrated dual-threshold screening rule using a random forest (RF) classifier trained on 208 SPT case histories with quality-based weights (A/B/C = 1.0/0.70/0.40). The model is optimized with random search and calibrated through isotonic regression. Iso-probability contours from 1000 bootstrap samples produce paired thresholds for fines-corrected, overburden-normalized blow count N1,60,CS and normalized cyclic stress ratio CSR7.5,1 at target liquefaction probabilities Pliq = 5%, 20%, 50%, 80%, and 95%, with 90% confidence intervals. On an independent test set (n = 42), the calibrated model achieves AUC = 0.95, F1 = 0.92, and a better Brier score than the uncalibrated RF. The screening rule classifies a site as susceptible when N1,60,CS is at or below and CSR7.5,1 is at or above the probability-specific thresholds. Designed for level ground, free field, and clean-to-silty sand sites, this tool maintains the familiarity of SPT-based charts while making risk assessment transparent and auditable for different facility importance levels. Sensitivity tests show its robustness to reasonable rescaling of quality weights. The framework offers transparent thresholds with uncertainty bands for routine preliminary assessments and to guide the need for more detailed, site-specific analyses. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
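
The screening rule in this abstract combines an isotonic-calibrated random forest with paired thresholds on N1,60,CS and CSR7.5,1. The compressed sketch below shows the calibration step, the A/B/C quality weights from the abstract, and the dual-threshold decision; the data and the two threshold values are synthetic placeholders, not the published model.

```python
# Sketch: isotonic-calibrated random forest plus the dual-threshold screening rule.
# Data and thresholds are synthetic placeholders, not the 208-case SPT database.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)
n = 208
N160cs = rng.uniform(2, 40, n)                       # fines-corrected, overburden-normalised blow count
CSR = rng.uniform(0.05, 0.6, n)                      # normalised cyclic stress ratio CSR7.5,1
y = (CSR > 0.05 + 0.01 * N160cs).astype(int)         # toy liquefaction labels
quality = rng.choice([1.0, 0.70, 0.40], size=n)      # A/B/C case-quality weights from the abstract

X = np.column_stack([N160cs, CSR])
rf = RandomForestClassifier(n_estimators=300, random_state=0)
model = CalibratedClassifierCV(rf, method="isotonic", cv=5)
model.fit(X, y, sample_weight=quality)

def screen(n160cs, csr, n_threshold=15.0, csr_threshold=0.20):
    """Dual-threshold rule: susceptible if N1,60,CS <= threshold AND CSR7.5,1 >= threshold.
    The two placeholder thresholds stand in for the paper's iso-probability pairs."""
    return (n160cs <= n_threshold) and (csr >= csr_threshold)

p_liq = model.predict_proba([[12.0, 0.25]])[0, 1]
print(f"calibrated P(liq) = {p_liq:.2f}, screened susceptible = {screen(12.0, 0.25)}")
```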
31 pages, 3123 KB  
Review
A Review of the Potential of Drone-Based Approaches for Integrated Building Envelope Assessment
by Shayan Mirzabeigi, Ryan Razkenari and Paul Crovella
Buildings 2025, 15(13), 2230; https://doi.org/10.3390/buildings15132230 - 25 Jun 2025
Cited by 1 | Viewed by 4747
Abstract
The urgent need for affordable and scalable building retrofit solutions has intensified due to stringent clean energy targets. Traditional building energy audits, which are essential in assessing energy performance, are often time-consuming and costly because of the extensive field analysis required. There has been a gradual shift towards the public use of drones, which present opportunities for effective remote procedures that could disrupt a variety of built environment disciplines. Drone-based approaches to data collection offer a great opportunity for the analysis and inspection of existing building stocks, enabling architects, engineers, energy auditors, and owners to document building performance, visualize heat transfer using infrared thermography, and create digital models using 3D photogrammetry. This study provides a review of the potential of a drone-based approach to integrated building envelope assessment, aiming to streamline the process. By evaluating various scanning techniques and their integration with drones, this research explores how drones can enhance data collection for defect identification, as well as digital model creation. A proposed drone-based workflow is tested through a case study in Syracuse, New York, demonstrating its feasibility and effectiveness in creating 3D models and conducting energy simulations. The study also discusses various challenges associated with drone-based approaches, including data accuracy, environmental conditions, operator training, and regulatory compliance, offering practical solutions and highlighting areas for further research. A discussion of the findings underscores the potential of drone technology to revolutionize building inspections, making them more efficient, accurate, and scalable, thus supporting the development of sustainable and energy-efficient buildings. Full article
22 pages, 687 KB  
Article
Performance and Scalability of Data Cleaning and Preprocessing Tools: A Benchmark on Large Real-World Datasets
by Pedro Martins, Filipe Cardoso, Paulo Váz, José Silva and Maryam Abbasi
Data 2025, 10(5), 68; https://doi.org/10.3390/data10050068 - 5 May 2025
Cited by 10 | Viewed by 12230
Abstract
Data cleaning remains one of the most time-consuming and critical steps in modern data science, directly influencing the reliability and accuracy of downstream analytics. In this paper, we present a comprehensive evaluation of five widely used data cleaning tools—OpenRefine, Dedupe, Great Expectations, TidyData (PyJanitor), and a baseline Pandas pipeline—applied to large-scale, messy datasets spanning three domains (healthcare, finance, and industrial telemetry). We benchmark each tool on dataset sizes ranging from 1 million to 100 million records, measuring execution time, memory usage, error detection accuracy, and scalability under increasing data volumes. Additionally, we assess qualitative aspects such as usability and ease of integration, reflecting real-world adoption concerns. We incorporate recent findings on parallelized data cleaning and highlight how domain-specific anomalies (e.g., negative amounts in finance, sensor corruption in industrial telemetry) can significantly impact tool choice. Our findings reveal that no single solution excels across all metrics; while Dedupe provides robust duplicate detection and Great Expectations offers in-depth rule-based validation, tools like TidyData and baseline Pandas pipelines demonstrate strong scalability and flexibility under chunk-based ingestion. The choice of tool ultimately depends on domain-specific requirements (e.g., approximate matching in finance and strict auditing in healthcare) and the magnitude of available computational resources. By highlighting each framework’s strengths and limitations, this study offers data practitioners clear, evidence-driven guidance for selecting and combining tools to tackle large-scale data cleaning challenges. Full article
(This article belongs to the Section Information Systems and Data Management)
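
The chunk-based ingestion pattern credited to the baseline Pandas pipeline in this abstract is easy to reproduce: stream the file in fixed-size chunks, apply the same cleaning rules to each chunk, and time the whole pass. A bare-bones sketch follows; the file name, column names, and rules are hypothetical examples of domain-specific cleaning, not the benchmark harness used in the paper.

```python
# Sketch: chunk-based cleaning pass with pandas, in the spirit of the baseline pipeline.
# The file, columns, and rules are hypothetical examples of domain-specific cleaning.
import time
import pandas as pd

def clean_chunk(chunk: pd.DataFrame) -> pd.DataFrame:
    chunk = chunk.drop_duplicates(subset="transaction_id")
    chunk = chunk[chunk["amount"] >= 0]                      # e.g. drop negative amounts (finance rule)
    chunk["timestamp"] = pd.to_datetime(chunk["timestamp"], errors="coerce")
    return chunk.dropna(subset=["timestamp"])

start = time.perf_counter()
kept_rows = 0
for chunk in pd.read_csv("transactions.csv", chunksize=1_000_000):   # hypothetical large file
    kept_rows += len(clean_chunk(chunk))
print(f"kept {kept_rows:,} rows in {time.perf_counter() - start:.1f} s")
```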