Search Results (28)

Search Parameters:
Keywords = privacy quantification

36 pages, 968 KB  
Review
Applications of Artificial Intelligence in Fisheries: From Data to Decisions
by Syed Ariful Haque and Saud M. Al Jufaili
Big Data Cogn. Comput. 2026, 10(1), 19; https://doi.org/10.3390/bdcc10010019 - 5 Jan 2026
Viewed by 990
Abstract
AI enhances aquatic resource management by automating species detection, optimizing feed, forecasting water quality, protecting species interactions, and strengthening the detection of illegal, unreported, and unregulated fishing activities. However, these advancements are inconsistently employed, subject to domain shifts, limited by the availability of labeled data, and poorly benchmarked across operational contexts. Recent developments in technology and applications in fisheries genetics and monitoring, precision aquaculture, management, and sensing infrastructure are summarized in this paper. We studied automated species recognition, genomic trait inference, environmental DNA metabarcoding, acoustic analysis, and trait-based population modeling in fisheries genetics and monitoring. We used digital-twin frameworks for supervised learning in feed optimization, reinforcement learning for water quality control, vision-based welfare monitoring, and harvest forecasting in aquaculture. We explored automatic identification system trajectory analysis for illicit fishing detection, global effort mapping, electronic bycatch monitoring, protected species tracking, and multi-sensor vessel surveillance in fisheries management. Acoustic echogram automation, convolutional neural network-based fish detection, edge-computing architectures, and marine-domain foundation models are foundational developments in sensing infrastructure. Implementation challenges include performance degradation across habitat and seasonal transitions, insufficient standardized multi-region datasets for rare and protected taxa, inadequate incorporation of model uncertainty into management decisions, and structural inequalities in data access and technology adoption among smallholder producers. Standardized multi-region benchmarks with rare-taxa coverage, calibrated uncertainty quantification in assessment and control systems, domain-robust energy-efficient algorithms, and privacy-preserving data partnerships are our priorities. These integrated priorities enable transition from experimental prototypes to a reliable, collaborative infrastructure for sustainable wild capture and farmed aquatic systems. Full article

27 pages, 3431 KB  
Review
Machine Learning-Driven Precision Nutrition: A Paradigm Evolution in Dietary Assessment and Intervention
by Wenbin Quan, Jingbo Zhou, Juan Wang, Jihong Huang and Liping Du
Nutrients 2026, 18(1), 45; https://doi.org/10.3390/nu18010045 - 22 Dec 2025
Viewed by 1010
Abstract
The rising global burden of chronic diseases highlights the limitations of traditional dietary guidelines. Precision Nutrition (PN) aims to deliver personalized dietary advice to optimize individual health, and the effective implementation of PN fundamentally relies on comprehensive and accurate dietary data. However, conventional dietary assessment methods often suffer from quantification errors and poor adaptability to dynamic changes, leading to inaccurate data and ineffective guidance. Machine learning (ML) offers a powerful suite of tools to address these limitations, enabling a paradigm shift across the nutritional management pipeline. Using dietary data as a thematic thread, this article outlines this transformation and synthesizes recent advances across dietary assessment, in-depth mining, and nutritional intervention. Current challenges and future trends in this domain are also discussed. ML is driving a critical shift from a subjective, static mode to an objective, dynamic, and personalized paradigm, enabling a closed-loop nutrition management framework. Precise food recognition and nutrient estimation can be implemented automatically with ML techniques such as computer vision (CV) and natural language processing (NLP). By integrating multiple data sources, ML helps uncover dietary patterns, assess nutritional status, and decipher intricate nutritional mechanisms. It also facilitates the development of personalized dietary intervention strategies tailored to individual needs, while enabling adaptive optimization based on users’ feedback and intervention effectiveness. Although challenges regarding data privacy and model interpretability persist, ML undeniably constitutes the vital technical support for advancing PN into practical reality. Full article
(This article belongs to the Section Nutrition Methodology & Assessment)

67 pages, 699 KB  
Review
Machine Learning for Sensor Analytics: A Comprehensive Review and Benchmark of Boosting Algorithms in Healthcare, Environmental, and Energy Applications
by Yifan Xie and Sai Pranay Tummala
Sensors 2025, 25(23), 7294; https://doi.org/10.3390/s25237294 - 30 Nov 2025
Viewed by 1151
Abstract
Sensor networks generate high-dimensional temporally dependent data across healthcare, environmental monitoring, and energy management, which demands robust machine learning for reliable forecasting. While gradient boosting methods have emerged as powerful tools for sensor-based regression, systematic evaluation under realistic deployment conditions remains limited. This work provides a comprehensive review and empirical benchmark of boosting algorithms spanning classical methods (AdaBoost and GBM), modern gradient boosting frameworks (XGBoost, LightGBM, and CatBoost), and adaptive extensions for streaming data and hybrid architectures. We conduct rigorous cross-domain evaluation on continuous glucose monitoring, urban air-quality forecasting, and building-energy prediction, assessing not only predictive accuracy but also robustness under sensor degradation, temporal generalization through proper time-series validation, feature-importance stability, and computational efficiency. Our analysis reveals fundamental trade-offs challenging conventional assumptions. Algorithmic sophistication yields diminishing returns when intrinsic predictability collapses due to exogenous forcing. Random cross-validation (CV) systematically overestimates performance through temporal leakage, with magnitudes varying substantially across domains. Calibration drift emerges as the dominant failure mode, causing catastrophic degradation across all the static models regardless of sophistication. Importantly, feature-importance stability does not guarantee predictive reliability. We synthesize the findings into actionable guidelines for algorithm selection, hyperparameter configuration, and deployment strategies while identifying critical open challenges, including uncertainty quantification, physics-informed architectures, and privacy-preserving distributed learning. Full article
(This article belongs to the Special Issue Feature Review Papers in Intelligent Sensors)
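One of this review's central cautions is that shuffled cross-validation leaks future observations into training folds for temporally dependent sensor data. A minimal sketch of that contrast, using scikit-learn's TimeSeriesSplit against ordinary shuffled K-fold on a made-up autocorrelated series (the data, model, and metric here are illustrative assumptions, not the paper's benchmark), might look like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, TimeSeriesSplit, cross_val_score

# Hypothetical autocorrelated sensor series: the target depends on its own recent past.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=1000))                          # random-walk target
X = np.column_stack([np.roll(y, k) for k in (1, 2, 3)])[3:]   # lagged features
y = y[3:]

model = GradientBoostingRegressor(random_state=0)

# Shuffled K-fold lets future observations inform training folds (temporal leakage).
leaky = cross_val_score(model, X, y, scoring="r2",
                        cv=KFold(n_splits=5, shuffle=True, random_state=0))
# TimeSeriesSplit always trains on the past and evaluates on the future.
honest = cross_val_score(model, X, y, scoring="r2", cv=TimeSeriesSplit(n_splits=5))

print(f"shuffled K-fold R^2: {leaky.mean():.3f}")   # typically optimistic
print(f"time-series CV R^2:  {honest.mean():.3f}")  # usually lower and more realistic
```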

46 pages, 1957 KB  
Review
Emerging AI- and Biomarker-Driven Precision Medicine in Autoimmune Rheumatic Diseases: From Diagnostics to Therapeutic Decision-Making
by Ola A. Al-Ewaidat and Moawiah M. Naffaa
Rheumato 2025, 5(4), 17; https://doi.org/10.3390/rheumato5040017 - 17 Nov 2025
Cited by 3 | Viewed by 2874
Abstract
Background/Objectives: Autoimmune rheumatic diseases (AIRDs) are complex, heterogeneous, and relapsing–remitting conditions in which early diagnosis, flare prediction, and individualized therapy remain major unmet needs. This review aims to synthesize recent progress in AI-driven, biomarker-based precision medicine, integrating advances in imaging, multi-omics, and digital health to enhance diagnosis, risk stratification, and therapeutic decision-making in AIRD. Methods: A comprehensive synthesis of 2020–2025 literature was conducted across PubMed, Scopus, and preprint databases, focusing on studies applying artificial intelligence, machine learning, and multimodal biomarkers in rheumatoid arthritis, systemic lupus erythematosus, systemic sclerosis, spondyloarthritis, and related autoimmune diseases. The review emphasizes methodological rigor (TRIPOD+AI, PROBAST+AI, CONSORT-AI/SPIRIT-AI), implementation infrastructures (ACR RISE registry, federated learning), and equity frameworks to ensure generalizable, safe, and ethically governed translation into clinical practice. Results: Emerging evidence demonstrates that AI-integrated imaging enables automated quantification of synovitis, erosions, and vascular inflammation; multi-omics stratification reveals interferon- and B-cell-related molecular programs predictive of therapeutic response; and digital biomarkers from wearables and smartphones extend monitoring beyond the clinic, capturing early flare signatures. Registry-based AI pipelines and federated collaboration now allow multicenter model training without compromising patient privacy. Across diseases, predictive frameworks for biologic and Janus kinase (JAK) inhibitor response show growing discriminatory performance, though prospective and equity-aware validation remain limited. Conclusions: AI-enabled fusion of imaging, molecular, and digital biomarkers is reshaping the diagnostic and therapeutic landscape of AIRD. Standardized validation, interoperability, and governance frameworks are essential to transition these tools from research to real-world precision rheumatology. The convergence of registries, federated learning, and transparent reporting standards marks a pivotal step toward pragmatic, equitable, and continuously learning systems of care. Full article

47 pages, 3715 KB  
Article
Exploring Uncertainty in Medical Federated Learning: A Survey
by Xiaoyang Zeng, Awais Ahmed and Muhammad Hanif Tunio
Electronics 2025, 14(20), 4072; https://doi.org/10.3390/electronics14204072 - 16 Oct 2025
Viewed by 2194
Abstract
The adoption of artificial intelligence (AI) in healthcare requires not only accurate predictions but also a clear understanding of its reliability. In safety-critical domains such as medical imaging and diagnosis, clinicians must assess the confidence in model outputs to ensure safe decision making. Uncertainty quantification (UQ) addresses this need by providing confidence estimates and identifying situations in which models may fail. Such uncertainty estimates enable risk-aware deployment, improve model robustness, and ultimately strengthen clinical trust. Although prior studies have surveyed UQ in centralized learning, a systematic review in the federated learning (FL) context is still lacking. As a privacy-preserving collaborative paradigm, FL enables institutions to jointly train models without sharing raw patient data. However, compared with centralized learning, FL introduces more complex sources of uncertainty. In addition to data uncertainty caused by noisy inputs and model uncertainty from distributed optimization, there also exists distributional uncertainty arising from client heterogeneity and personalized uncertainty associated with site-specific biases. These intertwined uncertainties complicate model reliability and highlight the urgent need for UQ strategies tailored to federated settings. This survey reviews UQ in medical FL. We categorize uncertainties unique to FL and compare them with those in centralized learning. We examine the sources of uncertainty, existing FL architectures, UQ methods, and their integration with privacy-preserving techniques, and we analyze their advantages, limitations, and trade-offs. Finally, we highlight key challenges—scalable UQ under non-IID conditions, federated OOD detection, and clinical validation—and outline future opportunities such as hybrid UQ strategies and personalization. By combining methodological advances in UQ with application perspectives, this survey provides a structured overview to inform the development of more reliable and privacy-preserving FL systems in healthcare. Full article
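As a toy illustration of the kind of uncertainty decomposition such a survey covers, the sketch below treats each client's locally trained classifier as one ensemble member and splits total predictive entropy into an expected (data-related) term and a disagreement (model-related) term; the hospital probability vectors are invented for the example, not taken from the paper:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of probability vectors along the last axis."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def decompose_uncertainty(client_probs):
    """Split predictive uncertainty for one sample into data-related and
    model-related parts, treating each client's model as an ensemble member.
    client_probs: (n_clients, n_classes) softmax outputs."""
    mean_p = client_probs.mean(axis=0)         # federated "ensemble" prediction
    total = entropy(mean_p)                    # predictive entropy
    aleatoric = entropy(client_probs).mean()   # expected per-client entropy (data noise)
    epistemic = total - aleatoric              # disagreement across clients (model uncertainty)
    return total, aleatoric, epistemic

# Hypothetical softmax outputs from three hospitals for a single case.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.5, 0.1],
                  [0.6, 0.3, 0.1]])
print(decompose_uncertainty(probs))
```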

21 pages, 1247 KB  
Review
Bayesian Graphical Models for Multiscale Inference in Medical Image-Based Joint Degeneration Analysis
by Rahul Kumar, Kiran Marla, Puja Ravi, Kyle Sporn, Rohit Srinivas, Swapna Vaja, Alex Ngo and Alireza Tavakkoli
Diagnostics 2025, 15(18), 2295; https://doi.org/10.3390/diagnostics15182295 - 10 Sep 2025
Viewed by 1648
Abstract
Joint degeneration is a major global health issue requiring improved diagnostic and prognostic tools. This review examines whether integrating Bayesian graphical models with multiscale medical imaging can enhance detection, analysis, and prediction of joint degeneration compared to traditional single-scale methods. Recent advances in quantitative MRI, such as T2 mapping, enable early detection of subtle cartilage changes, supporting earlier intervention. Bayesian graphical models provide a flexible framework for representing complex relationships and updating predictions as new evidence emerges. Unlike prior reviews that address Bayesian methods or musculoskeletal imaging separately, this work synthesizes these domains into a unified framework that spans molecular, cellular, tissue, and organ-level analyses, providing methodological guidance and clinical translation pathways. Key topics within Bayesian inference include multiscale analysis, probabilistic graphical models, spatial-temporal modeling, network connectivity analysis, advanced imaging biomarkers, quantitative analysis, quantitative MRI techniques, radiomics and texture analysis, multimodal integration strategies, uncertainty quantification, variational inference approaches, Monte Carlo methods, and model selection and validation, as well as diffusion models for medical imaging and Bayesian joint diffusion models. Additional attention is given to diffusion models for advanced medical image generation, addressing challenges such as limited datasets and patient privacy. Clinical translation and validation requirements are emphasized, highlighting the need for rigorous evaluation to ensure that synthesized or processed images maintain diagnostic accuracy. Finally, this review discusses implementation challenges and outlines future research directions, emphasizing the potential for earlier diagnosis, improved risk assessment, and personalized treatment strategies to reduce the growing global burden of musculoskeletal disorders. Full article
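To make the "updating predictions as new evidence emerges" idea concrete, here is a minimal two-node Bayesian model with sequential posterior updates; the stages, the T2-mapping evidence variable, and all probabilities are illustrative assumptions rather than values from the review:

```python
import numpy as np

# Minimal two-node model: latent degeneration stage -> observed T2-mapping finding.
# All probabilities are illustrative, not values from the review.
prior = np.array([0.6, 0.3, 0.1])              # P(stage = none, mild, severe)
likelihood = np.array([[0.1, 0.4, 0.8],        # P(elevated T2 | stage)
                       [0.9, 0.6, 0.2]])       # P(normal T2   | stage)

def posterior(observation, prior):
    """Update the stage distribution given one observation (0 = elevated T2, 1 = normal)."""
    unnorm = likelihood[observation] * prior
    return unnorm / unnorm.sum()

p = posterior(0, prior)          # first elevated-T2 finding
print(np.round(p, 3))
p = posterior(0, p)              # sequential update with a second finding
print(np.round(p, 3))
```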

30 pages, 578 KB  
Article
Two-Stage Mining of Linkage Risk for Data Release
by Runshan Hu, Yuanguo Lin, Mu Yang, Yuanhui Yu and Vladimiro Sassone
Mathematics 2025, 13(17), 2731; https://doi.org/10.3390/math13172731 - 25 Aug 2025
Viewed by 1062
Abstract
Privacy risk mining, a crucial domain in data privacy protection, endeavors to uncover potential information among datasets that could be linked to individuals’ sensitive data. Existing anonymization and privacy assessment techniques either lack quantitative granularity or fail to adapt to dynamic, heterogeneous data environments. In this work, we propose a unified two-phase linkability quantification framework that systematically measures privacy risks at both the inter-dataset and intra-dataset levels. Our approach integrates unsupervised clustering on attribute distributions with record-level matching to compute interpretable, fine-grained risk scores. By aligning risk measurement with regulatory standards such as the GDPR, our framework provides a practical, scalable solution for safeguarding user privacy in evolving data-sharing ecosystems. Extensive experiments on real-world and synthetic datasets show that our method achieves up to 96.7% precision in identifying true linkage risks, outperforming the compared baseline by 13 percentage points under identical experimental settings. Ablation studies further demonstrate that the hierarchical risk fusion strategy improves sensitivity to latent vulnerabilities, providing more actionable insights than previous privacy gain-based metrics. Full article
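The paper's exact scoring functions are not reproduced in this listing, but a hedged sketch in the same two-stage spirit (first compare attribute distributions across datasets, then estimate record-level linkage within a candidate pair) could look as follows; the column names, toy records, and the 0.5 threshold are invented for illustration:

```python
import numpy as np
import pandas as pd
from scipy.spatial.distance import jensenshannon

def dataset_distance(a: pd.Series, b: pd.Series, bins=20):
    """Stage 1: distance between two datasets' distributions of a shared attribute."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    ha, _ = np.histogram(a, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(b, bins=bins, range=(lo, hi))
    return jensenshannon(ha, hb)

def record_linkage_rate(df_a, df_b, keys):
    """Stage 2: fraction of records in df_a with a unique exact match in df_b
    on the chosen quasi-identifiers."""
    counts = df_a.merge(df_b, on=keys, how="inner").groupby(keys).size()
    return (counts == 1).sum() / max(len(df_a), 1)

# Hypothetical released datasets sharing two quasi-identifiers.
df_a = pd.DataFrame({"age": [34, 51, 29, 42], "zip": ["100", "101", "100", "102"]})
df_b = pd.DataFrame({"age": [34, 29, 60, 42], "zip": ["100", "100", "103", "102"]})

if dataset_distance(df_a["age"], df_b["age"]) < 0.5:   # stage 1: datasets look related
    print("linkage risk score:", record_linkage_rate(df_a, df_b, ["age", "zip"]))
```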

17 pages, 2166 KB  
Article
Dyn-Pri: A Dynamic Privacy Sensitivity Assessment Framework for V2G Interactive Service Scenarios
by Tianbao Liu, Jingyang Wang, Nan Zhang, Jing Guo, Yanyan Tao, Qingyao Li and Zi Li
World Electr. Veh. J. 2025, 16(8), 459; https://doi.org/10.3390/wevj16080459 - 11 Aug 2025
Viewed by 656
Abstract
In V2G service operations, highly efficient data sharing among participants supports grid load balancing and renewable energy integration. However, data quality and sharing efficiency greatly rely on entities’ willingness to share. Moreover, there is no sound framework for assessing the privacy sensitivity of shared data, which strongly affects that willingness. Existing privacy sensitivity assessment methods rely on static privacy attributes and fail to adequately assess privacy threats within V2G service scenarios. To address these limitations, this paper proposes Dyn-Pri, a novel multi-dimensional privacy sensitivity assessment framework for large-scale V2G interactive service scenarios. Dyn-Pri features an adaptive, comprehensive multi-dimensional quantification model that integrates both the intrinsic effects of the three privacy elements and the dynamic, intertwining influences among them. Experimental validation in three typical V2G scenarios demonstrates that Dyn-Pri offers significant advantages in the precision of sensitivity assessments and in balancing data utilization against privacy protection, enhancing renewable energy integration efficiency while ensuring cross-domain data security. Full article

18 pages, 1572 KB  
Article
A Distributed Multi-Microgrid Cooperative Energy Sharing Strategy Based on Nash Bargaining
by Shi Su, Qian Zhang and Qingyang Xie
Electronics 2025, 14(15), 3155; https://doi.org/10.3390/electronics14153155 - 7 Aug 2025
Cited by 4 | Viewed by 1035
Abstract
With the rapid development of energy transformation, the proportion of new energy is increasing, and the efficient trading mechanism of multi-microgrids can realize energy sharing to improve the consumption rate of new energy. A distributed multi-microgrid cooperative energy sharing strategy is proposed based on Nash bargaining. Firstly, by comprehensively considering the adjustable heat-to-electrical ratio, ladder-type positive and negative carbon trading, peak–valley electricity price and demand response, a multi-microgrid system with wind–solar-storage-load and combined heat and power is constructed. Then, a multi-microgrid cooperative game optimization framework is established based on Nash bargaining, and the complex nonlinear problem is decomposed into two stages to be solved. In the first stage, the cost minimization problem of multi-microgrids is solved based on the alternating direction multiplier method to maximize consumption rate and protect privacy. In the second stage, through the established contribution quantification model, Nash bargaining theory is used to fairly distribute the benefits of cooperation. The simulation results of three typical microgrids verify that the proposed strategy has good convergence properties and computational efficiency. Compared with the independent operation, the proposed strategy reduces the cost by 41% and the carbon emission by 18,490 kg, thus realizing low-carbon operation and optimal economic dispatch. Meanwhile, the power supply pressure of the main grid is reduced through energy interaction, thus improving the utilization rate of renewable energy. Full article
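For the second-stage benefit allocation, a symmetric Nash bargaining split can be written as maximizing the product of each microgrid's cost reduction subject to budget balance. The sketch below solves that small problem directly with scipy; the cost figures are placeholders, and the paper's contribution-quantification weighting is not modeled:

```python
import numpy as np
from scipy.optimize import minimize

independent_cost = np.array([120.0, 95.0, 80.0])   # placeholder stand-alone costs
cooperative_total = 250.0                          # placeholder total cost after sharing
surplus = independent_cost.sum() - cooperative_total

def neg_log_nash_product(payments):
    gains = independent_cost - payments            # each microgrid's cost reduction
    if np.any(gains <= 0):
        return np.inf
    return -np.sum(np.log(gains))                  # maximizing the product <=> sum of logs

# Payments by the three microgrids must cover the cooperative total cost.
budget = {"type": "eq", "fun": lambda p: p.sum() - cooperative_total}
res = minimize(neg_log_nash_product, x0=independent_cost - surplus / 3, constraints=[budget])

print("payments:        ", np.round(res.x, 2))
print("cost reductions: ", np.round(independent_cost - res.x, 2))  # equal under symmetric bargaining
```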

34 pages, 712 KB  
Review
Transformation of Demand-Response Aggregator Operations in Future US Electricity Markets: A Review of Technologies and Open Research Areas with Game Theory
by Styliani I. Kampezidou and Dimitri N. Mavris
Appl. Sci. 2025, 15(14), 8066; https://doi.org/10.3390/app15148066 - 20 Jul 2025
Viewed by 1568
Abstract
The decarbonization of electricity generation by 2030 and the realization of a net-zero economy by 2050 are central to the United States’ climate strategy. However, large-scale renewable integration introduces operational challenges, including extreme ramping, unsafe dispatch, and price volatility. This review investigates how demand–response (DR) aggregators and distributed loads can support these climate goals while addressing critical operational challenges. We hypothesize that current DR aggregator frameworks fall short in the areas of distributed load operational flexibility, scalability with the number of distributed loads (prosumers), prosumer privacy preservation, DR aggregator and prosumer competition, and uncertainty management, limiting their potential to enable large-scale prosumer participation. Using a systematic review methodology, we evaluate existing DR aggregator and prosumer frameworks through the proposed FCUPS criteria—flexibility, competition, uncertainty quantification, privacy, and scalability. The main results highlight significant gaps in current frameworks: limited support for decentralized operations; inadequate privacy protections for prosumers; and insufficient capabilities for managing competition, uncertainty, and flexibility at scale. We conclude by identifying open research directions, including the need for game-theoretic and machine learning approaches that ensure privacy, scalability, and robust market participation. Addressing these gaps is essential to shape future research agendas and to enable DR aggregators to contribute meaningfully to US climate targets. Full article

22 pages, 8849 KB  
Article
Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness
by Junhui Song, Zhangqi Zheng, Afei Li, Zhixin Xia and Yongshan Liu
Appl. Sci. 2025, 15(14), 7843; https://doi.org/10.3390/app15147843 - 13 Jul 2025
Viewed by 1951
Abstract
Federated learning (FL) has emerged as a prominent distributed machine learning paradigm that facilitates collaborative model training across multiple clients while ensuring data privacy. Despite its growing adoption in practical applications, performance degradation caused by data heterogeneity—commonly referred to as the non-independent and identically distributed (non-IID) nature of client data—remains a fundamental challenge. To mitigate this issue, a heterogeneity-aware and robust FL framework is proposed to enhance model generalization and stability under non-IID conditions. The proposed approach introduces two key innovations. First, a heterogeneity quantification mechanism is designed based on statistical feature distributions, enabling the effective measurement of inter-client data discrepancies. This metric is further employed to guide the model aggregation process through a heterogeneity-aware weighted strategy. Second, a multi-loss optimization scheme is formulated, integrating classification loss, heterogeneity loss, feature center alignment, and L2 regularization for improved robustness against distributional shifts during local training. Comprehensive experiments are conducted on four benchmark datasets, including CIFAR-10, SVHN, MNIST, and NotMNIST under Dirichlet-based heterogeneity settings (alpha = 0.1 and alpha = 0.5). The results demonstrate that the proposed method consistently outperforms baseline approaches such as FedAvg, FedProx, FedSAM, and FedMOON. Notably, an accuracy improvement of approximately 4.19% over FedSAM is observed on CIFAR-10 (alpha = 0.5), and a 1.82% gain over FedMOON on SVHN (alpha = 0.1), along with stable enhancements on MNIST and NotMNIST. Furthermore, ablation studies confirm the contribution and necessity of each component in addressing data heterogeneity. Full article
(This article belongs to the Special Issue Cyber-Physical Systems Security: Challenges and Approaches)
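A hedged sketch of heterogeneity-aware weighted aggregation is given below; the paper's actual statistical-distribution metric is not specified in this listing, so the distance of each client's feature mean from the global mean stands in for it, and all numbers are illustrative:

```python
import numpy as np

def heterogeneity_aware_aggregate(client_params, client_feature_means, n_samples, beta=1.0):
    """Weighted average of client parameter vectors. Each client's sample share is
    scaled down by a heterogeneity penalty: here, the distance of its feature mean
    from the global mean (a stand-in for the paper's distribution-based metric)."""
    feats = np.asarray(client_feature_means, dtype=float)
    global_mean = np.average(feats, axis=0, weights=n_samples)
    divergence = np.linalg.norm(feats - global_mean, axis=1)     # per-client heterogeneity
    weights = np.asarray(n_samples, dtype=float) * np.exp(-beta * divergence)
    weights /= weights.sum()
    return np.average(np.asarray(client_params, dtype=float), axis=0, weights=weights)

# Three hypothetical clients: parameter vectors, feature means, and sample counts.
params = [[0.9, 0.1], [1.1, -0.1], [2.5, 1.0]]    # third client drifts from the others
means = [[0.0, 0.0], [0.1, 0.0], [3.0, 2.0]]
print(heterogeneity_aware_aggregate(params, means, n_samples=[100, 120, 80]))
```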

28 pages, 2795 KB  
Article
A Data Protection Method for the Electricity Business Environment Based on Differential Privacy and Federal Incentive Mechanisms
by Xu Zhou, Hongshan Luo, Simin Chen and Yuling He
Energies 2025, 18(13), 3403; https://doi.org/10.3390/en18133403 - 27 Jun 2025
Viewed by 540
Abstract
As the power industry develops, accurately assessing the level of development of the electricity business environment is of great significance. However, traditional evaluation systems have limitations, with the issue of “data silos” being prominent, and user privacy under federated learning is also at risk. This paper proposes a federated learning-based data protection method for the electricity business environment to address these challenges. Based on the World Bank’s B-READY framework, this paper constructs an electricity business environment evaluation system containing nine indicators, focusing on three aspects: electricity regulations, public services, and operational efficiency. The indicators are weighted using the Sequence Relation and Entropy Weight Method. To address the issue of sensitive data protection, we first use federated learning technology to build a distributed modeling framework, ensuring that raw data never leaves the local environment during the collaborative modeling process. Next, we embed a differential privacy mechanism in the model parameter transmission stage, perturbing the model parameters with controlled noise. Finally, an incentive mechanism based on contribution quantification is implemented to encourage participation from all parties. This paper conducts experiments using data from Shenzhen City, Guangdong Province. Compared with the FNN model and the SVR model, the MLP model reduces MAE by 78.9% and 94.12%, respectively, and increases R2 by 37.95% and 55.62%, respectively. These results demonstrate the superiority of the proposed method. Full article
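The differential-privacy step described here, adding controlled noise to model parameters before they leave the client, can be sketched as a clip-and-perturb routine; the clipping norm and noise multiplier below are illustrative placeholders rather than the paper's calibration:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's parameter update to a fixed L2 norm, then add Gaussian noise
    scaled to that norm (the Gaussian mechanism commonly used in DP federated learning)."""
    rng = rng if rng is not None else np.random.default_rng()
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Hypothetical local update as it would be sent to the aggregation server.
print(privatize_update([0.8, -1.6, 0.3], rng=np.random.default_rng(42)))
```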

22 pages, 3388 KB  
Article
Aggregating Image Segmentation Predictions with Probabilistic Risk Control Guarantees
by Joaquin Alvarez and Edgar Roman-Rangel
Mathematics 2025, 13(11), 1711; https://doi.org/10.3390/math13111711 - 23 May 2025
Viewed by 1198
Abstract
In this work, we introduce a framework to combine arbitrary image segmentation algorithms from different agents under data privacy constraints to produce an aggregated prediction set satisfying finite-sample risk control guarantees. We leverage distribution-free uncertainty quantification techniques in order to aggregate deep neural networks for image segmentation tasks. Our method can be applied in settings to merge the predictions of multiple agents with arbitrarily dependent prediction sets. Moreover, we perform experiments in medical imaging tasks to illustrate our proposed framework. Our results show that the framework reduced the empirical false positive rate by 50% without compromising the false negative rate, with respect to the false positive rate of any of the constituent models in the aggregated prediction algorithm. Full article
(This article belongs to the Special Issue Artificial Intelligence: Deep Learning and Computer Vision)
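A simplified sketch of distribution-free risk calibration in the style of conformal risk control is shown below; it is not the authors' exact aggregation rule, and the toy scores and masks are synthetic:

```python
import numpy as np

def calibrate_threshold(probs, masks, alpha=0.1):
    """Return the largest score threshold whose calibrated false-negative rate satisfies
    the conformal-risk-control bound (n/(n+1)) * mean_risk + 1/(n+1) <= alpha.
    probs: (n, H, W) aggregated per-pixel scores; masks: (n, H, W) boolean ground truth."""
    n = len(probs)
    for lam in np.linspace(0.99, 0.0, 100):           # lower threshold => larger prediction sets
        pred = probs >= lam
        fnr = np.array([(m & ~p).sum() / max(m.sum(), 1) for p, m in zip(pred, masks)])
        if (n / (n + 1)) * fnr.mean() + 1.0 / (n + 1) <= alpha:
            return lam
    return 0.0                                        # fall back to predicting everything

# Toy calibration data; the scores could come from averaging several agents' models.
rng = np.random.default_rng(1)
masks = rng.random((50, 32, 32)) < 0.2
probs = np.clip(masks * 0.8 + rng.random((50, 32, 32)) * 0.3, 0.0, 1.0)
print("calibrated threshold:", calibrate_threshold(probs, masks, alpha=0.1))
```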

15 pages, 4921 KB  
Article
Thin Cells of Polymer-Modified Liquid Crystals Described by Voronoi Diagrams
by Felicity Woolhouse and Ingo Dierking
Materials 2025, 18(5), 1106; https://doi.org/10.3390/ma18051106 - 28 Feb 2025
Cited by 2 | Viewed by 1042
Abstract
We investigated patterns formed during the polymerization process of bifunctional monomers in a liquid crystal for both large polymer concentrations (polymer-dispersed liquid crystals, PDLC) and small concentrations (polymer-stabilized liquid crystals, PSLC). The resulting experimental patterns are reminiscent of Voronoi diagrams, so a reverse Voronoi algorithm was developed that provides the seed locations of cells, thus allowing a computational reproduction of the experimental patterns. Several metrics were developed to quantify the commonality between the faithful experimental patterns and the idealized and generated ones. This led to descriptions of the experimental patterns with accuracies better than 90% and showed that the curvature or concavity of the cell edges was below 2%. Possible reasons for the discrepancies between the original and generated Voronoi diagrams are discussed. The introduced algorithm and quantification of the patterns could be transferred to many other experimental problems, for example, melting of thin polymer films, ultra-thin metal films, or bio-membranes. The discrepancies between the experimental and ideal Voronoi diagrams are quantified, which may be useful in the quality control of privacy windows, reflective displays, or smart glass. Full article
(This article belongs to the Special Issue The 15th Anniversary of Materials—Recent Advances in Soft Matter)
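The reverse Voronoi algorithm itself is not given in this listing, but one simple commonality metric of the kind described (how much of an experimental pattern a reconstructed seed set reproduces) can be sketched with nearest-seed labeling of a pixel grid; the grid size and perturbation level are arbitrary choices:

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_labels(seeds, grid_size=200):
    """Label every pixel of a unit-square grid with the index of its nearest seed."""
    xs = (np.arange(grid_size) + 0.5) / grid_size
    gx, gy = np.meshgrid(xs, xs)
    pixels = np.column_stack([gx.ravel(), gy.ravel()])
    _, labels = cKDTree(seeds).query(pixels)
    return labels.reshape(grid_size, grid_size)

def pattern_agreement(true_seeds, estimated_seeds, grid_size=200):
    """Fraction of pixels falling in corresponding cells, after matching each
    estimated seed to its nearest true seed."""
    _, match = cKDTree(true_seeds).query(estimated_seeds)
    true_lab = voronoi_labels(true_seeds, grid_size)
    est_lab = voronoi_labels(estimated_seeds, grid_size)
    return np.mean(match[est_lab] == true_lab)

rng = np.random.default_rng(3)
true_seeds = rng.random((30, 2))
estimated_seeds = true_seeds + rng.normal(0.0, 0.01, true_seeds.shape)  # imperfect reconstruction
print(f"cell agreement: {pattern_agreement(true_seeds, estimated_seeds):.3f}")
```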

15 pages, 307 KB  
Article
Proactive Data Categorization for Privacy in DevPrivOps
by Catarina Silva, João P. Barraca and Paulo Salvador
Information 2025, 16(3), 185; https://doi.org/10.3390/info16030185 - 28 Feb 2025
Cited by 4 | Viewed by 1139
Abstract
Assessing privacy within data-driven software is challenging due to its subjective nature and the diverse array of privacy-enhancing technologies. A simplistic personal/non-personal data classification fails to capture the nuances of data specifications and potential privacy vulnerabilities. Robust, privacy-focused data categorization is vital for a deeper understanding of data characteristics and the evaluation of potential privacy risks. We introduce a framework for Privacy-sensitive Data Categorization (PsDC), which accounts for data inference from multiple sources and behavioral analysis. Our approach uses a hierarchical, multi-tiered tree structure, encompassing direct data categorization, dynamic tags, and structural attributes. PsDC is a data-categorization model designed for integration with the DevPrivOps methodology and for use in privacy-quantification models. Our analysis demonstrates its applicability in network-management infrastructure, service and application deployment, and user-centered design interfaces. We illustrate how PsDC can be implemented in these scenarios to mitigate privacy risks. We also highlight the importance of proactively reducing privacy risks by ensuring that developers and users understand the privacy “value” of data. Full article
(This article belongs to the Section Information Security and Privacy)
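A minimal sketch of a hierarchical, multi-tiered categorization tree in the spirit of PsDC, combining a static category, dynamic tags, and structural attributes, is given below; the category labels and tags are illustrative and not the PsDC taxonomy:

```python
from dataclasses import dataclass, field

@dataclass
class DataCategory:
    """One node of a hierarchical categorization tree: a named category with
    dynamic tags, structural attributes, and sub-categories as children."""
    name: str
    tags: set = field(default_factory=set)           # dynamic, behaviour-derived tags
    attributes: dict = field(default_factory=dict)   # structural attributes (e.g., granularity)
    children: list = field(default_factory=list)

    def add(self, child: "DataCategory") -> "DataCategory":
        self.children.append(child)
        return child

    def find(self, name: str):
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit is not None:
                return hit
        return None

# Illustrative tree; these labels are not the PsDC taxonomy itself.
root = DataCategory("data")
location = root.add(DataCategory("location", tags={"inferable-from:ip"},
                                 attributes={"granularity": "city"}))
location.add(DataCategory("home-address", tags={"direct-identifier"}))
print(root.find("home-address").tags)
```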