Search Results (43)

Search Parameters:
Keywords = source-location privacy

21 pages, 569 KB  
Review
Wearables in Healthcare Organizations: Implications for Occupational Health, Organizational Performance, and Economic Outcomes
by Daniele Virgillito, Pierluigi Catalfo and Caterina Ledda
Healthcare 2025, 13(18), 2289; https://doi.org/10.3390/healthcare13182289 - 12 Sep 2025
Viewed by 621
Abstract
Background: Healthcare organizations face major challenges in protecting staff health and ensuring business continuity, particularly in high-risk settings. Wearable technologies are emerging tools to monitor occupational health indicators, improve staff safety, and strengthen organizational resilience. Objectives: This scoping review aimed to map the current evidence on wearable technologies in healthcare, focusing on their impact on occupational health, staff safety, and economic outcomes, as well as barriers and facilitators to their adoption. Methods: A systematic search was conducted in PubMed, Scopus, Web of Science, Embase, CINAHL, sources from inception to July 2025. Studies were included if they evaluated wearable technologies used by healthcare workers and assessed outcomes related to occupational health, organizational resilience, absenteeism, presenteeism, or cost-effectiveness. The review followed the Arksey and O’Malley framework and PRISMA-ScR guidelines. Results: 31 studies met the inclusion criteria. Most were pilot or feasibility studies; only two randomized controlled trials were identified. The wearable technologies evaluated included continuous physiological monitoring devices, real-time location systems, hands-free communication tools, and consumer-grade devices. Evidence suggests potential benefits in improving staff safety, reducing stress and burnout, and enhancing workflow efficiency. However, economic evidence was limited and outcomes varied considerably. Barriers included alert fatigue, privacy concerns, interoperability challenges, and limited staff engagement. Facilitators included leadership support, user-centered design, and adequate infrastructure. Conclusions: Wearable technologies show promise for supporting occupational health and organizational resilience in healthcare, but evidence remains fragmented.
(This article belongs to the Section Artificial Intelligence in Healthcare)

41 pages, 9508 KB  
Article
CTAARCHS: Cloud-Based Technologies for Archival Astronomical Research Contents and Handling Systems
by Stefano Gallozzi, Georgios Zacharis, Federico Fiordoliva and Fabrizio Lucarelli
Metrics 2025, 2(3), 18; https://doi.org/10.3390/metrics2030018 - 8 Sep 2025
Viewed by 353
Abstract
This paper presents a flexible approach to a multipurpose, heterogeneous archive and data management system model that merges the robustness of legacy grid-based technologies with modern cloud and edge computing paradigms. It leverages innovations driven by big data, IoT, AI, and machine learning to create an adaptive data storage and processing framework. In today’s digital age, where data are the new intangible gold, the “gold rush” lies in managing and storing massive datasets effectively—especially when these data serve governmental or commercial purposes, raising concerns about privacy and data misuse by third-party aggregators. Astronomical data, in particular, require this same thoughtful approach. Scientific discovery increasingly depends on efficient extraction and processing of large datasets. Distributed archival models, unlike centralized warehouses, offer scalability by allowing data to be accessed and processed across locations via cloud services. Incorporating edge computing further enables real-time access with reduced latency. Major astronomical projects must also avoid common single points of failure (SPOFs), often resulting from suboptimal technological choices driven by collaboration politics or In-Kind Contributions (IKCs). These missteps can hinder innovation and long-term project success. The principal goal of this work is to outline best practices in archival and data management projects—from policy development and task planning to use-case definition and implementation. Only after these steps can a coherent selection of hardware, software, or virtual environments be made. The proposed model—CTAARCHS (Cloud-based Technologies for Astronomical Archiving Research Contents and Handling Systems)—is an open-source, multidisciplinary platform supporting big data needs in astronomy. It promotes broad institutional collaboration, offering code repositories and sample data for immediate use.

19 pages, 4950 KB  
Article
Google Location History as an Alternative Data Source for Understanding Travel Behavior in Medan, Binjai, and Deli Serdang (Mebidang), Indonesia
by Arif Wismadi, Mohamad Rachmadian Narotama, Gary Haq, Steve Cinderby, Deni Prasetio Nugroho and Jan Prabowo Harmanto
Future Transp. 2025, 5(2), 50; https://doi.org/10.3390/futuretransp5020050 - 1 May 2025
Viewed by 1811
Abstract
The performance of urban transport is a critical aspect of a city’s functionality, which needs to be supported by innovative data sources to analyze travel patterns. This study explores the use of Google Location History (GLH) as a participatory geographic information system for mobility surveys, offering a cost-effective and more detailed alternative to traditional approaches. GLH is a novel data source with high potential, but it remains underutilized and under-researched, especially in developing countries. This study applies a new approach to GLH data collection and processing. Data were collected from 420 respondents in Medan, Binjai, and Deli Serdang (Mebidang) in Indonesia, to examine urban travel patterns, including trip distances, modes, and purposes, while addressing issues of data accuracy, privacy, and representation. GLH provides granular insights into mobility, reducing biases associated with self-reported surveys and identifying discrepancies between stated and actual transport usage. The findings highlight GLH’s potential for understanding spatial mobility patterns linked to demographic characteristics and travel purpose in more detail. However, technical challenges, such as data anomalies and the reliance on two devices for data collection, underscore the need to improve location readings and develop add-on tools capable of direct data export for large-scale mobility surveys. This study advances the application of GLH in mobility research, demonstrating its potential use and challenges for large-scale mobility surveys. Future research should address privacy concerns and optimize data collection to enable more inclusive and sustainable urban mobility strategies.
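Trip distance from a GLH-style trace reduces to summing great-circle distances between successive location fixes. Below is a minimal sketch of that computation using the haversine formula; the coordinates and function names are illustrative, not taken from the study.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ≈ 6371 km

def trip_length_km(points):
    """Sum segment distances along an ordered trace of (lat, lon) fixes."""
    return sum(haversine_km(*p, *q) for p, q in zip(points, points[1:]))

# Hypothetical trace of three location fixes through central Medan.
trace = [(3.5897, 98.6731), (3.5952, 98.6722), (3.6000, 98.6785)]
print(round(trip_length_km(trace), 2))
```

In practice one would first segment the GLH trace into trips (e.g., by dwell time) before summing distances per trip.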

16 pages, 2334 KB  
Article
PhiShield: An AI-Based Personalized Anti-Spam Solution with Third-Party Integration
by Hyunsol Mun, Jeeeun Park, Yeonhee Kim, Boeun Kim and Jongkil Kim
Electronics 2025, 14(8), 1581; https://doi.org/10.3390/electronics14081581 - 13 Apr 2025
Cited by 2 | Viewed by 1310
Abstract
In this paper, we present PhiShield, a spam filter system designed to offer real-time email collection and analysis at the end node. Most existing spam detection systems have focused more on detection accuracy than on usability and privacy. PhiShield enhances both by carefully choosing a deployment location that enables personalization and proactive defense. The system is designed for compatibility and proactive phishing prevention: it is implemented as a browser extension and works with third-party email services such as Gmail. Because it runs in the browser, it assesses emails before a user clicks on them. When a phishing email is detected, it shows the user a personalized report instead of the content of the phishing email, providing transparency into phishing mechanisms and helping users mitigate phishing risks in practice. We test various locally trained Artificial Intelligence (AI)-based detection models and show that a Long Short-Term Memory (LSTM) model is suitable for practical phishing email detection (>98% accuracy) with a reasonable training cost. This means that an organization or user can develop their own private detection rules and use them to supplement the third-party email service. We implement PhiShield to show the scalability and practicality of our solution and provide a performance evaluation on approximately 300,000 emails from various sources.
(This article belongs to the Special Issue New Technologies for Network Security and Anomaly Detection)

19 pages, 2885 KB  
Article
Quantitative and Spatially Explicit Clustering of Urban Grocery Shoppers in Montreal: Integrating Loyalty Data with Synthetic Population
by Duo Zhang, Laurette Dubé, Antonia Gieschen, Catherine Paquet and Raja Sengupta
ISPRS Int. J. Geo-Inf. 2025, 14(4), 159; https://doi.org/10.3390/ijgi14040159 - 6 Apr 2025
Viewed by 794
Abstract
This study integrates customer loyalty program data with a synthetic population to analyze grocery shopping behaviours in Montreal. Using clustering algorithms, we classify 295,631 loyalty program members into seven distinct consumer segments based on behavioural and sociodemographic attributes. The findings reveal significant heterogeneity in consumer behaviour, emphasizing the impact of urban geography on shopping decisions. This segmentation also provides valuable insights for retailers optimizing store locations and marketing strategies and for policymakers aiming to enhance urban accessibility. Additionally, our approach strengthens agent-based model (ABM) simulations by incorporating demographic and behavioural diversity, leading to more realistic consumer representations. While integrating loyalty data with synthetic populations mitigates privacy concerns, challenges remain regarding data sparsity and demographic inconsistencies. Future research should explore multi-source data integration and advanced clustering methods. Overall, this study contributes to geographically explicit modelling, demonstrating the effectiveness of combining behavioural and synthetic demographic data in urban retail analysis.
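The abstract does not name the clustering algorithms used; as a sketch of the general approach, plain k-means on two hypothetical behavioural attributes (weekly visit frequency and average basket size) illustrates how shoppers could be segmented. All data and feature choices below are invented for illustration.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means: returns (centroids, labels) for a list of feature tuples."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda j: sum((p[d] - centroids[j][d]) ** 2 for d in range(len(p))))
                  for p in points]
        # Update step: each centroid moves to the mean of its members.
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids, labels

# Hypothetical shopper features: (weekly visit frequency, average basket size in $).
shoppers = [(1, 20), (2, 25), (1, 22), (6, 80), (7, 95), (6, 90)]
cents, labs = kmeans(shoppers, k=2)
```

Real segmentation pipelines would standardize features and select k with a validity index rather than fixing it in advance.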

16 pages, 2444 KB  
Article
Enhanced Privacy-Preserving Architecture for Fundus Disease Diagnosis with Federated Learning
by Raymond Jiang, Yulia Kumar and Dov Kruger
Appl. Sci. 2025, 15(6), 3004; https://doi.org/10.3390/app15063004 - 10 Mar 2025
Cited by 1 | Viewed by 1126
Abstract
In recent years, advances in diagnosing and classifying diseases using machine learning (ML) have grown exponentially. However, due to the many privacy regulations regarding personal data, pooling together data from multiple sources and storing them in a single (centralized) location for traditional ML model training are often infeasible. Federated learning (FL), a collaborative learning paradigm, can sidestep this major pitfall by creating a global ML model that is trained by aggregating model weights from individual models that are separately trained on their own data silos, therefore avoiding most data privacy concerns. This study addresses the centralized data issue with FL by applying a novel DataWeightedFed architectural approach for effective fundus disease diagnosis from ophthalmic images. It includes a novel method for aggregating model weights by comparing the size of each model’s data and taking a dynamically weighted average of all the model’s weights. Experimental results showed a small average 1.85% loss in accuracy when training using FL compared to centralized ML model systems, a nearly 92% improvement over the conventional 55% accuracy loss. The obtained results demonstrate that this study’s FL architecture can maximize both privacy preservation and accuracy for ML in fundus disease diagnosis and provide a secure, collaborative ML model training solution within the eye healthcare space.
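The aggregation rule described — a dynamically weighted average of model weights keyed to each silo's data size — is, in its simplest form, a FedAvg-style weighted mean. A minimal sketch under that reading, assuming flattened weight vectors (all names and numbers are illustrative):

```python
def size_weighted_average(client_weights, client_sizes):
    """Aggregate per-client model weight vectors, weighting each client
    by the size of its local dataset (larger silos contribute more)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Three hypothetical clients with flattened 2-parameter models and silo sizes.
weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
sizes = [100, 300, 600]
global_w = size_weighted_average(weights, sizes)
print(global_w)  # [0.4, 0.6]
```

Only the weight vectors and silo sizes cross the network; the ophthalmic images themselves stay in their silos.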

39 pages, 413 KB  
Review
Federated Learning: Navigating the Landscape of Collaborative Intelligence
by Konstantinos Lazaros, Dimitrios E. Koumadorakis, Aristidis G. Vrahatis and Sotiris Kotsiantis
Electronics 2024, 13(23), 4744; https://doi.org/10.3390/electronics13234744 - 30 Nov 2024
Cited by 21 | Viewed by 8577
Abstract
As data become increasingly abundant and diverse, their potential to fuel machine learning models grows ever vaster. However, traditional centralized learning approaches, which require aggregating data in a single location, face significant challenges. Privacy concerns, stringent data protection regulations like the GDPR, and the high cost of data transmission hinder the feasibility of centralizing sensitive data from disparate sources such as hospitals, financial institutions, and personal devices. Federated Learning (FL) addresses these issues by enabling collaborative model training without requiring raw data to leave its origin. This decentralized approach ensures data privacy, reduces transmission costs, and allows organizations to harness the collective intelligence of distributed data while maintaining compliance with ethical and legal standards. This review delves into FL’s current applications and its potential to reshape IoT systems into more collaborative, privacy-centric, and flexible frameworks, aiming to enlighten and motivate those navigating the confluence of machine learning and IoT advancements.
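A complete federated round — local training on private silos followed by size-weighted averaging of the returned weights — can be sketched on a toy one-parameter linear model. Everything below (silos, hyperparameters, the y = 3x relationship) is invented for illustration.

```python
def local_step(w, data, lr=0.01, epochs=20):
    """Client-side training: gradient descent for the model y = w * x on one private silo."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, silos):
    """One FL round: broadcast the global weight, train locally on each silo,
    then average the returned weights proportionally to silo size."""
    local_ws = [local_step(w_global, silo) for silo in silos]
    total = sum(len(silo) for silo in silos)
    return sum(w * len(silo) for w, silo in zip(local_ws, silos)) / total

# Two hypothetical silos (e.g., two hospitals); the underlying relationship is y = 3x.
silo_a = [(1.0, 3.0), (2.0, 6.0)]
silo_b = [(3.0, 9.0), (4.0, 12.0), (5.0, 15.0)]

w = 0.0
for _ in range(5):
    w = federated_round(w, [silo_a, silo_b])
print(round(w, 3))  # 3.0
```

Note that only the scalar weight travels between clients and server; the raw (x, y) pairs never leave their owners, which is the privacy property the review describes.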
15 pages, 2848 KB  
Article
Improving the Accuracy of mmWave Radar for Ethical Patient Monitoring in Mental Health Settings
by Colm Dowling, Hadi Larijani, Mike Mannion, Matt Marais and Simon Black
Sensors 2024, 24(18), 6074; https://doi.org/10.3390/s24186074 - 19 Sep 2024
Cited by 2 | Viewed by 3189
Abstract
Monitoring patient safety in high-risk mental health environments is a challenge for clinical staff. There has been a recent increase in the adoption of contactless sensing solutions for remote patient monitoring. mmWave radar is a technology that has high potential in this field due to its low cost and protection of privacy; however, it is prone to multipath reflections and other sources of environmental noise. This paper discusses some of the challenges in mmWave remote sensing applications for patient safety in mental health wards. In line with these challenges, we propose a novel low-data solution to mitigate the impact of multipath reflections and other sources of noise in mmWave sensing. Our solution uses an unscented Kalman filter for target tracking over time and analyses features of movement to determine whether targets are human or not. We chose a commercial off-the-shelf radar and compared the accuracy and reliability of sensor measurements before and after applying our solution. Our results show a marked decrease in false positives and false negatives during human target tracking, as well as an improvement in spatial location detection in a two-dimensional space. These improvements demonstrate how a simple low-data solution can improve existing mmWave sensors, making them more suitable for patient safety solutions in high-risk environments.
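The paper uses an unscented Kalman filter; as a simplified linear stand-in, a scalar Kalman filter with a random-walk motion model shows the same predict/update loop that suppresses measurement noise. The noise variances and readings below are illustrative, not the paper's.

```python
def kalman_smooth(measurements, q=0.01, r=4.0):
    """Scalar Kalman filter with a random-walk motion model.
    q: process-noise variance, r: measurement-noise variance (illustrative tuning)."""
    x, p = measurements[0], 1.0     # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                      # predict: uncertainty grows between frames
        k = p / (p + r)             # Kalman gain: balance prediction vs. measurement
        x += k * (z - x)            # update the estimate with the innovation
        p *= 1 - k                  # the updated estimate is more certain
        estimates.append(x)
    return estimates

# Hypothetical noisy mmWave range readings (metres) for a stationary target at 2.0 m.
noisy = [2.3, 1.8, 2.5, 1.7, 2.2, 1.9, 2.4, 1.6]
smooth = kalman_smooth(noisy)
```

The unscented variant replaces the linear predict/update algebra with sigma-point propagation so that nonlinear motion and measurement models can be tracked; the gain/innovation structure is the same.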

14 pages, 5936 KB  
Article
GeoLocator: A Location-Integrated Large Multimodal Model (LMM) for Inferring Geo-Privacy
by Yifan Yang, Siqin Wang, Daoyang Li, Shuju Sun and Qingyang Wu
Appl. Sci. 2024, 14(16), 7091; https://doi.org/10.3390/app14167091 - 13 Aug 2024
Cited by 5 | Viewed by 3491
Abstract
To ensure the sustainable development of artificial intelligence (AI) applications in urban and geospatial science, it is important to protect geographic privacy, or geo-privacy, which refers to an individual’s geographic location details. As a crucial aspect of personal security, geo-privacy plays a key role not only in individual protection but also in maintaining ethical standards in geoscientific practices. Despite its importance, geo-privacy is often not sufficiently addressed in daily activities. With the increasing use of large multimodal models (LMMs) such as GPT-4 for open-source intelligence (OSINT), the risks related to geo-privacy breaches have significantly escalated. This study introduces GeoLocator, a novel GPT-4-based model integrated with location capabilities, and conducts four experiments to evaluate its ability to accurately infer location information from images and social media content. The results demonstrate that GeoLocator can generate specific geographic details with high precision, thereby increasing the potential for inadvertent exposure of sensitive geospatial information. This highlights the dual challenges posed by online data-sharing and information-gathering technologies in the context of geo-privacy. We conclude with a discussion of the broader impacts of GeoLocator and of our findings on individuals and communities, emphasizing the urgent need for increased awareness and protective measures against geo-privacy breaches in the era of advancing AI and widespread social media usage. This contribution thus advocates for sustainable and responsible geoscientific practices.
(This article belongs to the Special Issue Artificial Intelligence and the Future of Smart Cities)

22 pages, 1841 KB  
Article
A Data Attack Detection Framework for Cryptography-Based Secure Aggregation Methods in 6G Intelligent Applications
by Zhe Sun, Junxi Liang, Lihua Yin, Pingchuan Xu, Chao Li, Junping Wan and Hanyi Wang
Electronics 2024, 13(11), 1999; https://doi.org/10.3390/electronics13111999 - 21 May 2024
Cited by 3 | Viewed by 1741
Abstract
Researchers attribute a wide variety of characteristics to the eagerly anticipated 6G networks. A pivotal one is the deep integration of sensing and networking, along with intelligent network applications operating on top of this infrastructure. To optimally harness the data collected by sensors distributed across various locations, the training paradigm of the new generation of 6G intelligent applications aligns naturally with the federated-learning paradigm. The exposure of gradients in federated learning to inversion attacks is a critical concern. To address this, cryptography-based secure aggregation methods are commonly implemented to protect the privacy and confidentiality of gradients. However, the semantic meaninglessness of encrypted data makes it difficult to assess the correctness, availability, and source legitimacy of participants’ data. In this paper, we propose a data attack detection framework for cryptography-based secure aggregation methods in 6G intelligent applications that addresses the security vulnerabilities associated with encrypted data obscurity. We employ a suite of encrypted-data-auditing techniques to prevent data-aggregation errors, data poisoning, and illegal data sources. Additionally, we compare a series of promising security methods, analyze them, and recommend the most suitable security approaches for specific 6G scenarios.
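Pairwise additive masking is the textbook construction behind many cryptography-based secure aggregation schemes, and it also shows why the server-side view is semantically meaningless: each client's upload is offset by masks that cancel only in the sum. A sketch with scalar gradients and illustrative values; real schemes derive the masks from pairwise key agreement rather than a shared seed.

```python
import random

def masked_updates(gradients, seed=42):
    """Pairwise additive masking: client i adds mask m[(i, j)] for each j > i and
    subtracts m[(j, i)] for each j < i, so every mask cancels in the server-side sum."""
    n = len(gradients)
    rng = random.Random(seed)   # stand-in for pairwise key agreement between clients
    masks = {(i, j): rng.uniform(-1e6, 1e6) for i in range(n) for j in range(i + 1, n)}
    uploads = []
    for i in range(n):
        v = gradients[i]
        for j in range(n):
            if i < j:
                v += masks[(i, j)]
            elif j < i:
                v -= masks[(j, i)]
        uploads.append(v)
    return uploads

grads = [0.5, -1.25, 2.0]       # hypothetical scalar gradients from three clients
uploads = masked_updates(grads)
aggregate = sum(uploads)        # masks cancel: recovers sum(grads) up to float rounding
```

Each individual upload looks like noise on the order of the mask magnitude, which is exactly the obscurity that makes auditing for poisoning or aggregation errors (the paper's topic) nontrivial.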
(This article belongs to the Special Issue Recent Advances in Reliability and Security in 5G/6G Mobile Networks)

22 pages, 4327 KB  
Article
An Underwater Source Location Privacy Protection Scheme Based on Game Theory in a Multi-Attacker Cooperation Scenario
by Beibei Wang, Xiufang Yue, Kun Hao, Yonglei Liu, Zhisheng Li and Xiaofang Zhao
Sensors 2024, 24(9), 2851; https://doi.org/10.3390/s24092851 - 30 Apr 2024
Cited by 1 | Viewed by 1548
Abstract
Ensuring source location privacy is crucial for the security of underwater acoustic sensor networks amid the growing use of marine environmental monitoring. However, traditional source location privacy schemes overlook multi-attacker cooperation strategies and also incur high communication overhead. This paper addresses these limitations by proposing an underwater source location privacy protection scheme based on game theory under the scenario of multiple cooperating attackers (SLP-MACGT). First, a transformation method of a virtual coordinate system is proposed to conceal the real position of nodes to a certain extent. Second, a relay node selection strategy increases the diversity of transmission paths, resists passive attacks by adversaries, and protects the privacy of source nodes. Additionally, a secure data transmission technique utilizing fountain codes is employed to resist active attacks by adversaries, ensuring data integrity and enhancing data transmission stability. Finally, a Nash equilibrium is reached after a multi-round evolutionary game in which the source node and multiple attackers adopt their respective strategies. Simulation experiments and performance evaluation verify the effectiveness and reliability of SLP-MACGT in terms of packet forwarding success rate, security time, delay, and energy consumption: the average packet delivery rate increases by 30%, security time is extended by at least 85%, and delay is reduced by at least 90% compared with SSLP, PP-LSPP, and MRGSLP.
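The multi-round evolutionary game between the source and cooperating attackers can be sketched with two-population replicator dynamics. The payoff matrices below are invented for illustration (they are not the paper's); they are chosen so that both populations converge to a pure-strategy equilibrium.

```python
def replicator_step(x, y, A, B, dt=0.1):
    """One step of two-population replicator dynamics.
    x: share of sources playing 'phantom routing'; y: share of attackers cooperating.
    A[i][j]: source payoff, B[i][j]: attacker payoff (i = source move, j = attacker move)."""
    fs = [A[i][0] * y + A[i][1] * (1 - y) for i in range(2)]   # source strategy fitness
    fa = [B[0][j] * x + B[1][j] * (1 - x) for j in range(2)]   # attacker strategy fitness
    x += dt * x * (fs[0] - (x * fs[0] + (1 - x) * fs[1]))
    y += dt * y * (fa[0] - (y * fa[0] + (1 - y) * fa[1]))
    return x, y

# Hypothetical payoffs: phantom routing dominates for the source; cooperation for attackers.
A = [[3, 2], [1, 0]]   # rows: phantom, direct; cols: attackers cooperate, act alone
B = [[2, 1], [2, 1]]   # cooperating pays the attackers regardless of the source's move

x, y = 0.2, 0.2
for _ in range(2000):
    x, y = replicator_step(x, y, A, B)
# Both shares approach 1: (phantom routing, cooperate) is the equilibrium here.
```

In the paper the equilibrium is reached over multiple game rounds between the actual source node and attackers; replicator dynamics is just one standard way to model that strategy adjustment.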

30 pages, 1266 KB  
Article
A Total Randomized SLP-Preserving Technique with Improved Privacy and Lifetime in WSNs for IoT and the Impact of Radio Range on SLP
by Florence Mukamanzi, Raja Manjula, Raja Datta, Tejodbhav Koduru, Damien Hanyurwimfura and Mukanyiligira Didacienne
Sensors 2023, 23(24), 9623; https://doi.org/10.3390/s23249623 - 5 Dec 2023
Cited by 2 | Viewed by 1613
Abstract
Enhanced source location privacy and prolonged network lifetime are imperative for WSNs—the skin of IoT. To address these issues, a novel technique named source location privacy with enhanced privacy and network lifetime (SLP-E) is proposed. It employs a reverse random walk followed by a walk on annular rings, to create divergent routing paths in the network, and finally, min-hop routing together with a walk on dynamic rings to send the packets to the base station (BS). Existing random walk-based SLP approaches have either focused on enhancing only privacy at the cost of network lifetime (NLT) or have aimed at improving the amount of privacy without degrading network lifetime performance. Unlike these schemes, the objectives of the proposed work are to simultaneously improve the safety period and network lifetime along with achieving uniform privacy. This combination of improvements has not been considered so far in a single random walk-based SLP scheme. Additionally, this study investigates for the first time the impact of the sensors’ radio range on both privacy strength and network lifetime metrics in the context of SLP within WSNs. The performance measurements conducted using the proposed analytical models and the simulation results indicate an improvement in the safety period and network lifespan. The safety period in SLP-E increased by 26.5%, 97%, 123%, and 15.7% when compared with the SLP-R, SRR, PRLPRW, and PSSLP techniques, respectively. Similarly, the network lifetime of SLP-E increased by 17.36%, 0.2%, 83.41%, and 13.42% when compared with the SLP-R, SRR, PRLPRW, and PSSLP techniques, respectively. No matter where a source node is located within the network, SLP-E provides uniform and improved privacy and network lifetime. Further, the simulation results demonstrate that the sensors’ radio range has an impact on the safety period, capture ratio, and network lifetime.
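The routing idea — a divergent random-walk phase followed by min-hop delivery to the base station — can be sketched on a grid topology. This simplified two-phase walk omits the paper's annular and dynamic ring phases, and all parameters are illustrative.

```python
import random

def slp_route(source, sink, walk_hops=8, seed=1):
    """Two-phase SLP-style route on a grid: a random walk away from the source to
    obscure its location, then greedy min-hop forwarding to the base station (sink)."""
    rng = random.Random(seed)
    x, y = source
    path = [source]
    for _ in range(walk_hops):                       # divergent phase: hide the source
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    sx, sy = sink
    while (x, y) != sink:                            # delivery phase: min-hop to the BS
        if x != sx:
            x += 1 if sx > x else -1
        else:
            y += 1 if sy > y else -1
        path.append((x, y))
    return path

path = slp_route(source=(0, 0), sink=(20, 20))
```

The privacy/lifetime trade-off the paper studies lives in the parameters of the first phase: longer or better-shaped walks raise the safety period but cost energy, which is why SLP-E adds the ring walks rather than simply lengthening the random walk.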

17 pages, 8393 KB  
Article
Confidentiality Preserved Federated Learning for Indoor Localization Using Wi-Fi Fingerprinting
by Rajeev Kumar, Renu Popli, Vikas Khullar, Isha Kansal and Ashutosh Sharma
Buildings 2023, 13(8), 2048; https://doi.org/10.3390/buildings13082048 - 10 Aug 2023
Cited by 6 | Viewed by 2158
Abstract
A scalable indoor localization technique is an essential technology for future ubiquitous location-aware applications. Numerous classification techniques for indoor localization exist, but none has proven as fast, secure, and dependable as is now required. This research proposes an effective, privacy-protective federated architecture-based framework for location classification via Wi-Fi fingerprinting. The proposed federated indoor localization classification (f-ILC) system has a distributed client–server architecture that preserves data privacy for all participating edge devices or clients. To evaluate the f-ILC framework, datasets from several public Internet sources were collected in pre-processed form. Experiments were conducted with standard learning, federated learning with a single client, and federated learning with several clients to verify that the federated deep learning models worked correctly. The performance of f-ILC was evaluated using validation accuracy and loss. The results showed that the suggested f-ILC framework outperformed traditional distributed deep learning-based classifiers in terms of accuracy and loss while keeping data secure. Given this performance and the data privacy it affords edge devices, the proposed architecture is a practical solution for indoor localization.
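For context, classical centralized Wi-Fi fingerprinting — the kind of classifier that f-ILC's federated deep model replaces — is k-nearest-neighbour matching of an observed RSSI vector against a radio map. The radio map and room labels below are invented for illustration.

```python
def knn_locate(fingerprints, observed, k=3):
    """Nearest-neighbour Wi-Fi fingerprinting: match an observed RSSI vector (dBm)
    against a radio map of (location_label, rssi_vector) records; majority vote of k."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    ranked = sorted(fingerprints, key=lambda rec: dist(rec[1], observed))
    votes = [label for label, _ in ranked[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical radio map: RSSI from three access points, surveyed in three rooms.
radio_map = [
    ("room_a", [-40, -70, -80]), ("room_a", [-42, -68, -79]),
    ("room_b", [-75, -45, -72]), ("room_b", [-73, -44, -70]),
    ("room_c", [-80, -74, -41]), ("room_c", [-79, -73, -43]),
]
print(knn_locate(radio_map, observed=[-41, -69, -81]))  # room_a
```

In the centralized version every client's RSSI scans end up in one radio map; f-ILC's contribution is to train the classifier collaboratively while each device keeps its scans local.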
(This article belongs to the Special Issue Study on Building Simulation)

21 pages, 7329 KB  
Article
Development of Debiasing Technique for Lung Nodule Chest X-ray Datasets to Generalize Deep Learning Models
by Michael J. Horry, Subrata Chakraborty, Biswajeet Pradhan, Manoranjan Paul, Jing Zhu, Hui Wen Loh, Prabal Datta Barua and U. Rajendra Acharya
Sensors 2023, 23(14), 6585; https://doi.org/10.3390/s23146585 - 21 Jul 2023
Cited by 6 | Viewed by 3585
Abstract
Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging mode with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening.
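Histogram equalization, used here to remove systematic brightness and contrast differences between datasets, remaps each grey level through the image's cumulative distribution. A minimal sketch on a toy low-contrast patch (the values are illustrative):

```python
def equalize(image, levels=256):
    """Histogram-equalize a grayscale image given as a list of rows of ints in [0, levels)."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution, then the standard CDF remapping to the full range.
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else p
    return [[remap(p) for p in row] for row in image]

# A low-contrast 2x4 toy patch: intensities bunched into a narrow band.
patch = [[100, 101, 102, 103], [100, 101, 102, 103]]
print(equalize(patch))  # [[0, 85, 170, 255], [0, 85, 170, 255]]
```

After equalization the four grey levels are spread across the full 0–255 range, so two scanners with different exposure settings produce more comparable inputs for the classifier.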
(This article belongs to the Special Issue Feature Papers in "Sensing and Imaging" Section 2023)

32 pages, 3430 KB  
Article
Empowering Patient Similarity Networks through Innovative Data-Quality-Aware Federated Profiling
by Alramzana Nujum Navaz, Mohamed Adel Serhani, Hadeel T. El Kassabi and Ikbal Taleb
Sensors 2023, 23(14), 6443; https://doi.org/10.3390/s23146443 - 16 Jul 2023
Cited by 4 | Viewed by 2490
Abstract
Continuous monitoring of patients involves collecting and analyzing sensory data from a multitude of sources. To overcome communication overhead, ensure data privacy and security, reduce data loss, and maintain efficient resource usage, the processing and analytics are moved close to where the data are located (e.g., the edge). However, data quality (DQ) can be degraded because of imprecise or malfunctioning sensors, dynamic changes in the environment, transmission failures, or delays. Therefore, it is crucial to keep an eye on data quality and spot problems as quickly as possible, so that they do not mislead clinical judgments and lead to the wrong course of action. In this article, a novel approach called federated data quality profiling (FDQP) is proposed to assess the quality of the data at the edge. FDQP is inspired by federated learning (FL) and serves as a condensed document or a guide for node data quality assurance. The FDQP formal model is developed to capture the quality dimensions specified in the data quality profile (DQP). The proposed approach uses federated feature selection to improve classifier precision and rank features based on criteria such as feature value, outlier percentage, and missing data percentage. Extensive experimentation using a fetal dataset split across different edge nodes and a set of carefully chosen scenarios was used to evaluate the proposed FDQP model. The experiments demonstrated that FDQP positively improved the DQ and thus the accuracy of the federated patient similarity network (FPSN)-based machine learning models. Our profiling algorithm uses lightweight profile exchange instead of full data processing at the edge, which achieves good data quality while improving efficiency. Overall, FDQP is an effective method for assessing data quality in the edge computing environment, and we believe the approach can be applied to scenarios beyond patient monitoring.
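Two of the profile criteria named — outlier percentage and missing-data percentage — can be computed per feature in a few lines. The sketch below uses the common 1.5×IQR outlier rule, which the paper does not necessarily use, and the readings are invented.

```python
def dq_profile(column):
    """Per-feature data-quality profile: missing-data and outlier percentages.
    None marks a missing reading; outliers are flagged with the 1.5*IQR rule."""
    present = sorted(v for v in column if v is not None)
    missing_pct = 100.0 * (len(column) - len(present)) / len(column)
    def quantile(p):
        # Linear interpolation between order statistics.
        i = p * (len(present) - 1)
        lo, hi = int(i), min(int(i) + 1, len(present) - 1)
        return present[lo] + (present[hi] - present[lo]) * (i - lo)
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    outliers = [v for v in present if v < q1 - 1.5 * iqr or v > q3 + 1.5 * iqr]
    return {"missing_pct": missing_pct, "outlier_pct": 100.0 * len(outliers) / len(column)}

# Hypothetical fetal-heart-rate readings from one edge node (bpm); None = dropped sample.
readings = [140, 142, 138, 141, None, 139, 240, 140, None, 143]
profile = dq_profile(readings)
print(profile)  # {'missing_pct': 20.0, 'outlier_pct': 10.0}
```

Exchanging such compact per-feature profiles, rather than the raw readings, is the "lightweight profile exchange" idea: nodes can be compared and down-weighted on quality without their data leaving the edge.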
