Search Results (38)

Search Parameters:
Keywords = unbiased risk estimate

20 pages, 2872 KiB  
Review
Challenges in Toxicological Risk Assessment of Environmental Cadmium Exposure
by Soisungwan Satarug
Toxics 2025, 13(5), 404; https://doi.org/10.3390/toxics13050404 - 16 May 2025
Cited by 1 | Viewed by 645
Abstract
Dietary exposure to a high dose of cadmium (Cd) ≥ 100 µg/day for at least 50 years or a lifetime intake of Cd ≥ 1 g can cause severe damage to the kidneys and bones. Alarmingly, however, exposure to a dose of Cd between 10 and 15 µg/day and excretion of Cd at a rate below 0.5 µg/g creatinine have been associated with an increased risk of diseases with a high prevalence worldwide, such as chronic kidney disease (CKD), fragile bones, diabetes, and cancer. These findings have cast considerable doubt on a “tolerable” Cd exposure level of 58 µg/day for a 70 kg person, while questioning the threshold level for the Cd excretion rate of 5.24 µg/g creatinine. The present review addresses many unmet challenges in a threshold-based risk assessment for Cd. Special emphasis is given to the benchmark dose (BMD) methodology to estimate the Cd exposure limit that aligns with a no-observed-adverse-effect level (NOAEL). Cd exposure limits estimated from conventional dosing experiments and human data are highlighted. The results of the BMDL modeling of the relationship between Cd excretion and various indicators of its effects on kidneys are summarized. It is recommended that exposure guidelines for Cd should employ the most recent scientific research data, dose–response curves constructed from an unbiased exposure indicator, and clinically relevant adverse effects such as proteinuria, albuminuria, and a decrease in the estimated glomerular filtration rate (eGFR). These are signs of developing CKD and its progression to the end stage, when dialysis or a kidney transplant is required for survival. Full article
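
As a rough illustration of the benchmark dose (BMD) methodology discussed in this review, the following Python sketch fits a logistic dose-response curve to hypothetical quantal data and solves for the BMD10, the dose producing 10% extra risk over background. All dose groups and response counts are invented for illustration, and the lower confidence limit (BMDL), which regulatory guidance relies on, would additionally require profile-likelihood or bootstrap bounds not shown here.

```python
# Hypothetical benchmark-dose calculation: fit a two-parameter logistic
# dose-response curve to invented quantal data, then solve for the dose
# giving 10% extra risk over background (BMD10).  BMDL confidence bounds
# are not computed in this sketch.
import numpy as np
from scipy.optimize import curve_fit, brentq

dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # Cd intake, ug/day (assumed)
n = np.array([120, 118, 121, 119, 117, 120])            # subjects per dose group
affected = np.array([6, 9, 14, 25, 41, 78])             # e.g. with proteinuria

def logistic(d, a, b):
    """Probability of an adverse response at dose d."""
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

params, _ = curve_fit(logistic, dose, affected / n, p0=[-2.0, 0.05])

p_background = logistic(0.0, *params)
bmr = 0.10                                              # benchmark response (extra risk)
target = p_background + bmr * (1.0 - p_background)

# BMD10 = dose at which the fitted curve reaches the target response.
bmd10 = brentq(lambda d: logistic(d, *params) - target, 0.0, dose.max())
print(f"estimated BMD10: {bmd10:.1f} ug/day")
```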

14 pages, 299 KiB  
Article
Properties of the SURE Estimates When Using Continuous Thresholding Functions for Wavelet Shrinkage
by Alexey Kudryavtsev and Oleg Shestakov
Mathematics 2024, 12(23), 3646; https://doi.org/10.3390/math12233646 - 21 Nov 2024
Viewed by 832
Abstract
Wavelet analysis algorithms in combination with thresholding procedures are widely used in nonparametric regression problems when estimating a signal function from noisy data. The advantages of these methods lie in their computational efficiency and the ability to adapt to the local features of the estimated function. It is usually assumed that the signal function belongs to some special class. For example, it can be piecewise continuous or piecewise differentiable and have a compact support. These assumptions, as a rule, allow the signal function to be economically represented on some specially selected basis in such a way that the useful signal is concentrated in a relatively small number of large absolute value expansion coefficients. Then, thresholding is performed to remove the noise coefficients. Typically, the noise distribution is assumed to be additive and Gaussian. This model is well studied in the literature, and various types of thresholding and parameter selection strategies adapted for specific applications have been proposed. The risk analysis of thresholding methods is an important practical task, since it makes it possible to assess the quality of both the methods themselves and the equipment used for processing. Most of the studies in this area investigate the asymptotic order of the theoretical risk. In practical situations, the theoretical risk cannot be calculated because it depends explicitly on the unobserved, noise-free signal. However, a statistical risk estimate constructed on the basis of the observed data can also be used to assess the quality of noise reduction methods. In this paper, a model of a signal contaminated with additive Gaussian noise is considered, and the general formulation of the thresholding problem with threshold functions belonging to a special class is discussed. Lower bounds are obtained for the threshold values that minimize the unbiased risk estimate. Conditions are also given under which this risk estimate is asymptotically normal and strongly consistent. The results of these studies can provide the basis for further research in the field of constructing confidence intervals and obtaining estimates of the convergence rate, which, in turn, will make it possible to obtain specific values of errors in signal processing for a wide range of thresholding methods. Full article
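
As a minimal, self-contained illustration of the unbiased risk estimate discussed in this abstract, the sketch below computes SURE for soft thresholding of coefficients observed with unit-variance Gaussian noise and picks the minimizing threshold. The sparse signal is synthetic, and the paper's continuous thresholding functions and asymptotic results are not reproduced.

```python
# SURE for soft thresholding of x_i = theta_i + eps_i with eps_i ~ N(0, 1):
# SURE(t) = n - 2*#{|x_i| <= t} + sum_i min(x_i^2, t^2) is an unbiased
# estimate of the l2 risk; the minimizer over t gives the SURE threshold.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
theta = np.zeros(n)
theta[:32] = rng.normal(0.0, 5.0, 32)          # sparse "signal" coefficients (assumed)
x = theta + rng.normal(0.0, 1.0, n)            # observed noisy coefficients

abs_x = np.abs(x)
candidates = np.sort(abs_x)                     # the minimizer lies on one of these
sure = np.array([
    n - 2.0 * np.sum(abs_x <= t) + np.sum(np.minimum(x**2, t**2))
    for t in candidates
])
t_star = candidates[np.argmin(sure)]

theta_hat = np.sign(x) * np.maximum(abs_x - t_star, 0.0)   # soft thresholding
print(f"SURE threshold: {t_star:.3f}, empirical risk: {np.mean((theta_hat - theta)**2):.3f}")
```
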
15 pages, 3450 KiB  
Article
Adaptive Truncation Threshold Determination for Multimode Fiber Single-Pixel Imaging
by Yangyang Xiang, Junhui Li, Mingying Lan, Le Yang, Xingzhuo Hu, Jianxin Ma and Li Gao
Appl. Sci. 2024, 14(16), 6875; https://doi.org/10.3390/app14166875 - 6 Aug 2024
Viewed by 1088
Abstract
Truncated singular value decomposition (TSVD) is a popular recovery algorithm for multimode fiber single-pixel imaging (MMF-SPI), and it uses truncation thresholds to suppress noise influences. However, due to the sensitivity of MMF to stochastic disturbances, the threshold requires frequent re-determination as noise levels dynamically fluctuate. In response, we design an adaptive truncation threshold determination (ATTD) method for TSVD-based MMF-SPI in disturbed environments. Simulations and experiments reveal that ATTD approaches the performance of the ideal clairvoyant benchmark, which corresponds to the best possible image recovery at a given noise level, and surpasses both traditional truncation threshold determination methods, the fixed threshold and Stein's unbiased risk estimator (SURE), with less computation, particularly under high noise levels. Moreover, target insensitivity is demonstrated via numerical simulations, and the robustness of the self-contained parameters is explored. Finally, we also compare and discuss the performance of TSVD-based MMF-SPI, which uses ATTD, and machine learning-based MMF-SPI, which uses diffusion models, to provide a comprehensive understanding of ATTD. Full article
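
The sketch below illustrates the TSVD recovery step that ATTD tunes: singular values of the measurement matrix below a truncation threshold are discarded before inverting. The matrix, test pattern, and threshold value are placeholders, and the adaptive threshold rule itself is not reproduced.

```python
# Truncated-SVD recovery of x from y = A @ x + noise: singular values below
# the truncation threshold tau are discarded.  A, x, and tau are placeholders;
# the adaptive threshold determination (ATTD) rule is not implemented here.
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 100
A = rng.normal(size=(m, n))                    # stand-in for the calibrated MMF matrix
x_true = np.zeros(n)
x_true[::10] = 1.0                             # simple test "image"
y = A @ x_true + 0.5 * rng.normal(size=m)      # noisy single-pixel measurements

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tau = 0.05 * s[0]                              # truncation threshold (placeholder value)
keep = s > tau

# Pseudo-inverse restricted to the retained singular triplets.
x_hat = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"kept {keep.sum()} of {s.size} singular values, relative error {rel_err:.3f}")
```
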
(This article belongs to the Special Issue Optical Imaging and Sensing: From Design to Its Practical Use)

21 pages, 9059 KiB  
Article
Hyperspectral Prediction Model of Nitrogen Content in Citrus Leaves Based on the CEEMDAN–SR Algorithm
by Changlun Gao, Ting Tang, Weibin Wu, Fangren Zhang, Yuanqiang Luo, Weihao Wu, Beihuo Yao and Jiehao Li
Remote Sens. 2023, 15(20), 5013; https://doi.org/10.3390/rs15205013 - 18 Oct 2023
Cited by 9 | Viewed by 1948
Abstract
Nitrogen content is one of the essential elements in citrus leaves (CL), and many studies have been conducted to determine the nutrient content in CL using hyperspectral technology. To address the key problem that the conventional spectral data-denoising algorithms directly discard high-frequency signals, resulting in missing effective signals, this study proposes a denoising preprocessing algorithm, complete ensemble empirical mode decomposition with adaptive noise joint sparse representation (CEEMDAN–SR), for CL hyperspectral data. For this purpose, 225 sets of fresh CL were collected at the Institute of Fruit Tree Research of the Guangdong Academy of Agricultural Sciences, to measure their elemental nitrogen content and the corresponding hyperspectral data. First, the spectral data were preprocessed using CEEMDAN–SR, Stein’s unbiased risk estimate and the linear expansion of thresholds (SURE–LET), sparse representation (SR), Savitzky–Golay (SG), and the first derivative (FD). Second, feature extraction was carried out using principal component analysis (PCA), uninformative variables elimination (UVE), and the competitive adaptive re-weighted sampling (CARS) algorithm. Finally, partial least squares regression (PLSR), support vector regression (SVR), random forest (RF), and Gaussian process regression (GPR) were used to construct a CL nitrogen prediction model. The results showed that most of the prediction models preprocessed using the CEEMDAN–SR algorithm had better accuracy and robustness. The prediction models based on CEEMDAN–SR preprocessing, PCA feature extraction, and GPR modeling had an R2 of 0.944, NRMSE of 0.057, and RPD of 4.219. The study showed that the CEEMDAN–SR algorithm can be effectively used to denoise CL hyperspectral data and reduce the loss of effective information. The prediction model using the CEEMDAN–SR+PCA+GPR algorithm could accurately obtain the nitrogen content of CL and provide a reference for the accurate fertilization of citrus trees. Full article
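
The following sketch illustrates the feature-extraction and modelling stage described above (PCA followed by Gaussian process regression) with scikit-learn. The leaf spectra are synthetic low-rank placeholders, and the CEEMDAN-SR denoising step is omitted.

```python
# PCA feature extraction followed by Gaussian process regression of nitrogen
# content, mirroring the CEEMDAN-SR + PCA + GPR pipeline without the denoising
# step.  The "spectra" are synthetic low-rank placeholders (225 leaves x 300 bands).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
latent = rng.normal(size=(225, 5))                          # low-rank structure
X = latent @ rng.normal(size=(5, 300)) + 0.05 * rng.normal(size=(225, 300))
y = latent[:, 0] - 0.5 * latent[:, 1] + 0.1 * rng.normal(size=225)   # synthetic N content

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    PCA(n_components=10),
    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
)
model.fit(X_tr, y_tr)
print(f"held-out R^2: {r2_score(y_te, model.predict(X_te)):.3f}")
```
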
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)

21 pages, 1597 KiB  
Article
Generalized Support Vector Regression and Symmetry Functional Regression Approaches to Model the High-Dimensional Data
by Mahdi Roozbeh, Arta Rouhi, Nur Anisah Mohamed and Fatemeh Jahadi
Symmetry 2023, 15(6), 1262; https://doi.org/10.3390/sym15061262 - 15 Jun 2023
Cited by 8 | Viewed by 2048
Abstract
Classical regression approaches are not applicable to high-dimensional datasets in which the number of explanatory variables exceeds the number of observations, and their results may be misleading. In this research, we propose to analyze such data by introducing modern and up-to-date techniques such as support vector regression, symmetry functional regression, ridge, and lasso regression methods. In this study, we developed the support vector regression approach called generalized support vector regression to provide more efficient shrinkage estimation and variable selection in high-dimensional datasets. The generalized support vector regression can improve the performance of the support vector regression by employing an accurate algorithm for obtaining the optimum value of the penalty parameter using a cross-validation score, which is an asymptotically unbiased feasible estimator of the risk function. In this regard, using the proposed methods to analyze two real high-dimensional datasets (yeast gene data and riboflavin data) and a simulated dataset, the most efficient model is determined based on three criteria (correlation squared, mean squared error, and mean absolute error percentage deviation) according to the type of datasets. On the basis of the above criteria, the efficiency of the proposed estimators is evaluated. Full article
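
A minimal sketch of the tuning idea mentioned above, selecting the SVR penalty parameter by a cross-validation score, is given below using plain epsilon-SVR from scikit-learn. It is not the authors' generalized support vector regression; the high-dimensional (p > n) data are synthetic.

```python
# Selecting the SVR penalty parameter C by cross-validation on a synthetic
# p > n regression problem.  Plain epsilon-SVR from scikit-learn, not the
# authors' generalized support vector regression.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n, p = 70, 200                                   # fewer observations than variables
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.0, 1.5, 1.0, -2.0]           # only a few active predictors
y = X @ beta + 0.5 * rng.normal(size=n)

search = GridSearchCV(
    SVR(kernel="linear"),
    param_grid={"C": np.logspace(-3, 2, 12)},
    scoring="neg_mean_squared_error",
    cv=5,
)
search.fit(X, y)
print(f"CV-selected penalty C = {search.best_params_['C']:.4g}, CV MSE = {-search.best_score_:.3f}")
```
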
(This article belongs to the Special Issue Symmetry in Multivariate Analysis)

20 pages, 4291 KiB  
Article
A Reference-Free Method for the Thematic Accuracy Estimation of Global Land Cover Products Based on the Triple Collocation Approach
by Pengfei Chen, Huabing Huang, Wenzhong Shi and Rui Chen
Remote Sens. 2023, 15(9), 2255; https://doi.org/10.3390/rs15092255 - 24 Apr 2023
Cited by 1 | Viewed by 2162
Abstract
Global land cover (GLC) data are an indispensable resource for understanding the relationship between human activities and the natural environment. Estimating their classification accuracy is significant for studying environmental change and sustainable development. With the rapid emergence of various GLC products, the lack of high-quality reference data poses a severe risk to traditional accuracy estimation methods, in which reference data are always required. Thus, meeting the needs of large-scale, fast evaluation for GLC products becomes challenging. The triple collocation approach (TCCA) is originally applied to assess classification accuracy in earthquake damage mapping when ground truth is unavailable. TCCA can provide unbiased accuracy estimation of three classification systems when their errors are conditionally independent. In this study, we extend the idea of TCCA and test its performance in the accuracy estimation of GLC data without ground reference data. Firstly, to generate two additional classification systems besides the original GLC data, a k-order neighbourhood is defined for each assessment unit (i.e., geographic tiles), and a local classification strategy is implemented to train two classifiers based on local samples and features from remote sensing images. Secondly, to reduce the uncertainty from complex classification schemes, the multi-class problem in GLC is transformed into multiple binary-class problems when estimating the accuracy of each land class. Building upon over 15 million sample points with remote sensing features retrieved from Google Earth Engine, we demonstrate the performance of our method on WorldCover 2020, and the experiment shows that screening reliable sample points during training local classifiers can significantly improve the overall estimation with a relative error of less than 4% at the continent level. This study proves the feasibility of estimating GLC accuracy using the existing land information and remote sensing data, reducing the demand for costly reference data in GLC assessment and enriching the assessment approaches for large-scale land cover data. Full article
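
For readers unfamiliar with triple collocation, the sketch below shows the classical covariance-based estimate of each system's error variance from three collocated measurements with mutually independent errors. The paper adapts this idea to categorical land-cover accuracy, which is not reproduced here; the three "systems" are synthetic continuous measurements.

```python
# Classical triple collocation: with three collocated measurements of the same
# quantity and mutually independent errors, each error variance follows from
# the pairwise covariances, e.g. var_x = C_xx - C_xy * C_xz / C_yz.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=50_000)
x = truth + rng.normal(0.0, 0.3, truth.size)     # system 1
y = truth + rng.normal(0.0, 0.5, truth.size)     # system 2
z = truth + rng.normal(0.0, 0.7, truth.size)     # system 3

c = np.cov(np.vstack([x, y, z]))                 # 3x3 sample covariance matrix

var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
print("estimated error std devs:", np.sqrt([var_x, var_y, var_z]).round(3))
```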

14 pages, 797 KiB  
Article
Using Statistical Test Method to Establish a Decision Model of Performance Evaluation Matrix
by Chin-Chia Liu, Chun-Hung Yu and Kuen-Suan Chen
Appl. Sci. 2023, 13(8), 5139; https://doi.org/10.3390/app13085139 - 20 Apr 2023
Cited by 1 | Viewed by 1524
Abstract
Many studies have pointed out that the Performance Evaluation Matrix (PEM) is a convenient and useful tool for the evaluation, analysis, and improvement of service operating systems. All service items of the operating system can collect customer satisfaction and importance through questionnaires and then convert them into satisfaction indices and importance indices to establish PEM and its evaluation rules. Since the indices have unknown parameters, if the evaluation is performed directly by the point estimates of the indices, there will be a risk of misjudgment due to sampling error. In addition, most of the studies only determine the critical-to-quality (CTQ) that needs to be improved, and do not discuss the treatment rules in the case of limited resources nor perform the confirmation after improvement. Therefore, to address similar research gaps, this paper proposed the unbiased estimators of these two indices and determined the critical-to-quality (CTQ) service items which need to be improved through the one-tailed statistical hypothesis test by building a PEM method of the satisfaction index. In addition, through the one-tailed statistical hypothesis test method of the importance index, the improvement priority of service items was determined under the condition of limited resources. Confirmation of the effect on improvement is an important step in management. Thus, this paper adopted a statistical two-tailed hypothesis test to verify whether the satisfaction of all the CTQ service items that need to be improved was enhanced. Since the method proposed in this paper was established through statistical hypothesis tests, the risk of misjudgment due to sampling error could be reduced. Obviously, reducing the misjudgment risk is the advantage of the method in this paper. Based on the precondition, utilizing the model in this study may assist the industries to determine CTQ rapidly, implement the most efficient improvement under the condition of limited resources and also confirm the improvement effect at the same time. Finally, a case study of computer-assisted language learning system (CALL System) was used to illustrate a way to apply the model proposed in this paper. Full article
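
A minimal sketch of the one-tailed testing idea is shown below: a service item is flagged as critical-to-quality when its mean satisfaction score is significantly below a target value. The scores and target are synthetic, and the authors' specific index estimators and PEM decision rules are not reproduced.

```python
# One-tailed test of whether a service item's mean satisfaction falls below a
# target: H0 mean >= target vs. H1 mean < target.  Scores and target are
# synthetic; the paper's satisfaction/importance index estimators are not used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
target = 4.0                                     # required satisfaction level (assumed)
scores = rng.normal(3.8, 0.6, size=200)          # questionnaire scores for one item

t_stat, p_value = stats.ttest_1samp(scores, popmean=target, alternative="less")
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}, flag as CTQ: {p_value < 0.05}")
```
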
(This article belongs to the Special Issue Smart Service Technology for Industrial Applications II)

11 pages, 3444 KiB  
Article
mTORC1-Dependent Protein and Parkinson’s Disease: A Mendelian Randomization Study
by Cheng Tan, Jianzhong Ai and Ye Zhu
Brain Sci. 2023, 13(4), 536; https://doi.org/10.3390/brainsci13040536 - 24 Mar 2023
Cited by 7 | Viewed by 3101
Abstract
Background: The mTOR pathway is crucial in controlling the growth, differentiation, and survival of neurons, and its pharmacological targeting has promising potential as a treatment for Parkinson’s disease. However, the function of mTORC1 downstream proteins, such as RPS6K, EIF4EBP, EIF-4E, EIF-4G, and EIF4A, in PD development remains unclear. Methods: We performed a Mendelian randomization study to evaluate the causal relationship between mTORC1 downstream proteins and Parkinson’s disease. We utilized various MR methods, including inverse-variance-weighted, weighted median, MR–Egger, MR-PRESSO, and MR-RAPS, and conducted sensitivity analyses to identify potential pleiotropy and heterogeneity. Results: The genetic proxy EIF4EBP was found to be inversely related to PD risk (OR = 0.79, 95% CI = 0.67–0.92, p = 0.003), with the results from WM, MR-PRESSO, and MR-RAPS being consistent. The plasma protein levels of EIF4G were also observed to show a suggestive protective effect on PD (OR = 0.85, 95% CI = 0.75–0.97, p = 0.014). No clear causal effect was found for the genetically predicted RP-S6K, EIF-4E, and EIF-4A on PD risk. Sensitivity analyses showed no significant imbalanced pleiotropy or heterogeneity, indicating that the MR estimates were robust and independent. Conclusion: Our unbiased MR study highlights the protective role of serum EIF4EBP levels in PD, suggesting that the pharmacological activation of EIF4EBP activity could be a promising treatment option for PD. Full article
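
As a minimal illustration of the inverse-variance-weighted estimate used among the MR methods above, the sketch below combines per-SNP Wald ratios with first-order IVW weights. The summary statistics are synthetic, not the study's GWAS data.

```python
# Inverse-variance-weighted (IVW) MR estimate: combine per-SNP Wald ratios
# (outcome effect / exposure effect) using first-order weights.  All summary
# statistics below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_snps = 30
beta_exposure = rng.normal(0.15, 0.03, n_snps)          # SNP -> protein-level effects
true_causal = -0.25                                     # assumed causal log-odds effect
beta_outcome = true_causal * beta_exposure + rng.normal(0.0, 0.02, n_snps)
se_outcome = np.full(n_snps, 0.02)

wald = beta_outcome / beta_exposure                     # per-SNP causal estimates
weights = (beta_exposure / se_outcome) ** 2             # first-order IVW weights

ivw = np.sum(weights * wald) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))
print(f"IVW estimate {ivw:.3f} (SE {ivw_se:.3f}), OR = {np.exp(ivw):.2f}")
```
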
(This article belongs to the Special Issue The Genetics of Parkinson's Diseases)

15 pages, 1048 KiB  
Article
Target Trial Emulation Using Hospital-Based Observational Data: Demonstration and Application in COVID-19
by Oksana Martinuka, Maja von Cube, Derek Hazard, Hamid Reza Marateb, Marjan Mansourian, Ramin Sami, Mohammad Reza Hajian, Sara Ebrahimi and Martin Wolkewitz
Life 2023, 13(3), 777; https://doi.org/10.3390/life13030777 - 13 Mar 2023
Cited by 5 | Viewed by 5878
Abstract
Methodological biases are common in observational studies evaluating treatment effectiveness. The objective of this study is to emulate a target trial in a competing risks setting using hospital-based observational data. We extend established methodology accounting for immortal time bias and time-fixed confounding biases to a setting where no survival information beyond hospital discharge is available: a condition common to coronavirus disease 2019 (COVID-19) research data. This exemplary study includes a cohort of 618 hospitalized patients with COVID-19. We describe methodological opportunities and challenges that cannot be overcome applying traditional statistical methods. We demonstrate the practical implementation of this trial emulation approach via clone–censor–weight techniques. We undertake a competing risk analysis, reporting the cause-specific cumulative hazards and cumulative incidence probabilities. Our analysis demonstrates that a target trial emulation framework can be extended to account for competing risks in COVID-19 hospital studies. In our analysis, we avoid immortal time bias, time-fixed confounding bias, and competing risks bias simultaneously. Choosing the length of the grace period is justified from a clinical perspective and has an important advantage in ensuring reliable results. This extended trial emulation with the competing risk analysis enables an unbiased estimation of treatment effects, along with the ability to interpret the effectiveness of treatment on all clinically important outcomes. Full article
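
The sketch below illustrates the competing-risks quantity reported in the analysis, a nonparametric (Aalen-Johansen) cumulative incidence for two competing hospital outcomes. The clone-censor-weight emulation itself is not reproduced, and the follow-up times and event codes are synthetic.

```python
# Nonparametric (Aalen-Johansen) cumulative incidence for two competing
# hospital outcomes.  Event codes: 0 = censored, 1 = discharged alive,
# 2 = in-hospital death.  Times and events are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 618
time = rng.exponential(10.0, n).round() + 1
event = rng.choice([0, 1, 2], size=n, p=[0.15, 0.65, 0.20])

def cumulative_incidence(time, event, cause):
    """P(T <= t, event = cause) at each distinct event time."""
    times = np.sort(np.unique(time[event > 0]))
    surv, cif, curve = 1.0, 0.0, []
    for t in times:
        n_at_risk = np.sum(time >= t)
        d_cause = np.sum((time == t) & (event == cause))
        d_any = np.sum((time == t) & (event > 0))
        cif += surv * d_cause / n_at_risk      # increment uses survival just before t
        surv *= 1.0 - d_any / n_at_risk        # update overall event-free survival
        curve.append((t, cif))
    return np.array(curve)

cif_death = cumulative_incidence(time, event, cause=2)
cif_discharge = cumulative_incidence(time, event, cause=1)
print(f"30-day cumulative incidence of death: {cif_death[cif_death[:, 0] <= 30][-1, 1]:.3f}")
print(f"30-day cumulative incidence of discharge: {cif_discharge[cif_discharge[:, 0] <= 30][-1, 1]:.3f}")
```
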
(This article belongs to the Special Issue COVID-19 Prevention and Treatment: 2nd Edition)

23 pages, 4279 KiB  
Article
A Simple, Test-Based Method to Control the Overestimation Bias in the Analysis of Potential Prognostic Tumour Markers
by Marzia Ognibene, Annalisa Pezzolo, Roberto Cavanna, Davide Cangelosi, Stefania Sorrentino and Stefano Parodi
Cancers 2023, 15(4), 1188; https://doi.org/10.3390/cancers15041188 - 13 Feb 2023
Viewed by 1373
Abstract
The early evaluation of prognostic tumour markers is commonly performed by comparing the survival of two groups of patients identified on the basis of a cut-off value. The corresponding hazard ratio (HR) is usually estimated, representing a measure of the relative risk between patients with marker values above and below the cut-off. A posteriori methods identifying an optimal cut-off are appropriate when the functional form of the relation between the marker distribution and patient survival is unknown, but they are prone to an overestimation bias. In the presence of a small sample size, which is typical of rare diseases, external validation sets are rarely available and internal cross-validation may be unfeasible. We describe a new method to obtain an unbiased estimate of the HR at an optimal cut-off, exploiting the simple relation between the HR and the associated p-value estimated by a random permutation analysis. We validate the method on both simulated data and a set of gene expression profiles from two large, publicly available data sets. Furthermore, a reanalysis of a previously published study, which included 134 Stage 4S neuroblastoma patients, allowed for the identification of E2F1 as a new gene with potential oncogenic activity. This finding was confirmed by an immunofluorescence analysis on an independent cohort. Full article
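
A rough sketch of the random permutation analysis underlying the method is given below: the cut-off maximizing the absolute log-rank statistic is selected, and the marker values are permuted to obtain the null distribution of that maximally selected statistic and a p-value that accounts for the optimization. The paper's HR-debiasing relation is not reproduced, and all data are synthetic.

```python
# Maximally selected log-rank statistic with a permutation p-value: pick the
# cut-off maximizing |Z|, then permute the marker to get the null distribution
# of that maximum.  All data are synthetic; the HR-debiasing step is omitted.
import numpy as np

rng = np.random.default_rng(0)
n = 134
marker = rng.normal(size=n)                      # e.g. gene expression (synthetic)
time = rng.exponential(5.0, n)
event = rng.random(n) < 0.7                      # True = event observed

def logrank_z(time, event, high):
    """Two-sample log-rank Z; `high` flags patients above the cut-off."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event]):
        at_risk = time >= t
        n_r, n1 = at_risk.sum(), (at_risk & high).sum()
        d = ((time == t) & event).sum()
        d1 = ((time == t) & event & high).sum()
        if n_r > 1:
            o_minus_e += d1 - d * n1 / n_r
            var += d * (n1 / n_r) * (1 - n1 / n_r) * (n_r - d) / (n_r - 1)
    return o_minus_e / np.sqrt(var)

def max_abs_z(marker, time, event, cutoffs):
    return max(abs(logrank_z(time, event, marker > c)) for c in cutoffs)

cutoffs = np.quantile(marker, np.linspace(0.2, 0.8, 13))   # candidate cut-offs
observed = max_abs_z(marker, time, event, cutoffs)

null = np.array([max_abs_z(rng.permutation(marker), time, event, cutoffs)
                 for _ in range(200)])                     # small B, illustration only
p_perm = (1 + np.sum(null >= observed)) / (1 + null.size)
print(f"maximally selected |Z| = {observed:.2f}, permutation p = {p_perm:.3f}")
```
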
(This article belongs to the Section Methods and Technologies Development)

14 pages, 802 KiB  
Article
Nowcasting COVID-19 Statistics Reported with Delay: A Case-Study of Sweden and the UK
by Adam Altmejd, Joacim Rocklöv and Jonas Wallin
Int. J. Environ. Res. Public Health 2023, 20(4), 3040; https://doi.org/10.3390/ijerph20043040 - 9 Feb 2023
Cited by 6 | Viewed by 5926
Abstract
The COVID-19 pandemic has demonstrated the importance of unbiased, real-time statistics of trends in disease events in order to achieve an effective response. Because of reporting delays, real-time statistics frequently underestimate the total number of infections, hospitalizations and deaths. When studied by event date, such delays also risk creating an illusion of a downward trend. Here, we describe a statistical methodology for predicting true daily quantities and their uncertainty, estimated using historical reporting delays. The methodology takes into account the observed distribution pattern of the lag. It is derived from the “removal method”—a well-established estimation framework in the field of ecology. Full article
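
As a simplified illustration of the reporting-delay problem addressed above, the sketch below inflates recent daily counts by the empirically estimated probability that an event has already been reported, a basic completeness correction. The removal-method estimator used in the paper is not reproduced, and the delay distribution and counts are synthetic.

```python
# Delay-adjusted nowcast: divide the counts observed so far by the estimated
# probability that an event occurring d days ago has already been reported.
# The delay distribution and daily counts are synthetic; the removal-method
# estimator from the paper is not implemented.
import numpy as np

rng = np.random.default_rng(0)
max_delay = 10
# Historical reporting-delay distribution: P(delay = d days), d = 0..10.
delay_pmf = np.array([0.30, 0.25, 0.15, 0.10, 0.07, 0.05, 0.03, 0.02, 0.02, 0.005, 0.005])
reported_by = np.cumsum(delay_pmf)              # P(reported within d days)

true_counts = rng.poisson(100, size=30)         # true daily events, last 30 days
days_ago = np.arange(29, -1, -1)                # 29, 28, ..., 0 days before "today"
frac_seen = reported_by[np.minimum(days_ago, max_delay)]
observed = rng.binomial(true_counts, frac_seen) # counts reported so far

nowcast = observed / frac_seen                  # delay-adjusted point estimate
print("last 5 days observed:", observed[-5:])
print("last 5 days nowcast :", nowcast[-5:].round(1))
print("last 5 days true    :", true_counts[-5:])
```
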
(This article belongs to the Section Infectious Disease Epidemiology)

15 pages, 3940 KiB  
Article
Chromosomal Microarray Study in Prader-Willi Syndrome
by Merlin G. Butler, Waheeda A. Hossain, Neil Cowen and Anish Bhatnagar
Int. J. Mol. Sci. 2023, 24(2), 1220; https://doi.org/10.3390/ijms24021220 - 7 Jan 2023
Cited by 5 | Viewed by 4648
Abstract
A high-resolution chromosome microarray analysis was performed on 154 consecutive individuals enrolled in the DESTINY PWS clinical trial for Prader-Willi syndrome (PWS). Of these 154 PWS individuals, 87 (56.5%) showed the typical 15q11-q13 deletion subtypes, 62 (40.3%) showed non-deletion maternal disomy 15 and five individuals (3.2%) had separate unexpected microarray findings. For example, one PWS male had Klinefelter syndrome with segmental isodisomy identified in both chromosomes 15 and X. Thirty-five (40.2%) of 87 individuals showed typical larger 15q11-q13 Type I deletion and 52 individuals (59.8%) showed typical smaller Type II deletion. Twenty-four (38.7%) of 62 PWS individuals showed microarray patterns indicating either maternal heterodisomy 15 subclass or a rare non-deletion (epimutation) imprinting center defect. Segmental isodisomy 15 was seen in 34 PWS subjects (54.8%) with 15q26.3, 15q14 and 15q26.1 bands most commonly involved and total isodisomy 15 seen in four individuals (6.5%). In summary, we report on PWS participants consecutively enrolled internationally in a single clinical trial with high-resolution chromosome microarray analysis to determine and describe an unbiased estimate of the frequencies and types of genetic defects and address potential at-risk genetic disorders in those with maternal disomy 15 subclasses in the largest PWS cohort studied to date. Full article

17 pages, 3472 KiB  
Article
Secure Medical Data Collection in the Internet of Medical Things Based on Local Differential Privacy
by Jinpeng Wang and Xiaohui Li
Electronics 2023, 12(2), 307; https://doi.org/10.3390/electronics12020307 - 6 Jan 2023
Cited by 9 | Viewed by 2097
Abstract
As big data and data mining technology advance, research on the collection and analysis of medical data on the internet of medical things (IoMT) has gained increasing attention. Medical institutions often collect users’ signs and symptoms from their devices for analysis. However, the process of data collection may pose a risk of privacy leakage without a trusted third party. To address this issue, we propose a medical data collection based on local differential privacy and Count Sketch (MDLDP). The algorithm first uses a random sampling technique to select only one symptom for perturbation by a single user. The perturbed data is then uploaded using Count Sketch. The third-party aggregates the user-submitted data to estimate the frequencies of the symptoms and the mean extent of their occurrence. This paper theoretically demonstrates that the designed algorithm satisfies local differential privacy and unbiased estimation. We also evaluated the algorithm experimentally with existing algorithms on a real medical dataset. The results show that the MDLDP algorithm has good utility for key-value type medical data collection statistics in the IoMT. Full article
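
The sketch below illustrates the local differential privacy and unbiased estimation ingredients mentioned above using plain k-ary randomized response with the standard frequency correction. The Count Sketch encoding and key-value mean estimation of MDLDP are not reproduced; the symptom labels and the privacy budget are illustrative.

```python
# k-ary randomized response with the standard unbiased frequency correction:
# each user reports the true symptom with probability p and each other symptom
# with probability q.  Labels and the privacy budget are illustrative; the
# Count Sketch and key-value parts of MDLDP are not implemented.
import numpy as np

rng = np.random.default_rng(0)
symptoms = ["fever", "cough", "fatigue", "headache"]
k, eps, n = len(symptoms), 2.0, 100_000

true = rng.choice(k, size=n, p=[0.4, 0.3, 0.2, 0.1])   # each user's sampled symptom

p = np.exp(eps) / (np.exp(eps) + k - 1)                 # keep-truth probability
q = 1.0 / (np.exp(eps) + k - 1)                         # probability of each other value
keep = rng.random(n) < p
others = (true + rng.integers(1, k, size=n)) % k        # uniform over the other k-1 values
reported = np.where(keep, true, others)

# E[count_v] / n = q + (p - q) * f_v, so invert to get an unbiased estimate of f_v.
f_hat = (np.bincount(reported, minlength=k) / n - q) / (p - q)
f_true = np.bincount(true, minlength=k) / n
for name, est, tru in zip(symptoms, f_hat, f_true):
    print(f"{name:8s} true {tru:.3f}  estimated {est:.3f}")
```
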
(This article belongs to the Special Issue Security and Privacy Preservation in Big Data Age)

20 pages, 3944 KiB  
Article
Performance Enhancement of INS and UWB Fusion Positioning Method Based on Two-Level Error Model
by Zhonghan Li, Yongbo Zhang, Yutong Shi, Shangwu Yuan and Shihao Zhu
Sensors 2023, 23(2), 557; https://doi.org/10.3390/s23020557 - 4 Jan 2023
Cited by 9 | Viewed by 2781
Abstract
In GNSS-denied environments, especially when losing measurement sensor data, inertial navigation system (INS) accuracy is critical to the precise positioning of vehicles, and an accurate INS error compensation model is the most effective way to improve INS accuracy. To this end, a two-level error model is proposed, which comprehensively utilizes the mechanism error model and propagation error model. Based on this model, the INS and ultra-wideband (UWB) fusion positioning method is derived relying on the extended Kalman filter (EKF) method. To further improve accuracy, the data prefiltering algorithm of the wavelet shrinkage method based on Stein’s unbiased risk estimate–Shrink (SURE-Shrink) threshold is summarized for raw inertial measurement unit (IMU) data. The experimental results show that by employing the SURE-Shrink wavelet denoising method, positioning accuracy is improved by 76.6%; by applying the two-level error model, the accuracy is further improved by 84.3%. More importantly, at the point when the vehicle motion state changes, adopting the two-level error model can provide higher computational stability and less fluctuation in trajectory curves. Full article
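
The sketch below shows SURE-based wavelet shrinkage of a noisy one-dimensional signal with PyWavelets, the idea behind the SURE-Shrink prefiltering mentioned above: each detail level is soft-thresholded at the threshold minimizing Stein's unbiased risk estimate. The hybrid SURE/universal rule, the IMU data, and the INS/UWB EKF fusion are not reproduced; the 'db4' wavelet and noise level are assumed.

```python
# Level-wise SURE soft thresholding of a noisy 1-D signal with PyWavelets:
# each detail level is thresholded at the value minimizing Stein's unbiased
# risk estimate.  The signal, noise level, and 'db4' wavelet are assumed;
# the INS/UWB EKF fusion is not reproduced.
import numpy as np
import pywt

def sure_threshold(d, sigma):
    """Threshold minimizing SURE for soft thresholding of coefficients d."""
    x = d / sigma
    cand = np.sort(np.abs(x))
    sure = np.array([
        x.size - 2 * np.sum(np.abs(x) <= t) + np.sum(np.minimum(x**2, t**2))
        for t in cand
    ])
    return sigma * cand[np.argmin(sure)]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 2048)
clean = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sign(np.sin(2 * np.pi * 6 * t))
noisy = clean + 0.25 * rng.normal(size=t.size)           # stand-in for a raw sensor channel

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise scale from finest level
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(d, sure_threshold(d, sigma), mode="soft") for d in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")[: noisy.size]

print(f"RMSE noisy    : {np.sqrt(np.mean((noisy - clean) ** 2)):.4f}")
print(f"RMSE denoised : {np.sqrt(np.mean((denoised - clean) ** 2)):.4f}")
```
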
(This article belongs to the Section Navigation and Positioning)

21 pages, 738 KiB  
Article
A Semiparametric Bayesian Joint Modelling of Skewed Longitudinal and Competing Risks Failure Time Data: With Application to Chronic Kidney Disease
by Melkamu Molla Ferede, Samuel Mwalili, Getachew Dagne, Simon Karanja, Workagegnehu Hailu, Mahmoud El-Morshedy and Afrah Al-Bossly
Mathematics 2022, 10(24), 4816; https://doi.org/10.3390/math10244816 - 18 Dec 2022
Cited by 3 | Viewed by 2316
Abstract
In clinical and epidemiological studies, when the time-to-event(s) and the longitudinal outcomes are associated, modelling them separately may give biased estimates. A joint modelling approach is required to obtain unbiased results and to evaluate their association. In the joint model, a subject may be exposed to more than one type of failure event (competing risks). Considering the competing event as an independent censoring of the time-to-event process may underestimate the true survival probability and give biased results. Within the joint model, longitudinal outcomes may have nonlinear (irregular) trajectories over time and exhibit skewness with heavy tails. Accordingly, fully parametric mixed-effect models may not be flexible enough to model this type of complex longitudinal data. In addition, assuming a Gaussian distribution for model errors may be too restrictive to adequately represent within-individual variations and may lack robustness against deviation from distributional assumptions. To simultaneously overcome these issues, in this paper, we presented semiparametric joint models for competing risks failure time and skewed-longitudinal data by using a smoothing spline approach and a multivariate skew-t distribution. We also considered different parameterization approaches in the formulation of joint models and used a Bayesian approach to make the statistical inference. We illustrated the proposed methods by analyzing real data on a chronic kidney disease. To evaluate the performance of the methods, we also carried out simulation studies. The results of both the application and simulation studies revealed that the joint modelling approach proposed in this study performed well when the semiparametric, random-effects parameterization, and skew-t distribution specifications were taken into account. Full article
(This article belongs to the Special Issue Current Developments in Theoretical and Applied Statistics)
