Search Results (4)

Search Parameters:
Keywords = electronic billing machines

21 pages, 540 KiB  
Article
Comparison of Tree-Based Machine Learning Algorithms to Predict Reporting Behavior of Electronic Billing Machines
by Belle Fille Murorunkwere, Jean Felicien Ihirwe, Idrissa Kayijuka, Joseph Nzabanita and Dominique Haughton
Information 2023, 14(3), 140; https://doi.org/10.3390/info14030140 - 21 Feb 2023
Cited by 14 | Viewed by 5280
Abstract
Tax fraud is a common problem for many tax administrations, costing billions of dollars. Different tax administrations have considered several options to optimize revenue; among them is the so-called electronic billing machine (EBM), which aims to monitor all business transactions and, as a result, boost value added tax (VAT) revenue and compliance. Most of the current research has focused on the impact of EBMs on VAT revenue collection and compliance rather than on understanding how EBM reporting behavior influences future compliance. The essential contribution of this study is that it leverages both EBMs' historical reporting behavior and actual business characteristics to understand and predict the future reporting behavior of EBMs. Herein, tree-based machine learning algorithms such as decision trees, random forest, gradient boosting, and XGBoost are utilized, tested, and compared for better performance. The results show that the random forest model is the most robust of those tested, with an accuracy of 92.3%. This paper positions our contribution relative to existing approaches through well-defined research questions, analysis mechanisms, and constructive discussions. Once applied, we believe that our approach could ultimately help the tax-collecting agency conduct timely interventions on EBM compliance, which will help achieve the EBM objective of improving VAT compliance.
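
The comparison the authors describe maps naturally onto standard tooling. Below is a minimal sketch of such a tree-based model comparison in Python using scikit-learn and xgboost; the synthetic data, split, and hyperparameters are illustrative assumptions, not the authors' pipeline, and real EBM features would replace the generated matrix.

```python
# Minimal sketch of a tree-based model comparison like the one described
# above. The synthetic data stands in for EBM reporting features and a
# binary compliance label; names and hyperparameters are illustrative,
# not the authors' actual pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Stand-in for EBM historical reporting behavior plus business characteristics.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "Decision tree": DecisionTreeClassifier(random_state=42),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=42),
    "Gradient boosting": GradientBoostingClassifier(random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
}

# Fit each model and compare held-out accuracy, as in the study
# (which reports 92.3% for random forest on its EBM data).
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```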

26 pages, 4374 KiB  
Review
A Review on Emerging Communication and Computational Technologies for Increased Use of Plug-In Electric Vehicles
by Vinay Simha Reddy Tappeta, Bhargav Appasani, Suprava Patnaik and Taha Selim Ustun
Energies 2022, 15(18), 6580; https://doi.org/10.3390/en15186580 - 8 Sep 2022
Cited by 28 | Viewed by 3772
Abstract
The electric vehicle (EV) industry is growing quickly and will see even greater demand in the future. EV sales rose sharply by 160% in 2021, representing 26% of new sales in the worldwide automotive market. EVs are deemed to be the transportation of the future, as they offer significant cost savings and reduce carbon emissions. However, their interactions with the power grid, charging stations, and households require new communication and control techniques. EVs exhibit unprecedented load behavior during battery charging, and sending charge from the vehicle's battery back to the grid via a charging station during peak hours affects grid operation. Balancing the load during peak hours, i.e., managing the energy exchanged between the grid and the vehicle, requires efficient communication protocols, standards, and computational technologies that are essential for improving the performance, efficiency, and security of vehicle-to-vehicle, vehicle-to-grid (V2G), and grid-to-vehicle (G2V) communication. Machine learning and deep learning technologies are being used to manage EV-charging station interactions, estimate charging behavior, and use EVs for load balancing and stability control in smart grids. Internet of Things (IoT) technology can be used to manage EV charging stations and monitor EV batteries. Recently, much work has been presented in the EV communication and control domain. To categorize these efforts in a meaningful manner and highlight their contributions to advancing the migration to EVs, a thorough survey is required. This paper presents the existing literature on emerging protocols, standards, communication technologies, and computational technologies for EVs. Frameworks, standards, architectures, and protocols proposed by various authors are discussed to serve researchers implementing applications in the EV domain. Security plays a vital role in EV authentication and billing activities, since hackers may exploit the hardware, such as sensors, and the electronic systems and software of the EV for various malicious activities. Various authors have proposed standards and protocols for mitigating cyber-attacks in the complex EV ecosystem.
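
As a toy illustration of the peak-hour load-balancing idea surveyed here, the following Python sketch decides whether an EV should charge (G2V), discharge (V2G), or idle based on grid load. All thresholds, names, and the control logic are hypothetical simplifications, not drawn from any standard or protocol covered in the review.

```python
# Toy illustration of the V2G/G2V load-balancing idea: during peak demand
# an EV discharges to the grid (V2G); off-peak it charges (G2V). All
# thresholds, names, and units are hypothetical, not from any standard
# or protocol discussed in the review.
from dataclasses import dataclass

@dataclass
class EVState:
    soc: float             # state of charge, 0.0-1.0
    min_soc: float = 0.3   # reserve the driver always keeps
    max_soc: float = 0.9   # avoid stressing the battery near full

def dispatch(ev: EVState, grid_load_frac: float,
             peak_threshold: float = 0.8) -> str:
    """Return a charging-station action for one control interval."""
    if grid_load_frac >= peak_threshold and ev.soc > ev.min_soc:
        return "discharge"   # V2G: support the grid during peak hours
    if grid_load_frac < peak_threshold and ev.soc < ev.max_soc:
        return "charge"      # G2V: replenish the battery off-peak
    return "idle"

if __name__ == "__main__":
    ev = EVState(soc=0.7)
    for load in (0.5, 0.85, 0.95):
        print(f"grid load {load:.0%}: {dispatch(ev, load)}")
```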

17 pages, 2213 KiB  
Article
Early Detection of Septic Shock Onset Using Interpretable Machine Learners
by Debdipto Misra, Venkatesh Avula, Donna M. Wolk, Hosam A. Farag, Jiang Li, Yatin B. Mehta, Ranjeet Sandhu, Bipin Karunakaran, Shravan Kethireddy, Ramin Zand and Vida Abedi
J. Clin. Med. 2021, 10(2), 301; https://doi.org/10.3390/jcm10020301 - 15 Jan 2021
Cited by 36 | Viewed by 27209
Abstract
Background: Developing a decision support system based on advances in machine learning is one area for strategic innovation in healthcare. Predicting a patient's progression to septic shock is an active field of translational research. The goal of this study was to develop a working model of a clinical decision support system for predicting septic shock in an acute care setting for up to 6 h from the time of admission in an integrated healthcare setting. Method: Clinical data from the Electronic Health Record (EHR), at the encounter level, were used to build a predictive model for progression from sepsis to septic shock up to 6 h from the time of admission; that is, T = 1, 3, and 6 h from admission. Eight different machine learning algorithms (Random Forest, XGBoost, C5.0, Decision Trees, Boosted Logistic Regression, Support Vector Machine, Logistic Regression, Regularized Logistic, and Bayes Generalized Linear Model) were used for model development. Two adaptive sampling strategies were used to address the class imbalance. Data from two sources (clinical and billing codes) were used to define the case definition (septic shock) using the Centers for Medicare & Medicaid Services (CMS) Sepsis criteria. Model assessment was performed using the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity. Model predictions for each feature window (1, 3, and 6 h from admission) were consolidated. Results: Retrospective data from April 2005 to September 2018 were extracted from the EHR, insurance claims, billing, and laboratory systems to create a dataset for septic shock detection. The clinical criteria and billing information were used to label patients into two classes: septic shock patients and sepsis patients, at three different time points from admission, creating two different case-control cohorts. Data from 45,425 unique in-patient visits were used to build 96 prediction models comparing the clinical-based definition versus billing-based information as the gold standard. Of the 24 consolidated models (based on eight machine learning algorithms and three feature windows), four reached an AUROC greater than 0.9. Overall, all the consolidated models reached an AUROC of at least 0.8820. The best model, based on Random Forest, achieved an AUROC of 0.9483, with a sensitivity of 83.9% and a specificity of 88.1%. The sepsis detection window at 6 h outperformed the 1- and 3-h windows. The sepsis definition based on clinical variables had improved performance compared to the definition based on billing information alone. Conclusion: This study corroborated that machine learning models can be developed to predict septic shock using clinical and administrative data. However, the use of clinical information to define septic shock outperformed models developed from administrative data alone. Intelligent decision support tools can be developed, integrated into the EHR, and used to improve clinical outcomes and facilitate real-time optimization of resources.
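
A minimal sketch of the kind of imbalance-aware training and evaluation loop described here is shown below, with synthetic data standing in for the encounter-level EHR features. SMOTE is used as a stand-in for the paper's adaptive sampling strategies, which the abstract does not specify, and the model and metrics follow scikit-learn/imbalanced-learn conventions rather than the authors' code.

```python
# Sketch of imbalance-aware training and AUROC/sensitivity/specificity
# evaluation, as described in the abstract above. Synthetic data stands
# in for encounter-level EHR features at one feature window; SMOTE is an
# assumed substitute for the paper's (unspecified) sampling strategies.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, recall_score
from imblearn.over_sampling import SMOTE

# Septic shock (positive class) is rare relative to sepsis, so the
# synthetic labels are heavily imbalanced.
X, y = make_classification(n_samples=5000, n_features=30,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Rebalance only the training data, never the test set.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_res, y_res)
proba = model.predict_proba(X_test)[:, 1]
pred = model.predict(X_test)

print(f"AUROC:       {roc_auc_score(y_test, proba):.4f}")
print(f"Sensitivity: {recall_score(y_test, pred):.3f}")               # true-positive rate
print(f"Specificity: {recall_score(y_test, pred, pos_label=0):.3f}")  # true-negative rate
```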

19 pages, 1354 KiB  
Article
Optimized Identification of Advanced Chronic Kidney Disease and Absence of Kidney Disease by Combining Different Electronic Health Data Resources and by Applying Machine Learning Strategies
by Christoph Weber, Lena Röschke, Luise Modersohn, Christina Lohr, Tobias Kolditz, Udo Hahn, Danny Ammon, Boris Betz and Michael Kiehntopf
J. Clin. Med. 2020, 9(9), 2955; https://doi.org/10.3390/jcm9092955 - 12 Sep 2020
Cited by 13 | Viewed by 4033
Abstract
Automated identification of advanced chronic kidney disease (CKD ≥ III) and of no known kidney disease (NKD) can support both clinicians and researchers. We hypothesized that identification of CKD and NKD can be improved by combining information from different electronic health record (EHR) resources, comprising laboratory values, discharge summaries, and ICD-10 billing codes, compared to using each component alone. We included EHRs from 785 elderly multimorbid patients, hospitalized between 2010 and 2015, that were divided into a training dataset and a test dataset (n = 156). We used both the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUCPR), each with a 95% confidence interval, for the evaluation of different classification models. In the test dataset, the combination of EHR components as a simple classifier identified CKD ≥ III (AUROC 0.96 [0.93–0.98]) and NKD (AUROC 0.94 [0.91–0.97]) better than laboratory values (AUROC CKD 0.85 [0.79–0.90], NKD 0.91 [0.87–0.94]), discharge summaries (AUROC CKD 0.87 [0.82–0.92], NKD 0.84 [0.79–0.89]), or ICD-10 billing codes (AUROC CKD 0.85 [0.80–0.91], NKD 0.77 [0.72–0.83]) alone. Logistic regression and machine learning models improved recognition of CKD ≥ III compared to the simple classifier if only laboratory values were used (AUROC 0.96 [0.92–0.99] vs. 0.86 [0.81–0.91], p < 0.05), and improved recognition of NKD if information from previous hospital stays was used (AUROC 0.99 [0.98–1.00] vs. 0.95 [0.92–0.97], p < 0.05). Depending on the availability of data, correct automated identification of CKD ≥ III and NKD from EHRs can be improved by generating classification models based on the combination of different EHR components.
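
The following Python sketch illustrates the evaluation idea: score the outcome from each EHR component alone and from their combination, comparing AUROC and AUCPR. The component scores here are synthetic stand-ins; the real study derives them from laboratory values, discharge summaries, and ICD-10 codes, and its models and cohorts differ from this toy setup.

```python
# Sketch of single-component vs. combined-component evaluation with AUROC
# and AUCPR, mirroring the study's comparison. The three component scores
# are synthetic stand-ins, not derived from real EHR data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n = 785  # cohort size borrowed from the study, for flavor only
y = rng.binomial(1, 0.3, size=n)  # 1 = CKD >= III; prevalence is illustrative

# One noisy score per EHR component, each weakly predictive on its own.
components = {
    "laboratory values": y + rng.normal(0, 1.2, n),
    "discharge summaries": y + rng.normal(0, 1.4, n),
    "ICD-10 billing codes": y + rng.normal(0, 1.5, n),
}

for name, score in components.items():
    print(f"{name}: AUROC = {roc_auc_score(y, score):.2f}, "
          f"AUCPR = {average_precision_score(y, score):.2f}")

# Combining the components (here via logistic regression evaluated on a
# held-out split) typically beats any single source, mirroring the finding.
X = np.column_stack(list(components.values()))
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
combined = LogisticRegression().fit(X_train, y_train).predict_proba(X_test)[:, 1]
print(f"combined: AUROC = {roc_auc_score(y_test, combined):.2f}, "
      f"AUCPR = {average_precision_score(y_test, combined):.2f}")
```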
