Search Results (25)

Search Parameters:
Journal = J
Section = Computer Science & Mathematics

23 pages, 360 KiB  
Article
Depicting Falsifiability in Algebraic Modelling
by Achim Schlather and Martin Schlather
J 2025, 8(3), 23; https://doi.org/10.3390/j8030023 - 4 Jul 2025
Viewed by 265
Abstract
This paper investigates how algebraic structures can encode epistemic limitations, with a focus on object properties and measurement. Drawing on philosophical concepts such as underdetermination, we argue that the weakening of algebraic laws can reflect foundational ambiguities in empirical access. Our approach supplies instruments that are necessary and sufficient for practical falsifiability. Besides introducing this new concept, we consider, as an illustrative starting point, two fundamental algebraic laws in more detail: the associative law and the commutative law. We explore and analyze weakened forms of these laws. As a mathematical feature, we demonstrate that the existence of a weak neutral element leads to the emergence of several transversal algebraic laws. Most of these laws are individually weaker than the combination of associativity and commutativity, but many pairs of them are jointly equivalent to this combination. We also show that associativity and commutativity can be combined into a single, simple law, which we call cyclicity. We illustrate our approach with numerous tables and practical examples. Full article
(This article belongs to the Section Computer Science & Mathematics)
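The abstract does not state the exact form of the cyclicity law, so the identity below is an assumption: a natural single-law candidate is (x∘y)∘z = (y∘z)∘x, which, in the presence of a neutral element, is equivalent to associativity plus commutativity. The sketch brute-forces this equivalence over all binary operations on a small set with a fixed neutral element 0.

```python
from itertools import product

def check_cyclicity_equivalence(n=3):
    """Brute-force: over every binary operation on {0..n-1} with 0 as a
    two-sided neutral element, verify that the single 'cyclic' law
    (x*y)*z == (y*z)*x holds exactly when associativity and
    commutativity both hold."""
    elems = range(n)
    free_cells = [(i, j) for i in range(1, n) for j in range(1, n)]
    for values in product(elems, repeat=len(free_cells)):
        op = [[0] * n for _ in range(n)]
        for k in elems:            # 0 is neutral on both sides
            op[0][k] = k
            op[k][0] = k
        for (i, j), v in zip(free_cells, values):
            op[i][j] = v
        cyclic = all(op[op[x][y]][z] == op[op[y][z]][x]
                     for x in elems for y in elems for z in elems)
        assoc = all(op[op[x][y]][z] == op[x][op[y][z]]
                    for x in elems for y in elems for z in elems)
        comm = all(op[x][y] == op[y][x] for x in elems for y in elems)
        if cyclic != (assoc and comm):
            return False
    return True
```

Setting y = 0 in the cyclic law yields commutativity, and commutativity then turns the cyclic law into associativity; the converse direction holds without any neutral element.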
20 pages, 3398 KiB  
Article
A Novel Bio-Inspired Bird Flocking Node Scheduling Algorithm for Dependable Safety-Critical Wireless Sensor Network Systems
by Issam Al-Nader, Rand Raheem and Aboubaker Lasebae
J 2025, 8(2), 19; https://doi.org/10.3390/j8020019 - 20 May 2025
Viewed by 856
Abstract
The Multi-Objective Optimization Problem (MOOP) in Wireless Sensor Networks (WSNs) is a challenging issue that requires balancing multiple conflicting objectives, such as simultaneously maintaining coverage, connectivity, and network lifetime. These objectives are essential for functioning safety-critical WSN applications, whether in environmental monitoring, military surveillance, or smart cities. To address these challenges, we propose a novel bio-inspired Bird Flocking Node Scheduling algorithm, which takes inspiration from the natural flocking behavior of birds migrating over long distances to optimize sensor node activity in a distributed and energy-efficient manner. The proposed algorithm integrates a Lyapunov function to maintain connected coverage while optimizing energy efficiency, ensuring service availability and reliability. The effectiveness of the algorithm is evaluated through extensive simulations in MATLAB R2018b coupled with a Pareto front, comparing its performance with our previously developed BAT node scheduling algorithm. The results demonstrate significant improvements across key performance metrics: network coverage is enhanced by 8%, connectivity by 10%, and network lifetime by an impressive 80%. These findings highlight the potential of bio-inspired Bird Flocking optimization techniques in advancing WSN dependability, making such networks more sustainable and suitable for real-world safety-critical systems. Full article
(This article belongs to the Section Computer Science & Mathematics)
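The evaluation above uses a Pareto front over the three conflicting objectives. As a minimal sketch (with hypothetical objective values, not the paper's data), extracting the non-dominated schedules when all objectives are maximized looks like this:

```python
def pareto_front(points):
    """Return the non-dominated subset of candidate schedules, where each
    point is a tuple of objectives to maximize, e.g.
    (coverage, connectivity, lifetime)."""
    def dominates(p, q):
        # p dominates q: at least as good in every objective, strictly better in one
        return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

schedules = [
    (0.92, 0.88, 310),   # hypothetical (coverage, connectivity, lifetime in days)
    (0.85, 0.95, 350),
    (0.80, 0.80, 290),   # dominated by the first schedule
]
front = pareto_front(schedules)
```

The first two schedules trade coverage against connectivity and lifetime, so both survive; the third is worse everywhere and is discarded.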
18 pages, 1746 KiB  
Article
Energy Performance Analysis and Output Prediction Pipeline for East-West Solar Microgrids
by Khanh Nguyen, Kevin Koch, Swati Chandna and Binh Vu
J 2024, 7(4), 421-438; https://doi.org/10.3390/j7040025 - 21 Oct 2024
Viewed by 1722
Abstract
Local energy networks, known as microgrids, can operate independently or in conjunction with the main grid, offering numerous benefits such as enhanced reliability, sustainability, and efficiency. This study focuses on analyzing the factors that influence energy performance in East-West microgrids, which have the unique advantage of capturing solar radiation from both directions, maximizing energy production throughout the day. A predictive pipeline was also developed to assess the performance of various machine learning models in forecasting energy output. Key input data for the models included solar radiation levels, photovoltaic (DC) energy, and the losses incurred during the conversion from DC to AC energy. One of the study’s significant findings was that the east side of the microgrid received higher radiation and experienced fewer losses compared to the west side, illustrating the importance of orientation for efficiency. Another noteworthy result was the predicted total energy supplied to the grid, valued at €15,423. This demonstrates that the optimized energy generation not only meets grid demand but also generates economic value by enabling the sale of excess energy back to the grid. The machine learning models—Random Forest, Extreme Gradient Boosting, and Recurrent Neural Networks—showed superior performance in energy prediction, with mean squared errors of 0.000318, 0.000104, and 0.000081, respectively. The research concludes that East-West microgrids have substantial potential to generate significant energy and economic benefits. The developed energy prediction pipeline can serve as a useful tool for optimizing microgrid operations and improving their integration with the main grid. Full article
(This article belongs to the Section Computer Science & Mathematics)
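The pipeline above ranks models by mean squared error. A minimal sketch of that selection step, with toy predictions (the study's actual data and fitted models are not reproduced here):

```python
def mse(y_true, y_pred):
    """Mean squared error between two equal-length sequences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0.40, 0.55, 0.61, 0.47]          # normalized energy output (toy values)
predictions = {                             # hypothetical model outputs
    "RandomForest": [0.42, 0.53, 0.60, 0.50],
    "XGBoost":      [0.41, 0.55, 0.62, 0.48],
    "RNN":          [0.40, 0.54, 0.61, 0.47],
}
scores = {name: mse(y_true, p) for name, p in predictions.items()}
best = min(scores, key=scores.get)
```

On these toy numbers the RNN has the lowest error, mirroring the ordering of the errors reported in the abstract (0.000318, 0.000104, 0.000081).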
12 pages, 320 KiB  
Article
Bias-Reduced Haebara and Stocking–Lord Linking
by Alexander Robitzsch
J 2024, 7(3), 373-384; https://doi.org/10.3390/j7030021 - 4 Sep 2024
Cited by 1 | Viewed by 1382
Abstract
Haebara and Stocking–Lord linking methods are frequently used to compare the distributions of two groups. Previous research has demonstrated that Haebara and Stocking–Lord linking can produce bias in estimated standard deviations and, to a smaller extent, in estimated means in the presence of differential item functioning (DIF). This article determines the asymptotic bias of the two linking methods for the 2PL model. A bias-reduced Haebara and bias-reduced Stocking–Lord linking method is proposed to reduce the bias due to uniform DIF effects. The performance of the new linking method is evaluated in a simulation study. In general, it turned out that Stocking–Lord linking had substantial advantages over Haebara linking in the presence of DIF effects. Moreover, bias-reduced Haebara and Stocking–Lord linking substantially reduced the bias in the estimated standard deviation. Full article
(This article belongs to the Section Computer Science & Mathematics)
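For readers unfamiliar with Haebara linking: it chooses the transformation (A, B) that minimizes the summed squared difference between item characteristic curves once group-2 parameters are rescaled onto the group-1 metric (a → a/A, b → A·b + B). The sketch below is a simplified version with uniform weights on a coarse θ grid and DIF-free toy parameters, so the criterion is minimized exactly at the generating transformation; the paper's estimation details differ.

```python
import math

def p2pl(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def haebara_loss(A, B, items1, items2, grid):
    """Summed squared ICC differences after rescaling group-2 parameters
    (a -> a/A, b -> A*b + B) onto the group-1 scale."""
    loss = 0.0
    for (a1, b1), (a2, b2) in zip(items1, items2):
        for th in grid:
            loss += (p2pl(th, a1, b1) - p2pl(th, a2 / A, A * b2 + B)) ** 2
    return loss

# DIF-free toy parameters generated under the true transformation A=1.2, B=0.5
items1 = [(1.0, -0.5), (1.4, 0.2), (0.8, 1.0)]
A_true, B_true = 1.2, 0.5
items2 = [(a * A_true, (b - B_true) / A_true) for a, b in items1]

grid = [-3 + 0.5 * k for k in range(13)]   # theta grid from -3 to 3
candidates = [(A, B) for A in (1.0, 1.1, 1.2, 1.3) for B in (0.3, 0.4, 0.5, 0.6)]
A_hat, B_hat = min(candidates,
                   key=lambda ab: haebara_loss(ab[0], ab[1], items1, items2, grid))
```

With DIF present, this criterion becomes biased, which is the situation the bias-reduced variants in the article address.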
17 pages, 4155 KiB  
Article
Enhancing Pulmonary Diagnosis in Chest X-rays through Generative AI Techniques
by Theodora Sanida, Maria Vasiliki Sanida, Argyrios Sideris and Minas Dasygenis
J 2024, 7(3), 302-318; https://doi.org/10.3390/j7030017 - 13 Aug 2024
Cited by 2 | Viewed by 3608
Abstract
Chest X-ray imaging is an essential tool in the diagnostic procedure for pulmonary conditions, providing healthcare professionals with the capability to immediately and accurately determine lung anomalies. This imaging modality is fundamental in assessing and confirming the presence of various lung issues, allowing for timely and effective medical intervention. In response to the widespread prevalence of pulmonary infections globally, there is a growing imperative to adopt automated systems that leverage deep learning (DL) algorithms. These systems are particularly adept at handling large radiological datasets and providing high precision. This study introduces an advanced identification model that utilizes the VGG16 architecture, specifically adapted for identifying various lung anomalies such as opacity, COVID-19 pneumonia, normal appearance of the lungs, and viral pneumonia. Furthermore, we address the issue of model generalizability, which is of prime significance in our work. We employed the data augmentation technique through CycleGAN, which, through experimental outcomes, has proven effective in enhancing the robustness of our model. The combined performance of our advanced VGG model with the CycleGAN augmentation technique demonstrates remarkable outcomes in several evaluation metrics, including recall, F1-score, accuracy, precision, and area under the curve (AUC). The results of the advanced VGG16 model showcased remarkable accuracy, achieving 98.58%. This study contributes to advancing generative artificial intelligence (AI) in medical imaging analysis and establishes a solid foundation for ongoing developments in computer vision technologies within the healthcare sector. Full article
(This article belongs to the Special Issue Integrating Generative AI with Medical Imaging)
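Among the evaluation metrics listed is the area under the curve (AUC), which equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal pair-counting sketch of that quantity (toy scores, not the study's outputs):

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a randomly chosen positive case is
    scored above a randomly chosen negative one (ties count 1/2)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

For example, `auc([0.9, 0.4], [0.8, 0.3])` is 0.75: three of the four positive/negative pairs are ranked correctly.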
18 pages, 7775 KiB  
Article
Enhancing Obscured Regions in Thermal Imaging: A Novel GAN-Based Approach for Efficient Occlusion Inpainting
by Mohammed Abuhussein, Iyad Almadani, Aaron L. Robinson and Mohammed Younis
J 2024, 7(3), 218-235; https://doi.org/10.3390/j7030013 - 27 Jun 2024
Viewed by 1534
Abstract
This research paper presents a novel approach for occlusion inpainting in thermal images, efficiently segmenting and enhancing obscured regions within these images. The increasing reliance on thermal imaging in fields like surveillance, security, and defense necessitates the accurate detection of obscurants such as smoke and fog. Traditional methods often struggle with these complexities, creating the need for more advanced solutions. Our proposed methodology uses a Generative Adversarial Network (GAN) to fill occluded areas in thermal images. The process begins with segmentation of the obscured region, followed by GAN-based pixel replacement in these areas. The methodology encompasses building, training, evaluating, and optimizing the model to ensure swift real-time performance. One of the key challenges in thermal imaging is identifying effective strategies to mitigate critical information loss due to atmospheric interference. Our approach addresses this by employing sophisticated deep-learning techniques that segment, classify, and inpaint obscured regions in a patch-wise manner, allowing for more precise and accurate image restoration. We propose utilizing architectures similar to the Pix2Pix and UNet networks for the generative and segmentation tasks; these networks are known for their effectiveness in image-to-image translation and segmentation, and our method leverages their architectural similarities to enhance the segmentation and inpainting process. To validate our approach, we provide a quantitative analysis and performance comparison between Pix2Pix, UNet, and our combined architecture, focusing on how well each model performs in terms of accuracy and speed and highlighting the advantages of our integrated approach. This research contributes to advancing thermal imaging techniques, offering a more robust solution for dealing with obscured regions. The integration of advanced deep learning models holds the potential to significantly improve image analysis in critical applications like surveillance and security. Full article
(This article belongs to the Section Computer Science & Mathematics)
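The patch-wise processing described above can be sketched as an outer loop over tiles, with a trivial stand-in for the GAN generator (here, filling masked pixels with the mean of the tile's visible pixels; the paper's actual generator is a Pix2Pix-style network):

```python
def inpaint_patchwise(image, patch=2):
    """Walk an image (list of lists) in patch x patch tiles and fill masked
    pixels (None) using a stand-in for the GAN generator: the mean of the
    tile's visible pixels."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tile = [(r, c) for r in range(i, min(i + patch, h))
                           for c in range(j, min(j + patch, w))]
            visible = [image[r][c] for r, c in tile if image[r][c] is not None]
            fill = sum(visible) / len(visible) if visible else 0.0
            for r, c in tile:
                if out[r][c] is None:
                    out[r][c] = fill
    return out

frame = [[1.0, None], [3.0, 4.0]]   # toy 2x2 thermal frame, one occluded pixel
restored = inpaint_patchwise(frame)
```

Working tile by tile keeps each generator call local to the occluded neighbourhood, which is what makes the real-time goal in the abstract plausible.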
26 pages, 13253 KiB  
Article
Dependence on Tail Copula
by Paramahansa Pramanik
J 2024, 7(2), 127-152; https://doi.org/10.3390/j7020008 - 3 Apr 2024
Viewed by 2727
Abstract
In real-world scenarios, we encounter non-exchangeable dependence structures. Our primary focus is on identifying and quantifying non-exchangeability in the tails of joint distributions. The findings and methodologies presented in this study are particularly valuable for modeling bivariate dependence, especially in fields where understanding dependence patterns in the tails is crucial, such as quantitative finance, quantitative risk management, and econometrics. To grasp the intricate relationship between the strength of dependence and various types of margins, we explore three fundamental tail behavior patterns for univariate margins. Capitalizing on the probabilistic features of tail non-exchangeability structures, we introduce graphical techniques and statistical tests designed for analyzing data that may manifest non-exchangeability in the joint tail. The effectiveness of the proposed approaches is illustrated through a simulation study and a practical example. Full article
(This article belongs to the Section Computer Science & Mathematics)
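The paper's own statistics are not reproduced in the abstract, but one simple diagnostic in this spirit is the maximal difference between the empirical copula and its transpose, |Ĉ(u,v) − Ĉ(v,u)|, evaluated on a grid (restricted to the tail region of interest). The function names and the test statistic below are illustrative assumptions, not the article's method:

```python
def empirical_copula(sample):
    """Return a function C(u, v) built from the normalized ranks of a
    bivariate sample given as (x, y) pairs."""
    n = len(sample)
    xs = sorted(x for x, _ in sample)
    ys = sorted(y for _, y in sample)
    def rank(sorted_vals, v):            # normalized rank in (0, 1]
        return sum(1 for s in sorted_vals if s <= v) / n
    pts = [(rank(xs, x), rank(ys, y)) for x, y in sample]
    def C(u, v):
        return sum(1 for pu, pv in pts if pu <= u and pv <= v) / n
    return C

def tail_asymmetry(sample, grid):
    """max |C(u,v) - C(v,u)| over the grid: 0 for exchangeable ranks,
    positive when the dependence structure is non-exchangeable."""
    C = empirical_copula(sample)
    return max(abs(C(u, v) - C(v, u)) for u in grid for v in grid)
```

A sample that is symmetric under swapping coordinates yields exactly zero, while rank patterns that differ above and below the diagonal produce a positive value.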
24 pages, 3065 KiB  
Article
An Advanced Deep Learning Framework for Multi-Class Diagnosis from Chest X-ray Images
by Maria Vasiliki Sanida, Theodora Sanida, Argyrios Sideris and Minas Dasygenis
J 2024, 7(1), 48-71; https://doi.org/10.3390/j7010003 - 22 Jan 2024
Cited by 7 | Viewed by 4852
Abstract
Chest X-ray imaging plays a vital and indispensable role in the diagnosis of lung diseases, enabling healthcare professionals to swiftly and accurately identify lung abnormalities. Deep learning (DL) approaches have attained popularity in recent years and have shown promising results in automated medical image analysis, particularly in the field of chest radiology. This paper presents a novel DL framework specifically designed for the multi-class diagnosis of lung conditions, including fibrosis, opacity, tuberculosis, viral pneumonia, COVID-19 pneumonia, and normal lungs, using chest X-ray images, aiming to address the need for efficient and accessible diagnostic tools. The framework employs a convolutional neural network (CNN) architecture with custom blocks that enhance the feature maps, designed to learn discriminative features from chest X-ray images. The proposed DL framework is evaluated on a large-scale dataset, demonstrating superior performance in multi-class lung diagnosis. To evaluate the effectiveness of the presented approach, thorough experiments are conducted against pre-existing state-of-the-art methods, revealing significant improvements in accuracy, sensitivity, and specificity. The study achieved a remarkable accuracy of 98.88%, and the precision, recall, F1-score, and Area Under the Curve (AUC) averaged 0.9870, 0.9904, 0.9887, and 0.9939 across the six-class categorization system. This research contributes to the field of medical imaging and provides a foundation for future advancements in DL-based diagnostic systems for lung diseases. Full article
(This article belongs to the Special Issue Integrating Generative AI with Medical Imaging)
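The averaged precision, recall, and F1 values reported above are macro averages over the six classes: per-class scores computed from the confusion matrix, then averaged. A minimal sketch (toy two-class matrix, not the study's results):

```python
def macro_metrics(cm):
    """Macro-averaged precision, recall, and F1 from a square confusion
    matrix cm[true][pred]."""
    k = len(cm)
    precs, recs, f1s = [], [], []
    for c in range(k):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(k)) - tp   # predicted c, actually other
        fn = sum(cm[c]) - tp                        # actually c, predicted other
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        precs.append(p)
        recs.append(r)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    n = float(k)
    return sum(precs) / n, sum(recs) / n, sum(f1s) / n
```

Macro averaging weights every class equally, which matters here because classes such as COVID-19 pneumonia and fibrosis are unlikely to be equally frequent.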
27 pages, 461 KiB  
Article
Linking Error in the 2PL Model
by Alexander Robitzsch
J 2023, 6(1), 58-84; https://doi.org/10.3390/j6010005 - 11 Jan 2023
Cited by 3 | Viewed by 2926
Abstract
The two-parameter logistic (2PL) item response model is likely the most frequently applied item response model for analyzing dichotomous data. Linking errors quantify the variability in means or standard deviations due to the choice of items. Previous research presented analytical work for linking errors in the one-parameter logistic model. In this article, we present linking errors for the 2PL model using the general theory of M-estimation. Linking errors are derived in the case of log-mean-mean linking for linking two groups. The performance of the newly proposed formulas is evaluated in a simulation study. Furthermore, the linking error estimation in the 2PL model is also treated in more complex settings, such as chain linking, trend estimation, fixed item parameter calibration, and concurrent calibration. Full article
(This article belongs to the Section Computer Science & Mathematics)
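For context, log-mean-mean linking (the case treated in the article) matches the two groups' item parameters on average: geometric mean for discriminations, arithmetic mean for difficulties. The sketch below assumes the convention that group-2 parameters are placed on the group-1 scale via a → a/A and b → A·b + B; with DIF-free toy parameters the generating transformation is recovered exactly:

```python
import math

def log_mean_mean(items1, items2):
    """Log-mean-mean linking for the 2PL: choose A, B so that group-2 item
    parameters, rescaled via a -> a/A and b -> A*b + B, match group 1 on
    average (geometric mean for a, arithmetic mean for b)."""
    n = len(items1)
    log_a1 = sum(math.log(a) for a, _ in items1) / n
    log_a2 = sum(math.log(a) for a, _ in items2) / n
    A = math.exp(log_a2 - log_a1)
    mean_b1 = sum(b for _, b in items1) / n
    mean_b2 = sum(b for _, b in items2) / n
    B = mean_b1 - A * mean_b2
    return A, B

# DIF-free toy parameters generated under the true transformation A=1.2, B=0.5
items1 = [(1.0, -0.5), (1.5, 0.3), (0.9, 1.1)]
items2 = [(a * 1.2, (b - 0.5) / 1.2) for a, b in items1]
A_hat, B_hat = log_mean_mean(items1, items2)
```

The linking error studied in the article quantifies how much such estimates of A and B vary with the particular items chosen for the link.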
11 pages, 283 KiB  
Article
The Good, the Bad, and the Invisible with Its Opportunity Costs: Introduction to the ‘J’ Special Issue on “the Impact of Artificial Intelligence on Law”
by Ugo Pagallo and Massimo Durante
J 2022, 5(1), 139-149; https://doi.org/10.3390/j5010011 - 19 Feb 2022
Cited by 3 | Viewed by 6558
Abstract
Scholars and institutions have been increasingly debating the moral and legal challenges of AI, together with the models of governance that should strike the balance between the opportunities and threats brought forth by AI, its ‘good’ and ‘bad’ facets. There are more than a hundred declarations on the ethics of AI, and recent proposals for AI regulation, such as the European Commission’s AI Act, have further multiplied the debate. Still, one normative challenge of AI is mostly overlooked: the underuse, rather than the misuse or overuse, of AI from a legal viewpoint. From health care to environmental protection, from agriculture to transportation, there are many instances in which the benefits and promises of AI can be missed or exploited far below their full potential, and for the wrong reasons: business disincentives and greed among data keepers, bureaucracy and professional reluctance, or public distrust in the era of no-vax conspiracy theories. The opportunity costs that follow this technological underuse are almost terra incognita due to the ‘invisibility’ of the phenomenon, which includes the economy’s ‘shadow prices’. This introduction provides metrics for such an assessment and relates this work to the development of new standards for the field. We must quantify how much it costs not to use AI systems for the wrong reasons. Full article
(This article belongs to the Special Issue The Impact of Artificial Intelligence on Law)
13 pages, 267 KiB  
Article
Metrics, Explainability and the European AI Act Proposal
by Francesco Sovrano, Salvatore Sapienza, Monica Palmirani and Fabio Vitali
J 2022, 5(1), 126-138; https://doi.org/10.3390/j5010010 - 18 Feb 2022
Cited by 17 | Viewed by 12330
Abstract
On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act considers not only machine learning but also expert systems and statistical models long in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms to ensure quality at launch and throughout the whole life cycle of AI-based systems, thus ensuring legal certainty that encourages innovation and investments in AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that are compliant with the forthcoming Act, and explainability metrics play a significant role. Specifically, the AI Act sets new minimum requirements of explicability (transparency and explainability) for the AI systems labelled as “high-risk” in Annex III. These requirements call for technical explanations that cover the right amount of information in a meaningful way. This paper investigates how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, we propose an analysis of the AI Act, aiming to understand (1) what specific explicability obligations are set and who shall comply with them, and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in a practical way. More precisely, drawing inspiration from recent advancements in the theory of explanations, our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. Therefore, we discuss the extent to which these requirements are met by the metrics currently under discussion. Full article
(This article belongs to the Special Issue The Impact of Artificial Intelligence on Law)
28 pages, 965 KiB  
Article
Law, Socio-Legal Governance, the Internet of Things, and Industry 4.0: A Middle-Out/Inside-Out Approach
by Pompeu Casanovas, Louis de Koker and Mustafa Hashmi
J 2022, 5(1), 64-91; https://doi.org/10.3390/j5010005 - 21 Jan 2022
Cited by 9 | Viewed by 8187
Abstract
The Web of Data, the Internet of Things, and Industry 4.0 are converging, and society is challenged to ensure that appropriate regulatory responses can uphold the rule of law fairly and effectively in this emerging context. The challenge extends beyond merely submitting digital processes to the law. We contend that the 20th century notion of ‘legal order’ alone will not be suitable to produce the social order that the law should bring. The article explores the concepts of rule of law and of legal governance in digital and blockchain environments. We position legal governance from an empirical perspective, i.e., as an explanatory and validation concept to support the implementation of the rule of law in the new digital environments. As a novel contribution, this article (i) progresses some of the work done on the metarule of law and complements the SMART middle-out approach with an inside-out approach to digital regulatory systems and legal compliance models; (ii) sets the state-of-the-art and identifies the way to explain and validate legal information flows and hybrid agents’ behaviour; (iii) describes a phenomenological and historical approach to legal and political forms; and (iv) shows the utility of separating enabling and driving regulatory systems. Full article
(This article belongs to the Special Issue The Impact of Artificial Intelligence on Law)
18 pages, 453 KiB  
Article
Argumentation and Defeasible Reasoning in the Law
by Marco Billi, Roberta Calegari, Giuseppe Contissa, Francesca Lagioia, Giuseppe Pisano, Galileo Sartor and Giovanni Sartor
J 2021, 4(4), 897-914; https://doi.org/10.3390/j4040061 - 18 Dec 2021
Cited by 3 | Viewed by 4719
Abstract
Different formalisms for defeasible reasoning have been used to represent knowledge and reason in the legal field. In this work, we provide an overview of the following logic-based approaches to defeasible reasoning: defeasible logic, Answer Set Programming, ABA+, ASPIC+, and DeLP. We compare features of these approaches under three perspectives: the logical model (knowledge representation), the method (computational mechanisms), and the technology (available software resources). On top of that, two real examples in the legal domain are designed and implemented in ASPIC+ to showcase the benefit of an argumentation approach in real-world domains. The CrossJustice and Interlex projects are taken as a testbed, and experiments are conducted with the Arg2P technology. Full article
(This article belongs to the Special Issue The Impact of Artificial Intelligence on Law)
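To give a flavor of defeasible reasoning in the legal domain, the sketch below shows the core idea shared by the surveyed formalisms in a drastically simplified form: a general rule defeated by a higher-priority exception. The rules and example are hypothetical, and this is far simpler than ASPIC+, ABA+, or Arg2P, which handle full argumentation semantics:

```python
# Each rule: (priority, premises, conclusion); higher priority wins on conflict.
# Hypothetical legal example: a contract normally creates an obligation, but the
# more specific rule for minors defeats the general one.
RULES = [
    (1, {"contract"}, ("obligation", True)),
    (2, {"contract", "minor"}, ("obligation", False)),
]

def conclude(facts, rules, literal):
    """Return the truth value of `literal` under the highest-priority
    applicable rule, or None if no rule about it applies."""
    applicable = [(prio, concl) for prio, prem, concl in rules
                  if concl[0] == literal and prem <= facts]
    if not applicable:
        return None
    _, (_, value) = max(applicable, key=lambda x: x[0])
    return value
```

With facts {contract} the obligation holds; adding the fact "minor" makes the exception applicable and the conclusion flips, which is the non-monotonic behavior the surveyed formalisms capture in far greater generality.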
10 pages, 236 KiB  
Article
Nothing to Be Happy about: Consumer Emotions and AI
by Mateja Durovic and Jonathon Watson
J 2021, 4(4), 784-793; https://doi.org/10.3390/j4040053 - 16 Nov 2021
Cited by 1 | Viewed by 4922
Abstract
Advancements in artificial intelligence and Big Data allow a range of goods and services to determine and respond to a consumer’s emotional state of mind. Considerable potential surrounds the technological ability to detect and respond to an individual’s emotions, yet such technology is also controversial and raises questions about the legal protection of emotions. Despite their highly sensitive and private nature, this article highlights the inadequate protection of emotions in aspects of data protection and consumer protection law, arguing that the recent proposal for an Artificial Intelligence Act is not only unsuitable for overcoming such deficits but also does little to support the assertion that emotions are highly sensitive. Full article
(This article belongs to the Special Issue The Impact of Artificial Intelligence on Law)
20 pages, 3088 KiB  
Article
A Novel True Random Number Generator in Near Field Communication as Memristive Wireless Power Transmission
by Colin Sokol Kuka, Yihua Hu, Quan Xu, James Chandler and Mohammed Alkahtani
J 2021, 4(4), 764-783; https://doi.org/10.3390/j4040052 - 11 Nov 2021
Viewed by 3597
Abstract
The security of powering systems has been a major problem over the last decade, leading to increased interest in wireless power and data transfer. In this research paper, a new inductive Wireless Power Transfer (WPT) circuit topology is used. In traditional WPT circuits, inverters produce an oscillation for the transmitter coils. The classic WPT system includes intrinsic energy dissipation sources due to the use of switches and requires an extra control circuit to ensure proper switching time. Furthermore, such systems have limited data encryption capabilities. As a result, a unique WPT system based on memristors has been developed, eliminating the need for switches. Because this novel topology transmits a synchronised chaotic behaviour, it becomes highly beneficial: the circuit may be used in Near Field Communication (NFC), where chaotic true random numbers (TRNGs) can be generated to increase security. The results of simulations demonstrate the functioning of the Memristor-based WPT (M-WPT) and its ability to generate random numbers. We experimentally proved the chaotic behaviour of the circuit and statistically demonstrated the generation of the TRNG, using an Arduino board and the Chua circuit to build the M-WPT circuit. Full article
(This article belongs to the Section Computer Science & Mathematics)
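As a toy illustration of generating bits from the Chua circuit mentioned above, the sketch integrates the dimensionless Chua equations with forward Euler (standard double-scroll parameters) and thresholds one state variable. This is only a caricature of the paper's M-WPT entropy source: the parameters, step size, and thresholding scheme are assumptions, and bits from a deterministic simulation are chaotic, not truly random:

```python
def chua_bits(n_bits, dt=0.001, steps_per_bit=200):
    """Integrate the Chua circuit (dimensionless form) with forward Euler
    and emit one bit per `steps_per_bit` steps by thresholding x at 0."""
    alpha, beta = 15.6, 28.0          # classic double-scroll parameters
    m0, m1 = -1.143, -0.714           # slopes of the piecewise-linear diode
    x, y, z = 0.7, 0.0, 0.0
    bits = []
    for _ in range(n_bits):
        for _ in range(steps_per_bit):
            # Chua's piecewise-linear nonlinearity
            fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
            dx = alpha * (y - x - fx)
            dy = x - y + z
            dz = -beta * y
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        bits.append(1 if x > 0 else 0)
    return bits
```

A hardware TRNG would mix such a chaotic source with physical noise and post-process the stream before any statistical use.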