Search Results (75)

Search Parameters:
Keywords = black box theory

20 pages, 2774 KiB  
Article
Complex Network Analytics for Structural–Functional Decoding of Neural Networks
by Jiarui Zhang, Dongxiao Zhang, Hu Lou, Yueer Li, Taijiao Du and Yinjun Gao
Appl. Sci. 2025, 15(15), 8576; https://doi.org/10.3390/app15158576 - 1 Aug 2025
Abstract
Neural networks (NNs) achieve breakthroughs in computer vision and natural language processing, yet their “black box” nature persists. Traditional methods prioritise parameter optimisation and loss design, overlooking NNs’ fundamental structure as topologically organised nonlinear computational systems. This work proposes a complex network theory framework decoding structure–function coupling by mapping convolutional layers, fully connected layers, and Dropout modules into graph representations. To overcome limitations of heuristic compression techniques, we develop a topology-sensitive adaptive pruning algorithm that evaluates critical paths via node strength centrality, preserving structural–functional integrity. On CIFAR-10, our method achieves 55.5% parameter reduction with only 7.8% accuracy degradation—significantly outperforming traditional approaches. Crucially, retrained pruned networks exceed original model accuracy by up to 2.63%, demonstrating that topology optimisation unlocks latent model potential. This research establishes a paradigm shift from empirical to topologically rationalised neural architecture design, providing theoretical foundations for deep learning optimisation dynamics.
(This article belongs to the Special Issue Artificial Intelligence in Complex Networks (2nd Edition))
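A minimal sketch of the kind of node-strength-based pruning the abstract describes, assuming a fully connected layer viewed as a weighted bipartite graph; the layer sizes, keep ratio, and zeroing strategy below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def node_strength_pruning(W: np.ndarray, keep_ratio: float = 0.445) -> np.ndarray:
    """Prune output neurons of a fully connected layer by node-strength centrality.

    W has shape (n_out, n_in); a neuron's strength is the sum of absolute weights
    on its incident edges in the layer's bipartite-graph view. The weakest rows
    (low-centrality nodes) are zeroed out.
    """
    strength = np.abs(W).sum(axis=1)               # node strength per output neuron
    n_keep = max(1, int(round(keep_ratio * W.shape[0])))
    keep_idx = np.argsort(strength)[-n_keep:]      # highest-strength neurons survive
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[keep_idx] = True
    W_pruned = W.copy()
    W_pruned[~mask, :] = 0.0                       # remove low-centrality nodes
    return W_pruned

# Illustrative use on a random 256x512 layer (hypothetical sizes).
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))
W_pruned = node_strength_pruning(W)
print(f"zeroed rows: {(np.abs(W_pruned).sum(axis=1) == 0).sum()} of {W.shape[0]}")
```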
11 pages, 194 KiB  
Article
Green Paradox in the Carbon Neutrality Process: A Strategic Game About the Shipping Industry
by Peng Xu, Yukun Cao and Jingye Li
Sustainability 2025, 17(13), 5970; https://doi.org/10.3390/su17135970 - 29 Jun 2025
Viewed by 330
Abstract
The shipping industry plays a significant role in China’s and the global pursuit of carbon neutrality, and it is essential to be cautious about the potential risks associated with the green paradox. This study incorporates Goal Setting Theory and Value Expectations Theory into the analytical framework of the green paradox and tests this framework through a strategic game research design. The study finds that, first, the green paradox of shipping companies has hidden characteristics, and the losses caused by coping strategies are a necessary risk to guard against. Second, the green paradox of shipping companies is mainly caused by decision-makers’ perception of how attainable the goal is; moreover, motivated by the prospect of receiving green subsidies over the long term, decision-makers will delay the carbon neutrality process. Third, policies need to adopt a gradually increasing quota management strategy, accompanied by a variety of policy tools, to reduce the risk of the green paradox. This study opens the theoretical “black box” of market expectations and provides a solution for reducing the risk of the green paradox.
(This article belongs to the Special Issue Sustainable Maritime Logistics and Low-Carbon Transportation)
21 pages, 688 KiB  
Systematic Review
High Performance Work Systems in the Tourism Industry: A Systematic Review
by Javier Montañés-Sanchez, María Dolores Sánchez-Fernández, Jakson Renner Rodrigues Soares and José Ramón-Cardona
Adm. Sci. 2025, 15(6), 195; https://doi.org/10.3390/admsci15060195 - 22 May 2025
Viewed by 644
Abstract
In the business context, human resource management is essential to achieve maximum productivity, making it necessary to build high performance work systems. The aim of this study was to assess the current state of human resources practices integrated into the high performance work systems of tourism companies and to understand the relationship between HPWSs and staff turnover, absenteeism, productivity and accident rate, as well as the black box variables that mediate this relationship. A systematic review of the literature published between 2019 and April 2024 was carried out following the PRISMA 2020 statement, based on a bibliographic search of databases that, after applying eligibility criteria, yielded 18 studies. The academic interest of this review lies in the novelty of the topic, positioning it as a starting point for future theoretical and empirical research that can develop a more robust theory and give visibility to a topic of great impact for companies in the tourism sector, many of them family businesses, as well as for their workers.
42 pages, 3632 KiB  
Article
Decision-Making for Sustainable Digitalization Through Grey Systems Theory: A Bibliometric Overview
by Georgiana-Alina Crișan, Adrian Domenteanu, Mădălina Ecaterina Popescu and Camelia Delcea
Sustainability 2025, 17(10), 4615; https://doi.org/10.3390/su17104615 - 18 May 2025
Viewed by 431
Abstract
As the digitalization trend is progressively establishing a solid foundation in terms of both implementation and scientific research, its effects may be noticed across every sector of the economy. Therefore, offering sustainable solutions becomes essential for implementing digital transitions in a cohesive manner. Additionally, the study of Grey systems is another topic that has relevance when investigating the implications of digitalization in sustainability. Grey systems theory is an elaborate decision-making technique that focuses on objects that incorporate both known and unknown information. This approach emerged from the notion of a “black box” in which “black objects” are defined by the absence of information. Grey systems address the gap between the “black objects” with unknown information and the “white objects” with complete knowledge. The interaction of these domains is centered on the requirement for a decision-making framework that facilitates a sustained digital transformation. The novelty of the paper consists in tackling the implications of Grey systems theory for the economy’s sustainable digitalization, an area where the literature is rather scarce. Considering a generous investigation timespan from 1997 to 2024, we gathered a large dataset of papers extracted from the ISI Web of Science database, which allows for relevant inferences in terms of research trends and thematic directions in the field. The analysis focused on emphasizing the research capabilities and landscape of this rapidly developing subject. The annual growth rate of published papers is 11.7%, indicating the increased interest of researchers in the study of this subject. The visualizations and tables used in the analysis were generated with the help of the “Biblioshiny” (4.3.0) library from the R programming language and highlighted the main information related to topics, authors, journals, collaborations, and research networks. The present paper reviews the ten most cited publications in the dataset in order to provide a comprehensive assessment of the study on the concepts of Grey systems theory, digitalization, and sustainability to date.
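As a concrete illustration of the grey-systems machinery the abstract describes, here is a minimal textbook GM(1,1) grey prediction sketch in Python; the sample series and the least-squares formulation are illustrative assumptions, not part of the reviewed paper.

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int = 3) -> np.ndarray:
    """Textbook GM(1,1) grey model: fit a short, partially known series and
    extrapolate `steps` points ahead."""
    n = len(x0)
    x1 = np.cumsum(x0)                                 # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background (mean) sequence
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey development coefficients
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitened response sequence
    x0_hat = np.concatenate(([x0[0]], np.diff(x1_hat)))  # inverse accumulation
    return x0_hat[n:]                                  # the out-of-sample forecasts

# Hypothetical annual publication counts, illustrative only.
series = np.array([12.0, 15.0, 17.0, 21.0, 24.0, 27.0])
print(gm11_forecast(series))
```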
20 pages, 266 KiB  
Article
Code Word Cloud in Franz Kafka’s “Beim Bau der chinesischen Mauer” [“The Great Wall of China”]
by Alex Mentzel
Humanities 2025, 14(4), 73; https://doi.org/10.3390/h14040073 - 25 Mar 2025
Viewed by 479
Abstract
Amidst the centenary reflections on Franz Kafka’s legacy, this article explores his work’s ongoing resonance with the digital age, particularly through the lens of generative AI and cloud computation. Anchored in a close reading of Kafka’s “Beim Bau der chinesischen Mauer”, this study interrogates how the spatial and temporal codes embedded in the narrative parallel the architectures of contemporary diffusion systems at the heart of AI models. Engaging with critical theory, media archaeology, and AI discourse, this article argues that the rise of large language models not only commodifies language but also recasts Kafka’s allegorical critiques of bureaucratic opacity and imperial command structures within a digital framework. The analysis leverages concepts like Kittler’s code, Benjamin’s figural cloud, and Hamacher’s linguistic dissemblance to position Kafka’s parables as proto-critical tools for examining AI’s black-box nature. Ultimately, the piece contends that Kafka’s text is less a metaphor for our technological present than a mirror reflecting the epistemological crises engendered by the collapse of semantic transparency in the era of algorithmic communication. This reframing invites a rethinking of how narrative, code, and digital architectures intersect, complicating our assumptions about clarity, control, and the digital regimes shaping contemporary culture.
(This article belongs to the Special Issue Franz Kafka in the Age of Artificial Intelligence)
16 pages, 25849 KiB  
Article
A Hybrid Approach to Semantic Digital Speech: Enabling Gradual Transition in Practical Communication Systems
by Münif Zeybek, Bilge Kartal Çetin and Erkan Zeki Engin
Electronics 2025, 14(6), 1130; https://doi.org/10.3390/electronics14061130 - 13 Mar 2025
Viewed by 921
Abstract
Recent advances in deep learning have fostered a transition from the traditional, bit-centric paradigm of Shannon’s information theory to a semantic-oriented approach, emphasizing the transmission of meaningful information rather than mere data fidelity. However, black-box AI-based semantic communication lacks structured discretization and remains dependent on analog modulation, which presents deployment challenges. This paper presents a new semantic-aware digital speech communication system, named Hybrid-DeepSCS, which serves as a stepping stone between traditional and fully end-to-end semantic communication. Our system comprises the following parts: a semantic encoder for extracting and compressing structured features, a standard transmitter for digital modulation including source and channel encoding, a standard receiver for recovering the bitstream, and a semantic decoder for expanding the features and reconstructing speech. By adding semantic encoding to a standard digital transmission, our system works with existing communication networks while exploring the potential of deep learning for feature representation and reconstruction. This hybrid method allows for gradual implementation, making it more practical for real-world uses like low-bandwidth speech, robust voice transmission over wireless networks, and AI-assisted speech on edge devices. The system’s compatibility with conventional digital infrastructure positions it as a viable solution for IoT deployments, where seamless integration with legacy systems and energy-efficient processing are critical. Furthermore, our approach addresses IoT-specific challenges such as bandwidth constraints in industrial sensor networks and latency-sensitive voice interactions in smart environments. We test the system under various channel conditions using the Signal-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) metrics. The results show that our system delivers robust and clear speech, connecting traditional wireless systems with the future of AI-driven communication. The framework’s adaptability to edge computing architectures further underscores its relevance for IoT platforms, enabling efficient semantic processing in resource-constrained environments.
(This article belongs to the Special Issue Application of Artificial Intelligence in Wireless Communications)
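A schematic sketch of the hybrid pipeline the abstract outlines (semantic encoder, conventional digital link, semantic decoder); the feature dimension, quantizer, and stub encoder/decoder below are placeholders for illustration, not the authors' Hybrid-DeepSCS code.

```python
import numpy as np

def semantic_encoder(speech: np.ndarray, dim: int = 64) -> np.ndarray:
    """Stand-in for a learned encoder: compress frames to one feature per frame."""
    frames = speech[: len(speech) // dim * dim].reshape(-1, dim)
    return frames.mean(axis=1)                     # crude 'semantic' feature per frame

def digital_link(features: np.ndarray, snr_db: float = 10.0, bits: int = 8) -> np.ndarray:
    """Conventional transmitter/receiver stand-in: uniform quantization to mimic a
    bitstream, then additive noise at a target SNR (very simplified channel)."""
    lo, hi = features.min(), features.max()
    scale = (2 ** bits - 1) / (hi - lo + 1e-9)
    quantized = np.round((features - lo) * scale) / scale + lo
    noise_power = np.var(quantized) / (10 ** (snr_db / 10))
    rng = np.random.default_rng(0)
    return quantized + rng.normal(0.0, np.sqrt(noise_power), quantized.shape)

def semantic_decoder(features: np.ndarray, dim: int = 64) -> np.ndarray:
    """Stand-in for a learned decoder: expand features back to waveform length."""
    return np.repeat(features, dim)

speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)   # 1 s synthetic tone
recovered = semantic_decoder(digital_link(semantic_encoder(speech)))
sdr = 10 * np.log10(np.sum(speech ** 2) / np.sum((speech - recovered) ** 2))
print(f"toy SDR: {sdr:.1f} dB")
```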
25 pages, 18185 KiB  
Article
On the Conceptualization of the Active Site in Selective Oxidation over a Multimetal Oxide Catalyst: From Atomistic to Black-Box Approximation
by José F. Durán-Pérez, José G. Rivera de la Cruz, Martín Purino, Julio C. García-Martínez and Carlos O. Castillo-Araiza
Catalysts 2025, 15(2), 144; https://doi.org/10.3390/catal15020144 - 4 Feb 2025
Viewed by 1136
Abstract
Catalytic reactor engineering bridges the active-site scale and the industrial-reactor scale, with kinetics as the primary bottleneck in scale-up. The main challenge in kinetics is conceptualizing the active site and formulating the reaction mechanism, leading to multiple approaches without clear guidance on their reliability for industrial-reactor design. This work assesses different approaches to active-site conceptualization and reaction-mechanism formulation for selective oxidation over a complex multi-metal catalyst. It integrates atomistic-scale insights from periodic Density Functional Theory (DFT) calculations into kinetic-model development. This approach contrasts with the macroscopic classical method, which treats the catalyst as a black box, as well as with alternative atomistic methods that conceptualize the active site as a single metal atom on different catalytic-surface regions. As a case study, this work examines ethane oxidative dehydrogenation to ethylene over the multi-metal oxide catalyst MoVTeNbO, which has a complex structure. This analysis provides insights into the ability of DFT to accurately describe reactions on such materials. Additionally, it compares DFT predictions to experimental data obtained from a non-idealized MoVTeNbO catalyst synthesized and assessed under kinetic control at the laboratory scale. The findings indicate that while the black-box active-site conceptualization best describes observed trends, its reaction mechanism and parameters lack reliability compared to DFT calculations. Furthermore, atomistic active-site conceptualizations lead to different parameter sets depending on how the active site and reaction mechanism are defined. Unlike previous studies, our approach determines activation-energy profiles within the range predicted by DFT. The resulting kinetic model describes experimental trends while maintaining phenomenological and statistical reliability. The corrections required for primary parameters remain below 20 kJ mol−1, consistent with the inherent uncertainties in DFT calculations. In summary, this work demonstrates the feasibility of integrating atomistic insights into kinetic modeling, offering different perspectives on active-site conceptualization and reaction-mechanism formulation, paving the way for future studies on rational catalyst and industrial-reactor design.
(This article belongs to the Section Catalytic Reaction Engineering)
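To make the link between a DFT-derived activation energy and a macroscopic kinetic model concrete, here is a minimal Arrhenius-type sketch; the energies, pre-exponential factor, and temperatures are illustrative assumptions within the roughly 20 kJ mol−1 correction window mentioned above, not the paper's fitted values.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def rate_constant(ea_kj_mol: float, pre_exp: float, temp_k: float) -> float:
    """Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return pre_exp * np.exp(-ea_kj_mol * 1e3 / (R * temp_k))

# Hypothetical DFT-predicted activation energy vs. a kinetically refined value.
ea_dft, ea_fit = 95.0, 110.0          # kJ/mol (illustrative)
pre_exp = 1.0e8                       # s^-1 (illustrative pre-exponential factor)
for temp_k in (623.0, 673.0, 723.0):  # assumed reaction temperatures, K
    k_dft = rate_constant(ea_dft, pre_exp, temp_k)
    k_fit = rate_constant(ea_fit, pre_exp, temp_k)
    print(f"T={temp_k:.0f} K  k_DFT={k_dft:.3e}  k_fit={k_fit:.3e}  ratio={k_fit / k_dft:.2f}")
```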
23 pages, 2766 KiB  
Article
Unveiling Patterns in Forecasting Errors: A Case Study of 3PL Logistics in Pharmaceutical and Appliance Sectors
by Maciej Wolny and Mariusz Kmiecik
Sustainability 2025, 17(1), 214; https://doi.org/10.3390/su17010214 - 31 Dec 2024
Cited by 1 | Viewed by 1702
Abstract
Purpose: The study aims to analyze forecast errors for various time series generated by a 3PL logistics operator across 10 distribution channels managed by the operator. Design/methodology/approach: This study examines forecasting errors across 10 distribution channels managed by a 3PL operator using Google Cloud AI forecasting. The R environment was used in the study. The research centered on analyzing forecast error series, particularly decomposition analysis of the series, to identify trends and seasonality in forecast errors. Findings: The analysis of forecast errors reveals diverse patterns and characteristics of errors across individual channels. A systematic component was observed in all analyzed household appliance channels (seasonality in all channels, and no significant trend identified only in Channel 10). In contrast, significant trends were identified in one pharmaceutical channel (Channel 02), while no systematic components were detected in the remaining channels within this group. Research limitations: Logistics operations typically depend on numerous variables, which may affect forecast accuracy. Additionally, the lack of information on the forecasting models, mechanisms (black box), and input data limits a comprehensive understanding of the sources of errors. Value of the paper: The study highlights the valuable insights that can be derived from analyzing forecast errors in the time series within the context of logistics operations. The findings underscore the need for a tailored forecasting approach for each channel, the importance of enhancing the forecasting tool, and the potential for improving forecast accuracy by focusing on trends and seasonality. The findings also emphasize that customized forecasting tools can significantly enhance operational efficiency by improving demand planning accuracy and reducing resource misallocation. This analysis makes a significant contribution to the theory and practice of demand forecasting by logistics operators in distribution networks.
(This article belongs to the Special Issue Advances in Business Model Innovation and Corporate Sustainability)
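A minimal sketch of the kind of forecast-error decomposition the abstract refers to, using additive seasonal decomposition from `statsmodels` on a synthetic monthly error series; the series, period, and channel framing are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly forecast errors for a hypothetical channel: a mild trend,
# yearly seasonality, and noise (illustrative data only).
rng = np.random.default_rng(42)
idx = pd.date_range("2021-01-01", periods=36, freq="MS")
errors = (0.3 * np.arange(36)                              # systematic trend component
          + 5.0 * np.sin(2 * np.pi * np.arange(36) / 12)   # yearly seasonality
          + rng.normal(scale=1.5, size=36))                # irregular component
series = pd.Series(errors, index=idx, name="forecast_error")

decomp = seasonal_decompose(series, model="additive", period=12)
print(decomp.trend.dropna().round(2).head())
print(decomp.seasonal.round(2).head(12))                   # repeating seasonal pattern
```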
15 pages, 2811 KiB  
Article
Interpretability as Approximation: Understanding Black-Box Models by Decision Boundary
by Hangcheng Dong, Bingguo Liu, Dong Ye and Guodong Liu 
Electronics 2024, 13(22), 4339; https://doi.org/10.3390/electronics13224339 - 5 Nov 2024
Cited by 1 | Viewed by 2003
Abstract
Current interpretability methods tend to focus on human-understandable semantics that are not very objective. To objectify and standardize interpretability research, in this study, we provide notions of interpretability based on approximation theory. We first define explainable models in terms of explicitness and then use completeness to define interpretability, thereby turning interpretability into the process of approximating black-box models with interpretable models. In particular, we regard the decision boundary of a classification model as equivalent to its interpretability. Next, we implement this approximation interpretation on multilayer perceptrons (MLPs) and then propose using the MLP as a universal interpreter to explain other complex black-box models. Compared to the LIME method, which can only extract local linear features, our method is global and is therefore termed GIME. Extensive experiments demonstrate the effectiveness of our approaches.
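A minimal surrogate-model sketch in the spirit of the approximation view described above: fit an MLP to mimic a black-box classifier and measure how faithfully it reproduces the decision boundary. The dataset, models, and fidelity metric below are illustrative stand-ins, not the GIME implementation.

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A black-box model and an MLP that approximates it globally.
X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Train the surrogate on the black box's *predictions*, not the true labels,
# so it approximates the black box's decision boundary rather than the task itself.
surrogate = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"boundary fidelity (agreement with black box): {fidelity:.3f}")
```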
18 pages, 466 KiB  
Article
Construction Project Organizational Capabilities Antecedent Model Construction Based on Digital Construction Context
by Qian Hu, Yonghong Chen, Linling Gao and Chenyongjun Ding
Buildings 2024, 14(11), 3471; https://doi.org/10.3390/buildings14113471 - 30 Oct 2024
Viewed by 1146
Abstract
In the context of high-quality development and the digital age, digital technology-enabled construction projects have become the inevitable choice for promoting organizational capabilities and innovation. However, the micro foundation of the organizational capabilities of construction projects has not been clarified, and its formation path is even less clear. This paper focuses on the characteristics of the era in which digital technology and engineering construction are deeply integrated, conducts in-depth research on typical projects in the context of digital construction, and uses the qualitative research method of grounded theory to explore the antecedents of the formation of organizational capabilities. The results of the study establish a systematic antecedent model framework, including value integration, data traction, resource integration, technology integration, digital collaboration, and digital routines, and uncover the “black box” process through which construction project organizational capabilities form under the digital construction context. The conclusion of this study provides a theoretical basis and practical insights for building the organizational capabilities of construction projects to cope with technological turbulence.
(This article belongs to the Special Issue Advances in Digital Construction Management)
12 pages, 238 KiB  
Essay
Time and Place for Counter-Storytelling as Liberatory Theory and Collective Healing Practice in Academia: A Case Example of a Black Feminist Psycho-Socio Cultural Scholar-Artivist
by Alexis D. Jemal
Genealogy 2024, 8(2), 69; https://doi.org/10.3390/genealogy8020069 - 30 May 2024
Cited by 2 | Viewed by 1651
Abstract
The institution of slavery engineered racialized gendered capitalism that locks Black women in multiple social identity-labeled boxes on the sociocultural and economic hierarchy. Acts of cultural invasion have produced controlling images and oppressive narratives to maintain the status quo of white male wealth and power. Critical race theory scholars have offered counter-storytelling as a theorizing method to study the impact of intersectional oppression on Black women and to develop strategies for resistance and healing for those who are at the margins of society. This manuscript weaves the voices of Black feminists with a creative arts methodology to explore resistance and healing practice rooted in lived experience and provides a case example of counter-storytelling in a predominantly white academic space. For future directions, there is a need for guidelines on how to navigate the use of counter-storytelling to safely engage and protect the Black woman’s humanity and not be a tool for public displays of Black pain or for trauma voyeurism.
17 pages, 3719 KiB  
Article
Predictions from Generative Artificial Intelligence Models: Towards a New Benchmark in Forecasting Practice
by Hossein Hassani and Emmanuel Sirimal Silva
Information 2024, 15(6), 291; https://doi.org/10.3390/info15060291 - 21 May 2024
Cited by 7 | Viewed by 3363
Abstract
This paper aims to determine whether there is a case for promoting a new benchmark for forecasting practice via the innovative application of generative artificial intelligence (Gen-AI) for predicting the future. Today, forecasts can be generated via Gen-AI models without the need for an in-depth understanding of forecasting theory, practice, or coding. Therefore, using three datasets, we present a comparative analysis of forecasts from Gen-AI models against forecasts from seven univariate and automated models from the forecast package in R, covering both parametric and non-parametric forecasting techniques. In some cases, we find statistically significant evidence to conclude that forecasts from Gen-AI models can outperform forecasts from popular benchmarks like seasonal ARIMA, seasonal naïve, exponential smoothing, and Theta forecasts (to name a few). Our findings also indicate that the accuracy of forecasts from Gen-AI models can vary not only based on the underlying data structure but also on the quality of prompt engineering (thus highlighting the continued importance of forecasting education), with the forecast accuracy appearing to improve at longer horizons. Therefore, we find some evidence towards promoting forecasts from Gen-AI models as benchmarks in future forecasting practice. However, at present, users are cautioned about reliability issues and the fact that Gen-AI remains a black box in some cases.
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)
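A minimal benchmarking sketch matching the comparison the abstract describes: score a candidate forecast (e.g., one produced by a Gen-AI model) against a seasonal naïve benchmark on a holdout window. The series, horizon, and MAPE metric are illustrative assumptions, written in Python rather than the paper's R `forecast` workflow.

```python
import numpy as np

def mape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Mean absolute percentage error."""
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

def seasonal_naive(history: np.ndarray, horizon: int, period: int = 12) -> np.ndarray:
    """Repeat the last full seasonal cycle as the forecast."""
    last_cycle = history[-period:]
    reps = int(np.ceil(horizon / period))
    return np.tile(last_cycle, reps)[:horizon]

# Hypothetical monthly series with trend + seasonality (illustrative only).
t = np.arange(72)
series = 100 + 0.8 * t + 10 * np.sin(2 * np.pi * t / 12)
history, actual = series[:60], series[60:]

benchmark = seasonal_naive(history, horizon=12)
candidate = actual + np.random.default_rng(1).normal(scale=3.0, size=12)  # stand-in forecast

print(f"seasonal naive MAPE: {mape(actual, benchmark):.2f}%")
print(f"candidate MAPE:      {mape(actual, candidate):.2f}%")
```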
16 pages, 8438 KiB  
Article
A Study on the Frequency-Domain Black-Box Modeling Method for the Nonlinear Behavioral Level Conduction Immunity of Integrated Circuits Based on X-Parameter Theory
by Xi Chen, Shuguo Xie, Mengyuan Wei and Yan Yang
Micromachines 2024, 15(5), 658; https://doi.org/10.3390/mi15050658 - 17 May 2024
Cited by 3 | Viewed by 1379
Abstract
During circuit conduction immunity simulation assessments, the existing black-box modeling methods for chips generally rely on time-domain modeling methods or ICIM-CI binary decision models. These can provide approximate immunity assessments but require a high number of tests when carrying out broadband immunity assessments, involve long modeling times, and demonstrate poor reproducibility and insufficient accuracy in capturing the complex electromagnetic response in the frequency domain. To address these issues, in this paper, we propose a novel frequency-domain broadband model (Sensi-Freq-Model) of IC conduction susceptibility that accurately quantifies the conduction immunity of components in the frequency domain and builds a model of the IC based on the quantized data. The method provides high fitting accuracy in the frequency domain, which significantly improves the accuracy of circuit broadband design. The generated model retains as much information within the frequency-domain broadband as possible and reduces the need to rebuild the model under changing electromagnetic environments, thereby enhancing the portability and repeatability of the model. Reducing the chip modeling time greatly improves modeling efficiency and circuit design. The results of this study show that the “Sensi-Freq-Model” reduces the broadband modeling time by about 90% compared to the traditional ICIM-CI method and improves the normalized mean square error (NMSE) by 18.5 dB.
(This article belongs to the Special Issue Latest Advancements in Semiconductor Materials, Devices, and Systems)
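For reference, a minimal sketch of the normalized mean square error (in dB) commonly used to score behavioral models of this kind; the reference and model responses below are synthetic placeholders, not measured X-parameter data.

```python
import numpy as np

def nmse_db(reference: np.ndarray, model: np.ndarray) -> float:
    """Normalized mean square error in dB: 10*log10(||ref - model||^2 / ||ref||^2).
    More negative is better (smaller residual relative to the reference)."""
    err = np.sum(np.abs(reference - model) ** 2)
    ref = np.sum(np.abs(reference) ** 2)
    return float(10 * np.log10(err / ref))

# Synthetic complex frequency response and two model fits of different quality.
f = np.linspace(1e6, 1e9, 500)
reference = np.exp(-f / 4e8) * np.exp(1j * 2 * np.pi * f * 1e-9)
rough_fit = reference * (1 + 0.10 * np.random.default_rng(0).normal(size=f.size))
tight_fit = reference * (1 + 0.01 * np.random.default_rng(1).normal(size=f.size))

print(f"rough model NMSE: {nmse_db(reference, rough_fit):.1f} dB")
print(f"tight model NMSE: {nmse_db(reference, tight_fit):.1f} dB")
```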
19 pages, 604 KiB  
Article
Climate Change Misinformation in the United States: An Actor–Network Analysis
by Neelam Thapa Magar, Binay Jung Thapa and Yanan Li
Journal. Media 2024, 5(2), 595-613; https://doi.org/10.3390/journalmedia5020040 - 14 May 2024
Cited by 7 | Viewed by 8362
Abstract
Climate change misinformation refers to inaccurate, incomplete, or misleading climate change-related information created and spread in the public domain. Despite substantial consensus among the scientific community on the reality of anthropogenic climate change, public opinion still remains divided. Combating the climate crisis requires immediate and meaningful actions; however, various actors generate and propagate climate change misinformation, with vested interests in sowing doubts in the public sphere about the reality and urgency of climate impacts. The United States of America, where public opinion holds a strong sway in many social and political spheres, acts as a pertinent case in point, where the prevalence of climate denial fueled by persistent climate change misinformation contributes to this divided public perspective. For this reason, it is imperative to enhance the understanding of the subtle ways climate change misinformation exists and functions. This article employs actor–network theory and the concept of black-boxing to explore a case of climate change misinformation in the United States, with the aim of comprehending the workings of climate change misinformation within its network.
18 pages, 1113 KiB  
Article
A Section Location Method of Single-Phase Short-Circuit Faults for Distribution Networks Containing Distributed Generators Based on Fusion Fault Confidence of Short-Circuit Current Vectors
by Shoudong Xu, Jinxin Ouyang, Jiyu Chen and Xiaofu Xiong
Electronics 2024, 13(9), 1741; https://doi.org/10.3390/electronics13091741 - 1 May 2024
Cited by 2 | Viewed by 1390
Abstract
To ensure safe and stable operation, accurate fault localization within active distribution networks is required, and this has attracted much attention. Influenced by many factors such as the control strategy, control performance, initial state of the distributed generators, and distribution network topology, it is still difficult to reliably locate complex and variable single-phase short-circuit faults relying only on a single feature quantity, while localization methods incorporating intelligent algorithms are affected by the choice of a priori samples and the fact that the solution process is a black-box model. To address this challenge, in this work, an expression for the single-phase short-circuit current vector of a distribution network containing distributed generators is derived, and the differences in magnitude and phase angle of the short-circuit current vectors upstream and downstream of the fault point are analyzed. Based on measurement theory, a fault confidence distribution function that reacts to the relative size of the current magnitude difference and phase angle difference is established, and the fusion fault confidence of the short-circuit current vector is constructed with the help of evidence theory. Finally, a method of locating single-phase short-circuit faults in distribution networks that contain distributed generators is proposed. The simulation results show that the ratio of the fusion fault confidence of the short-circuit current vector between faulted and non-faulted sections differs significantly under the influence of different distributed generator capacities, fault locations, and transition resistances. The proposed single-phase short-circuit fault localization method is both adaptive and physically interpretable and has clear boundaries, sound sensitivity, and engineering practicability.
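A minimal evidence-fusion sketch in the spirit of the confidence-fusion step described above: combine two fault-confidence assignments (one from the current-magnitude difference, one from the phase-angle difference) with Dempster's rule of combination. The two-hypothesis frame and the numeric masses are illustrative assumptions, not the paper's distribution functions.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule over a frame {'fault', 'no_fault'}, with the full set as ignorance.
    Keys are frozensets of hypotheses; values are basic probability masses."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FAULT, NO_FAULT = frozenset({"fault"}), frozenset({"no_fault"})
THETA = FAULT | NO_FAULT  # total ignorance

# Illustrative masses from two confidence functions: magnitude and phase-angle evidence.
m_magnitude = {FAULT: 0.70, NO_FAULT: 0.10, THETA: 0.20}
m_phase     = {FAULT: 0.60, NO_FAULT: 0.15, THETA: 0.25}

fused = dempster_combine(m_magnitude, m_phase)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```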