Search Results (17)

Search Parameters:
Keywords = Bayesian unfolding

21 pages, 2856 KB  
Article
Modeling Dynamic Risk Perception Using Large Language Model (LLM) Agents
by He Wen, Mojtaba Parsaee and Zaman Sajid
AI 2025, 6(11), 296; https://doi.org/10.3390/ai6110296 - 19 Nov 2025
Viewed by 1421
Abstract
Background: Understanding how accident risk escalates during unfolding industrial events is essential for developing intelligent safety systems. This study proposes a large language model (LLM)-based framework that simulates human-like risk reasoning over sequential accident precursors. Methods: Using 100 investigation reports from the U.S. Chemical Safety Board (CSB), two Generative Pre-trained Transformer (GPT) agents were developed: (1) an Accident Precursor Extractor to identify and classify time-ordered events, and (2) a Subjective Probability Estimator to update perceived accident likelihood as precursors unfold. Results: The subjective accident probability increases near-linearly, with an average escalation of 8.0% ± 0.9% per precursor (p<0.05). A consistent tipping point occurs at the fourth precursor, marking a perceptual shift to high-risk awareness. Across 90 analyzed cases, Agent 1 achieved 0.88 precision and 0.84 recall, while Agent 2 reproduced human-like probabilistic reasoning within ±0.08 of expert baselines. The magnitude of escalation differed across precursor types. Organizational factors were perceived as the highest risk (median = 0.56), followed by human error (median = 0.47). Technical and environmental factors demonstrated comparatively smaller effects. Conclusions: These findings confirm that LLM agents can emulate Bayesian-like updating in dynamic risk perception, offering a scalable and explainable foundation for adaptive, sequence-aware safety monitoring in safety-critical systems. Full article
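The Bayesian-like sequential updating that the conclusions describe can be sketched as an odds-form Bayes update, where each unfolding precursor multiplies the prior odds of an accident by a likelihood ratio. The likelihood ratios below are illustrative placeholders, not values estimated in the study:

```python
# Minimal sketch of Bayesian-like updating of perceived accident
# probability as precursors unfold. Likelihood ratios are hypothetical.

def update_probability(prior: float, likelihood_ratio: float) -> float:
    """One Bayes update in odds form: posterior odds = LR * prior odds."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical likelihood ratios for four unfolding precursors
# (e.g., organizational, human-error, technical, environmental).
precursor_lrs = [2.5, 2.0, 1.5, 1.3]

p = 0.05  # prior perceived accident probability
trajectory = [p]
for lr in precursor_lrs:
    p = update_probability(p, lr)
    trajectory.append(p)

print([round(x, 3) for x in trajectory])
```

Each update raises the perceived probability monotonically, mirroring the near-linear escalation the agents reproduce.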

17 pages, 1722 KB  
Article
Effect of Alpha-1 Antitrypsin Deficiency on Zinc Homeostasis Gene Regulation and Interaction with Endoplasmic Reticulum Stress Response-Associated Genes
by Juan P. Liuzzi, Samantha Gonzales, Manuel A. Barbieri, Rebecca Vidal and Changwon Yoo
Nutrients 2025, 17(11), 1913; https://doi.org/10.3390/nu17111913 - 2 Jun 2025
Viewed by 1588
Abstract
Background: Alpha-1 antitrypsin deficiency (AATD) is a genetic disorder caused by mutations in the SERPINA1 gene, leading to reduced levels or impaired alpha-1 antitrypsin (AAT) function. This condition predominantly affects the lungs and liver. The Z allele, a specific mutation in the SERPINA1 gene, is the most severe form and results in the production of misfolded AAT proteins. The misfolded proteins accumulate in the endoplasmic reticulum (ER) of liver cells, triggering ER stress and activating the unfolded protein response (UPR), a cellular mechanism designed to restore ER homeostasis. Currently, there is limited knowledge regarding specific nutritional recommendations for patients with AATD. The liver is essential for the regulation of zinc homeostasis, with zinc widely recognized for its hepatoprotective properties. However, the effects of AATD on zinc metabolism remain poorly understood. Similarly, the potential benefits of zinc supplementation for individuals with AATD have not been thoroughly investigated. Objective: This study explored the relationship between AATD and zinc metabolism through a combination of in vitro experiments and computational analysis. Results: The expression of the mutant Z variant of AAT (ATZ) in cultured mouse hepatocytes was associated with decreased labile zinc levels in cells and dysregulation of zinc homeostasis genes. Analysis of two data series from the Gene Expression Omnibus (GEO) revealed that mice expressing ATZ (PiZ mice), a murine model of AATD, exhibited significant differences in mRNA levels related to zinc homeostasis and UPR when compared to wild-type mice. Bayesian network analysis of GEO data uncovered novel gene-to-gene interactions among zinc transporters, as well as between zinc homeostasis, UPR, and other associated genes. Conclusions: The findings provide valuable insights into the role of zinc homeostasis genes in UPR processes linked to AATD. Full article
(This article belongs to the Section Nutrigenetics and Nutrigenomics)

25 pages, 14985 KB  
Article
High-Speed Target HRRP Reconstruction Based on Fast Mean-Field Sparse Bayesian Unrolled Network
by Hang Dong, Fengzhou Dai and Juan Zhang
Remote Sens. 2025, 17(1), 8; https://doi.org/10.3390/rs17010008 - 24 Dec 2024
Cited by 1 | Viewed by 954
Abstract
The rapid and accurate reconstruction of the high-resolution range profiles (HRRPs) of high-speed targets from incomplete wideband radar echoes is a critical component in space target recognition tasks (STRTs). However, state-of-the-art HRRP reconstruction algorithms based on sparse Bayesian learning (SBL) are computationally expensive and require the manual selection of prior scale parameters. To address these challenges, this paper proposes a model-driven deep network based on fast mean-field SBL (FMFSBL-Net) for the HRRP reconstruction of high-speed targets under missing-data conditions. Specifically, we integrate precise velocity compensation and HRRP reconstruction into the mean-field SBL framework, which introduces a unified SBL objective function and a mean-field variational family to avoid matrix inversion operations. To reduce the performance loss caused by mismatched prior scale parameters, we unfold the limited FMFSBL iterative process into a deep network, learning the optimal global prior scale parameters through training. Additionally, we introduce a sparsity-enhanced loss function to improve the quality and noise robustness of HRRPs. Finally, simulated and measured experimental results show that the proposed FMFSBL-Net has superior reconstruction performance and computational efficiency compared to FMFSBL and existing state-of-the-art SBL-framework algorithms. Full article

14 pages, 4541 KB  
Communication
A Bayesian Deep Unfolded Network for the Off-Grid Direction-of-Arrival Estimation via a Minimum Hole Array
by Ninghui Li, Xiaokuan Zhang, Fan Lv, Binfeng Zong and Weike Feng
Electronics 2024, 13(11), 2139; https://doi.org/10.3390/electronics13112139 - 30 May 2024
Cited by 1 | Viewed by 1597
Abstract
As an important research focus in radar detection and localization, direction-of-arrival (DOA) estimation has advanced significantly in recent years owing to deep learning techniques with powerful fitting and classifying abilities. However, deep learning inevitably requires substantial data to ensure learning and generalization abilities and lacks reasonable interpretability. Recently, the deep unfolding technique has attracted widespread attention due to its more explainable perspective and weaker data dependency. More importantly, it has been proven that deep unfolding enables convergence acceleration when applied to iterative algorithms. On this basis, we rigorously deduce an iterative sparse Bayesian learning (SBL) algorithm and construct a Bayesian deep unfolded network in a one-to-one correspondence. Moreover, the common but intractable off-grid errors, caused by grid mismatch, are directly considered in the signal model and computed in the iterative process. In addition, a minimum hole array, rarely considered in deep unfolding, is adopted to further improve estimation performance owing to its maximized array degrees of freedom (DOFs). Extensive simulation results are presented to illustrate the superiority of the proposed method over other state-of-the-art methods. Full article
(This article belongs to the Section Microwave and Wireless Communications)

23 pages, 6158 KB  
Article
Results and Perspectives of Timepix Detectors in Space—From Radiation Monitoring in Low Earth Orbit to Astroparticle Physics
by Benedikt Bergmann, Stefan Gohl, Declan Garvey, Jindřich Jelínek and Petr Smolyanskiy
Instruments 2024, 8(1), 17; https://doi.org/10.3390/instruments8010017 - 29 Feb 2024
Cited by 6 | Viewed by 4281
Abstract
In space applications, hybrid pixel detectors of the Timepix family have been considered mainly for the measurement of radiation levels and dosimetry in low Earth orbits. Using the example of the Space Application of Timepix Radiation Monitor (SATRAM), we demonstrate the unique capabilities of Timepix-based miniaturized radiation detectors for particle separation. We present the incident proton energy spectrum in the geographic region of the South Atlantic Anomaly (SAA) obtained by using Bayesian unfolding of the stopping power spectrum measured with a single-layer Timepix. We assess the measurement stability and the resilience of the detector to the space environment, thereby demonstrating that even though degradation is observed, data quality has not been affected significantly over more than 10 years. Based on the SATRAM heritage and the capabilities of the latest-generation Timepix series chips, we discuss their applicability for use in a compact magnetic spectrometer for a deep space mission or in the Jupiter radiation belts, as well as their capability for use as single-layer X- and γ-ray polarimeters. The latter was supported by the measurement of the polarization of scattered radiation in a laboratory experiment, where a modulation of 80% was found. Full article
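The Bayesian unfolding named in the abstract can be sketched as the iterative (D'Agostini-style) scheme: repeatedly apply Bayes' theorem with the detector response matrix to refine an estimate of the true spectrum. The response matrix and spectra below are toy values, not SATRAM data:

```python
import numpy as np

# Iterative Bayesian unfolding sketch with a toy 3-bin response matrix.

def bayesian_unfold(measured, response, prior, n_iter=4):
    """response[j, i] = P(measured bin j | true bin i)."""
    truth = prior.astype(float).copy()
    for _ in range(n_iter):
        folded = response @ truth                  # expected measured spectrum
        ratio = np.where(folded > 0, measured / folded, 0.0)
        efficiency = response.sum(axis=0)          # detection prob. per true bin
        truth = truth * (response.T @ ratio) / np.where(efficiency > 0, efficiency, 1.0)
    return truth

true_spectrum = np.array([100.0, 60.0, 20.0])
response = np.array([[0.8, 0.1, 0.0],
                     [0.2, 0.7, 0.2],
                     [0.0, 0.2, 0.8]])
measured = response @ true_spectrum                # noiseless toy measurement
unfolded = bayesian_unfold(measured, response, prior=np.ones(3))
print(np.round(unfolded, 1))
```

With full efficiency the total count is preserved exactly at every iteration, and the estimate moves from the flat prior toward the true spectrum.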

19 pages, 4803 KB  
Article
Pseudo-L0-Norm Fast Iterative Shrinkage Algorithm Network: Agile Synthetic Aperture Radar Imaging via Deep Unfolding Network
by Wenjiao Chen, Jiwen Geng, Fanjie Meng and Li Zhang
Remote Sens. 2024, 16(4), 671; https://doi.org/10.3390/rs16040671 - 13 Feb 2024
Cited by 1 | Viewed by 1834
Abstract
A novel compressive sensing (CS) synthetic-aperture radar (SAR) called AgileSAR has been proposed to increase swath width for sparse scenes while preserving azimuthal resolution. AgileSAR overcomes the limitation of the Nyquist sampling theorem, so it requires a small amount of data and has low system complexity. However, traditional CS optimization-based algorithms suffer from manual tuning and pre-definition of optimization parameters, and they generally involve high time and computational complexity for AgileSAR imaging. To address these issues, a pseudo-L0-norm fast iterative shrinkage algorithm network (pseudo-L0-norm FISTA-net) is proposed for AgileSAR imaging via a deep unfolding network in this paper. Firstly, a pseudo-L0-norm regularization model is built by taking an approximately fair penalization rule based on Bayesian estimation. Then, we unfold the operation process of FISTA into a data-driven deep network to solve the pseudo-L0-norm regularization model. The network’s parameters are automatically learned, and the learned network significantly increases imaging speed, improving the accuracy and efficiency of AgileSAR imaging. In addition, the nonlinearly sparsifying transform can learn more target details than the traditional sparsifying transform. Finally, simulated and measured-data experiments demonstrate the superiority and efficiency of the pseudo-L0-norm FISTA-net for AgileSAR imaging. Full article
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)
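The plain FISTA iteration that such a network unfolds can be sketched as a momentum-accelerated proximal gradient step for the lasso problem min ||Ax − y||² + λ||x||₁. Here the step size and threshold are fixed by hand; in an unfolded network they would be learned from data. The sensing matrix and sparse signal are toy values:

```python
import numpy as np

# FISTA sketch for sparse recovery with a hand-fixed step and threshold.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam=0.05, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = fista(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.5))
```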

40 pages, 59561 KB  
Article
Real-Time Epidemiology and Acute Care Need Monitoring and Forecasting for COVID-19 via Bayesian Sequential Monte Carlo-Leveraged Transmission Models
by Xiaoyan Li, Vyom Patel, Lujie Duan, Jalen Mikuliak, Jenny Basran and Nathaniel D. Osgood
Int. J. Environ. Res. Public Health 2024, 21(2), 193; https://doi.org/10.3390/ijerph21020193 - 7 Feb 2024
Cited by 6 | Viewed by 2930
Abstract
COVID-19 transmission models have conferred great value in informing public health understanding, planning, and response. However, the pandemic also demonstrated the infeasibility of basing public health decision-making on transmission models with pre-set assumptions. No matter how favourably evidenced when built, a model with fixed assumptions is challenged by numerous factors that are difficult to predict. Ongoing planning associated with rolling back and re-instituting measures, initiating surge planning, and issuing public health advisories can benefit from approaches that allow state estimates for transmission models to be continuously updated in light of unfolding time series. A model being continuously regrounded by empirical data in this way can provide a consistent, integrated depiction of the evolving underlying epidemiology and acute care demand, offer the ability to project forward such a depiction in a fashion suitable for triggering the deployment of acute care surge capacity or public health measures, and support quantitative evaluation of tradeoffs associated with prospective interventions in light of the latest estimates of the underlying epidemiology. We describe here the design, implementation, and multi-year daily use for public health and clinical support decision-making of a particle-filtered COVID-19 compartmental model, which served Canadian federal and provincial governments via regular reporting starting in June 2020. The use of the Bayesian sequential Monte Carlo algorithm of particle filtering allows the model to be regrounded daily and adapt to new trends within daily incoming data—including test volumes and positivity rates, endogenous and travel-related cases, hospital census and admissions flows, daily counts of dose-specific vaccinations administered, measured concentration of SARS-CoV-2 in wastewater, and mortality. 
Important model outputs include estimates (via sampling) of the count of undiagnosed infectives, the count of individuals at different stages of the natural history of frankly and pauci-symptomatic infection, the current force of infection, effective reproductive number, and current and cumulative infection prevalence. Following a brief description of the model design, we describe how the machine learning algorithm of particle filtering is used to continually reground estimates of the dynamic model state, support a probabilistic model projection of epidemiology and health system capacity utilization and service demand, and probabilistically evaluate tradeoffs between potential intervention scenarios. We further note aspects of model use in practice as an effective reporting tool in a manner that is parameterized by jurisdiction, including the support of a scripting pipeline that permits a fully automated reporting pipeline other than security-restricted new data retrieval, including automated model deployment, data validity checks, and automatic post-scenario scripting and reporting. As demonstrated by this multi-year deployment of the Bayesian machine learning algorithm of particle filtering to provide industrial-strength reporting to inform public health decision-making across Canada, such methods offer strong support for evidence-based public health decision-making informed by ever-current articulated transmission models whose probabilistic state and parameter estimates are continually regrounded by diverse data streams. Full article
(This article belongs to the Special Issue Machine Learning and Public Health)
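The bootstrap particle-filter loop described above — propagate particles through the dynamics, weight them by the likelihood of each new observation, resample — can be sketched with a toy one-dimensional state. The random-walk dynamics and Poisson observation model are illustrative stand-ins for the compartmental transmission model and its data streams:

```python
import numpy as np

# Toy bootstrap particle filter: predict, weight, resample, estimate.

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=2000):
    particles = rng.uniform(0.0, 50.0, n_particles)   # diffuse initial state
    estimates = []
    for y in observations:
        # State dynamics: random walk kept positive.
        particles = np.abs(particles + rng.normal(0.0, 2.0, n_particles))
        # Poisson log-likelihood (state-independent y! term dropped).
        log_w = y * np.log(particles + 1e-9) - particles
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return estimates

daily_cases = [10, 12, 15, 20, 18]   # hypothetical incoming counts
estimates = particle_filter(daily_cases)
print([round(e, 1) for e in estimates])
```

Each new observation "regrounds" the particle cloud, which is the mechanism the abstract describes for keeping model state estimates current.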

12 pages, 2923 KB  
Communication
Deep Unfolding Sparse Bayesian Learning Network for Off-Grid DOA Estimation with Nested Array
by Zhenghui Gong, Xiaolong Su, Panhe Hu, Shuowei Liu and Zhen Liu
Remote Sens. 2023, 15(22), 5320; https://doi.org/10.3390/rs15225320 - 10 Nov 2023
Cited by 8 | Viewed by 2872
Abstract
Recently, deep unfolding networks have been widely used in direction of arrival (DOA) estimation because of their improved estimation accuracy and reduced computational cost. However, few have considered the existence of a nested array (NA) with off-grid DOA estimation. In this study, we present a deep sparse Bayesian learning (DSBL) network to solve this problem. We first establish the signal model for off-grid DOA with NA. Then, we transform the array output into a real domain for neural networks. Finally, we construct and train the DSBL network to determine the on-grid spatial spectrum and off-grid value, where the loss function is calculated using reconstruction error and the sparsity of network output, and the layers correspond to the steps of the sparse Bayesian learning algorithm. We demonstrate that the DSBL network can achieve better generalization ability without training labels and large-scale training data. The simulation results validate the effectiveness of the DSBL network when compared with those of existing methods. Full article
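The sparse Bayesian learning iteration whose steps such network layers mirror can be sketched compactly: a Gaussian posterior update for the sparse coefficients followed by an EM update of the per-coefficient prior variances. The random dictionary below is a stand-in for an array manifold; values are illustrative:

```python
import numpy as np

# Compact SBL (EM) iteration sketch on a toy sparse recovery problem.

def sbl(A, y, noise_var=1e-2, n_iter=50):
    n = A.shape[1]
    gamma = np.ones(n)                       # per-coefficient prior variances
    for _ in range(n_iter):
        Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / noise_var                      # posterior mean
        gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-10)   # EM update
    return mu

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[5, 25]] = [1.5, -2.0]
y = A @ x_true + 0.01 * rng.standard_normal(20)
mu = sbl(A, y)
print(np.flatnonzero(np.abs(mu) > 0.5))
```

Unused variances shrink toward zero across iterations, pruning the corresponding coefficients — the sparsity mechanism the unfolded layers reproduce.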

17 pages, 660 KB  
Article
Estimating the Multidimensional Generalized Graded Unfolding Model with Covariates Using a Bayesian Approach
by Naidan Tu, Bo Zhang, Lawrence Angrave, Tianjun Sun and Mathew Neuman
J. Intell. 2023, 11(8), 163; https://doi.org/10.3390/jintelligence11080163 - 14 Aug 2023
Cited by 6 | Viewed by 2432
Abstract
Noncognitive constructs are commonly assessed in educational and organizational research. They are often measured by summing scores across items, which implicitly assumes a dominance item response process. However, research has shown that the unfolding response process may better characterize how people respond to noncognitive items. The Generalized Graded Unfolding Model (GGUM) representing the unfolding response process has therefore become increasingly popular. However, the current implementation of the GGUM is limited to unidimensional cases, while most noncognitive constructs are multidimensional. Fitting a unidimensional GGUM separately for each dimension and ignoring the multidimensional nature of noncognitive data may result in suboptimal parameter estimation. Recently, an R package bmggum was developed that enables the estimation of the Multidimensional Generalized Graded Unfolding Model (MGGUM) with covariates using a Bayesian algorithm. However, no simulation evidence is available to support the accuracy of the Bayesian algorithm implemented in bmggum. In this research, two simulation studies were conducted to examine the performance of bmggum. Results showed that bmggum can estimate MGGUM parameters accurately, and that multidimensional estimation and incorporating relevant covariates into the estimation process improved estimation accuracy. The effectiveness of two Bayesian model selection indices, WAIC and LOO, were also investigated and found to be satisfactory for model selection. Empirical data were used to demonstrate the use of bmggum and its performance was compared with three other GGUM software programs: GGUM2004, GGUM, and mirt. Full article
(This article belongs to the Topic Psychometric Methods: Theory and Practice)
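For reference, the unidimensional GGUM item response function that the MGGUM generalizes can be sketched directly; its characteristic unfolding shape makes agreement most probable when the trait level θ is near the item location δ and less probable in both directions away from it. Parameter values below are illustrative, not estimates from the studies:

```python
import numpy as np

# GGUM response probabilities (Roberts et al. formulation):
# categories z = 0..C, M = 2C + 1, tau[0] fixed at 0.

def ggum_probs(theta, alpha, delta, tau):
    C = len(tau) - 1
    M = 2 * C + 1
    cum_tau = np.cumsum(tau)
    numer = np.array([
        np.exp(alpha * (z * (theta - delta) - cum_tau[z]))
        + np.exp(alpha * ((M - z) * (theta - delta) - cum_tau[z]))
        for z in range(C + 1)
    ])
    return numer / numer.sum()

tau = [0.0, -1.0, -0.5]
probs_near = ggum_probs(theta=0.1, alpha=1.2, delta=0.0, tau=tau)
probs_far = ggum_probs(theta=3.0, alpha=1.2, delta=0.0, tau=tau)
print(np.round(probs_near, 3), np.round(probs_far, 3))
```

Agreement (the highest category) dominates near δ, while disagreement dominates far from δ — unlike a dominance model, where probability would keep rising with θ.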

12 pages, 6565 KB  
Communication
Charged Particle Pseudorapidity Distributions Measured with the STAR EPD
by Mátyás Molnár
Universe 2023, 9(7), 335; https://doi.org/10.3390/universe9070335 - 15 Jul 2023
Cited by 1 | Viewed by 2130
Abstract
In 2018, in preparation for the Beam Energy Scan II, the STAR detector was upgraded with the Event Plane Detector (EPD). The instrument enhanced STAR’s capabilities in centrality determination for fluctuation measurements, event plane resolution for flow measurements, and in triggering overall. Due to its fine radial granularity, it can also be utilized to measure pseudorapidity distributions of the produced charged primary particles within the EPD’s pseudorapidity coverage of 2.15 < |η| < 5.09. As such a measurement cannot be done directly, the response of the detector to the primary particles has to be understood well. The detector response matrix was determined via Monte Carlo simulations, and corrected charged particle pseudorapidity distributions were obtained in Au + Au collisions at center-of-mass collision energies √sNN = 19.6 and 27.0 GeV using an iterative unfolding procedure. Several systematic checks of the method were also performed. Full article
(This article belongs to the Special Issue Zimányi School – Heavy Ion Physics)

17 pages, 1208 KB  
Article
Ecogeographic Drivers of the Spatial Spread of Highly Pathogenic Avian Influenza Outbreaks in Europe and the United States, 2016–Early 2022
by Jonathon D. Gass, Nichola J. Hill, Lambodhar Damodaran, Elena N. Naumova, Felicia B. Nutter and Jonathan A. Runstadler
Int. J. Environ. Res. Public Health 2023, 20(11), 6030; https://doi.org/10.3390/ijerph20116030 - 1 Jun 2023
Cited by 13 | Viewed by 4475
Abstract
H5Nx highly pathogenic avian influenza (HPAI) viruses of clade 2.3.4.4 have caused outbreaks in Europe among wild and domestic birds since 2016 and were introduced to North America via wild migratory birds in December 2021. We examined the spatiotemporal extent of HPAI viruses across continents and characterized ecological and environmental predictors of virus spread between geographic regions by constructing a Bayesian phylodynamic generalized linear model (phylodynamic-GLM). The findings demonstrate localized epidemics of H5Nx throughout Europe in the first several years of the epizootic, followed by a singular branching point where H5N1 viruses were introduced to North America, likely via stopover locations throughout the North Atlantic. Once in the United States (US), H5Nx viruses spread at a greater rate between US-based regions as compared to prior spread in Europe. We established that geographic proximity is a predictor of virus spread between regions, implying that intercontinental transport across the Atlantic Ocean is relatively rare. An increase in mean ambient temperature over time was predictive of reduced H5Nx virus spread, which may reflect the effect of climate change on declines in host species abundance, decreased persistence of the virus in the environment, or changes in migratory patterns due to ecological alterations. Our data provide new knowledge about the spread and directionality of H5Nx virus dispersal in Europe and the US during an actively evolving intercontinental outbreak, including predictors of virus movement between regions, which will contribute to surveillance and mitigation strategies as the outbreak unfolds, and in future instances of uncontained avian spread of HPAI viruses. Full article
(This article belongs to the Special Issue 2nd Edition: Infectious Disease Modeling in the Era of Complex Data)

18 pages, 10628 KB  
Article
Research of the Ball Burnishing Impact over Cold-Rolled Sheets of AISI 304 Steel Fatigue Life Considering Their Anisotropy
by Stoyan Slavov, Diyan Dimitrov, Mariya Konsulova-Bakalova and Lyubomir Si Bao Van
Materials 2023, 16(10), 3684; https://doi.org/10.3390/ma16103684 - 11 May 2023
Cited by 4 | Viewed by 2252
Abstract
The present work focuses on the accumulated effect of plastic deformation, obtained after two different plastic deformation treatments, on the fatigue life of AISI 304 austenitic stainless steel. The research centers on ball burnishing as a finishing process to form specific, so-called “regular micro-reliefs” (RMRs) on a pre-rolled stainless-steel sheet. RMRs are formed using a CNC (Computerized Numerically Controlled) milling machine and toolpaths with the shortest unfolded length, generated by an improved algorithm based on the Euclidean distance calculation. The effects of the predominant tool trajectory direction during the ball burnishing process (which can coincide with or run transverse to the rolling direction), the magnitude of the applied deforming force, and the feed rate are evaluated using Bayesian rule analyses of experimentally obtained results for the fatigue life of AISI 304 steel. The obtained results give us reason to conclude that the fatigue life of the researched steel is increased when the directions of pre-rolled plastic deformation and the tool movement during ball burnishing coincide. It was also found that the magnitude of the deforming force has a stronger impact on fatigue life than the feed rate of the ball tool. Full article
(This article belongs to the Special Issue Study on Cyclic Mechanical Behaviors of Materials – 2nd Edition)

15 pages, 307 KB  
Perspective
Cognition as Morphological/Morphogenetic Embodied Computation In Vivo
by Gordana Dodig-Crnkovic
Entropy 2022, 24(11), 1576; https://doi.org/10.3390/e24111576 - 31 Oct 2022
Cited by 11 | Viewed by 4340
Abstract
Cognition, historically considered a uniquely human capacity, has recently been found to be an ability of all living organisms, from single cells up. This study approaches cognition from an info-computational stance, in which structures in nature are seen as information, and processes (information dynamics) are seen as computation, from the perspective of a cognizing agent. Cognition is understood as a network of concurrent morphological/morphogenetic computations unfolding as a result of self-assembly, self-organization, and autopoiesis of physical, chemical, and biological agents. The present-day human-centric view of cognition still prevailing in major encyclopedias has a variety of open problems. This article considers recent research on morphological computation, morphogenesis, agency, basal cognition, extended evolutionary synthesis, the free energy principle, cognition as Bayesian learning, active inference, and related topics, offering new theoretical and practical perspectives on problems inherent to the old computationalist cognitive models, which were based on abstract symbol processing and unaware of the actual physical constraints and affordances of the embodiment of cognizing agents. A better understanding of cognition is centrally important for future artificial intelligence, robotics, medicine, and related fields. Full article
20 pages, 7695 KB  
Article
Pseudo-Gamma Spectroscopy Based on Plastic Scintillation Detectors Using Multitask Learning
by Byoungil Jeon, Junha Kim, Eunjoong Lee, Myungkook Moon and Gyuseong Cho
Sensors 2021, 21(3), 684; https://doi.org/10.3390/s21030684 - 20 Jan 2021
Cited by 17 | Viewed by 4823
Abstract
Although plastic scintillation detectors possess poor spectroscopic characteristics, they are extensively used in various fields for radiation measurement. Several methods have been proposed to facilitate the application of plastic scintillation detectors for spectroscopic measurement. However, most of these methods can only be used for identifying radioisotopes. In this study, we present a multitask model for pseudo-gamma spectroscopy based on a plastic scintillation detector. A deep-learning model is implemented using multitask learning and trained through supervised learning. Eight gamma-ray sources are used for dataset generation. Spectra are simulated using a Monte Carlo N-Particle code (MCNP 6.2) and measured using a polyvinyl toluene detector for dataset generation based on gamma-ray source information. The spectra of single and multiple gamma-ray sources are generated using the random sampling technique and employed as the training dataset for the proposed model. The hyperparameters of the model are tuned using the Bayesian optimization method with the generated dataset. To improve the performance of the deep learning model, a deep learning module with weighted multi-head self-attention is proposed and used in the pseudo-gamma spectroscopy model. The performance of this model is verified using the measured plastic gamma spectra. Furthermore, a performance indicator, namely the minimum required count for single isotopes, is defined using the mean absolute percentage error with a criterion of 1% as the metric to verify the pseudo-gamma spectroscopy performance. The obtained results confirm that the proposed model successfully unfolds the full-energy peaks and predicts the relative radioactivity, even in spectra with statistical uncertainties. Full article
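The multitask setup can be sketched as a shared representation feeding two heads — one classifying the isotope (cross-entropy) and one predicting relative activity (mean squared error) — trained on a weighted sum of the two losses. The head outputs, targets, and weights below are illustrative, not the paper's configuration:

```python
import numpy as np

# Weighted multitask loss sketch: classification head + regression head.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multitask_loss(logits, class_label, activity_pred, activity_true,
                   weights=(1.0, 0.5)):
    ce = -np.log(softmax(logits)[class_label])            # identification head
    mse = np.mean((activity_pred - activity_true) ** 2)   # activity head
    return weights[0] * ce + weights[1] * mse

loss = multitask_loss(np.array([2.0, 0.1, -1.0]), 0,
                      np.array([0.6, 0.4]), np.array([0.7, 0.3]))
print(round(loss, 4))
```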

19 pages, 5624 KB  
Article
Extending the Fully Bayesian Unfolding with Regularization Using a Combined Sampling Method
by Petr Baroň and Jiří Kvita
Symmetry 2020, 12(12), 2100; https://doi.org/10.3390/sym12122100 - 17 Dec 2020
Cited by 2 | Viewed by 2723
Abstract
Regularization extensions to Fully Bayesian Unfolding (FBU) are implemented and studied with an algorithm of combined sampling to find, in a reasonable computational time, an optimal value of the regularization strength parameter in order to obtain an unfolded result of a desired property, such as smoothness. Three regularization conditions, using curvature, entropy, and derivatives, are applied, as a model example, to several simulated spectra of top-quark pairs produced in high-energy pp collisions. The existence of a minimum of the χ2 between the unfolded and particle-level spectra is discussed, with recommendations on the checks and validity of the usage of the regularization feature in FBU. Full article
(This article belongs to the Special Issue Particle Physics and Symmetry)
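Regularized FBU can be sketched schematically: the posterior over a truth-level spectrum T combines a Poisson likelihood of the folded spectrum R·T with a regularizing prior exp(−α·C(T)), here a curvature (second-difference) penalty. A short random-walk Metropolis sampler stands in for the paper's combined-sampling machinery; all numbers are toy values:

```python
import numpy as np

# Schematic regularized Fully Bayesian Unfolding on a toy 3-bin spectrum.

rng = np.random.default_rng(3)

R = np.array([[0.7, 0.2, 0.0],          # detector response (migration) matrix
              [0.3, 0.6, 0.3],
              [0.0, 0.2, 0.7]])
truth_true = np.array([120.0, 90.0, 40.0])
data = rng.poisson(R @ truth_true)       # observed detector-level counts

def log_posterior(T, alpha=1e-3):
    if np.any(T <= 0):
        return -np.inf
    mu = R @ T
    log_lik = np.sum(data * np.log(mu) - mu)   # Poisson, constants dropped
    curvature = np.sum(np.diff(T, 2) ** 2)     # second-difference penalty
    return log_lik - alpha * curvature

T = data.astype(float)                   # crude starting point
lp = log_posterior(T)
samples = []
for _ in range(5000):
    proposal = T + rng.normal(0.0, 3.0, size=3)
    lp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        T, lp = proposal, lp_prop
    samples.append(T.copy())
posterior_mean = np.mean(samples[2000:], axis=0)
print(np.round(posterior_mean, 1))
```

Raising α trades likelihood fit for smoothness of the unfolded spectrum, which is exactly the strength parameter the combined-sampling method tunes.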
