Search Results (3,140)

Search Parameters:
Keywords = graphical model

14 pages, 1105 KB  
Article
Exact Soliton Structures and Modulation Instability in Extended Kadomtsev–Petviashvili–Boussinesq Equation
by Nadiyah Hussain Alharthi, Rubayyi T. Alqahtani and Melike Kaplan
Symmetry 2026, 18(4), 626; https://doi.org/10.3390/sym18040626 - 8 Apr 2026
Abstract
In this study, we consider an extended form of the Kadomtsev–Petviashvili–Boussinesq equation motivated by wave propagation phenomena in dissipative media. The primary aim of this work is to construct exact analytical solutions and clarify the types of nonlinear wave structure admitted by the considered model. For this purpose, the Riccati equation expansion method is applied for the first time within this framework. This method allows us to obtain several distinct families of solitary wave solutions whose qualitative behaviors and physical characteristics are illustrated through graphical representations. In addition, modulation instability analysis is carried out to assess the stability of continuous wave solutions and further elucidate the underlying nonlinear dynamics of the system.

30 pages, 919 KB  
Article
A Comprehensive Analysis of Proportional Caputo-Hybrid Fractional Inequalities and Numerical Verification via Artificial Neural Networks
by Ayed R. A. Alanzi, Mariem Al-Hazmy, Raouf Fakhfakh, Wedad Saleh, Abdellatif Ben Makhlouf and Abdelghani Lakhdari
Fractal Fract. 2026, 10(4), 247; https://doi.org/10.3390/fractalfract10040247 - 8 Apr 2026
Abstract
Accuracy in fractional numerical integration is often limited by the regularity of the integrand. This work proposes a flexible error estimation framework for proportional Caputo-hybrid integral operators based on s-convexity. We introduce a parametric Newton–Cotes formula (ν ∈ [0,1]) that bridges the gap between classical quadrature rules, recovering the fractional Trapezoidal, Midpoint, and Simpson's methods as specific instances. In order to confirm the correctness of our results, we provide an illustrative example with graphical representations. Furthermore, we provide some additional results using Hölder's and power mean inequalities and employ a verification strategy based on an Artificial Neural Network (ANN) model. The ANN approach allows for high-dimensional parameter space exploration, demonstrating that the proposed inequalities provide robust and precise error estimates.
(This article belongs to the Special Issue Fractional Integral Inequalities and Applications, 3rd Edition)
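The abstract above describes a one-parameter Newton–Cotes formula with ν ∈ [0,1] that recovers the Trapezoidal, Midpoint, and Simpson rules as special cases. The classical (integer-order) analogue of such a family is a convex blend of the trapezoidal and midpoint estimates, which may help fix intuition for how a single parameter can interpolate between rules; the sketch below shows only that standard classical blend, not the paper's fractional construction, and the function name is hypothetical:

```python
# Classical one-parameter Newton-Cotes family: a convex blend of the
# trapezoidal and midpoint rules on [a, b].
# nu = 1 -> trapezoidal, nu = 0 -> midpoint, nu = 1/3 -> Simpson's rule.
def blended_rule(f, a, b, nu):
    trap = (b - a) * (f(a) + f(b)) / 2.0   # trapezoidal estimate
    mid = (b - a) * f((a + b) / 2.0)       # midpoint estimate
    return nu * trap + (1.0 - nu) * mid

# Simpson's rule (nu = 1/3) integrates cubics exactly:
# the integral of x^3 over [0, 2] is 4.
approx = blended_rule(lambda x: x**3, 0.0, 2.0, 1.0 / 3.0)
```

At ν = 1/3 the endpoint weight becomes 1/6 and the midpoint weight 2/3, exactly Simpson's (b−a)/6·[f(a)+4f(m)+f(b)].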
6 pages, 591 KB  
Proceeding Paper
Decomposition of Large-Scale Quadratic Unconstrained Binary Optimization Problems for Quantum Annealers and Quantum-Inspired Annealers
by Jehn-Ruey Jiang and Qiao-Yi Lin
Eng. Proc. 2026, 134(1), 29; https://doi.org/10.3390/engproc2026134029 - 7 Apr 2026
Abstract
We study the decomposition of large-scale Quadratic Unconstrained Binary Optimization (QUBO) formulations for quantum and quantum-inspired annealers and propose two decomposition mechanisms. The first is one-way-one-hot (1W1H), which replaces a linear inequality with exactly one indicator bank and naturally decomposes the model into many small, parallel subproblems. The second is slack variable range search (SVRS), which introduces a binary-encoded slack and scans restricted windows to balance the number of subproblems and the per-subproblem variable count. Evaluation results using the P08 knapsack problem instance on the Compal Graphic Processing Unit Annealer (CGA) show that SVRS provides a favorable scalability–quality trade-off, while 1W1H remains attractive when the admissible range is small to medium and massive parallelism is available. These results motivate integrating both mechanisms into the National Central University Annealer (NCUA).

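The binary-encoded slack mentioned in the SVRS mechanism above is a standard QUBO trick: an inequality sum(a_i x_i) ≤ b becomes the equality penalty (sum(a_i x_i) + s − b)², with the slack s represented by ceil(log2(b+1)) binary variables. A minimal sketch of that standard encoding (variable and function names are illustrative, not from the paper):

```python
import itertools
import math

# Binary-slack encoding: the inequality  sum(a_i x_i) <= b  becomes the
# equality penalty  (sum(a_i x_i) + s - b)^2, where the slack s is
# represented by ceil(log2(b + 1)) binary variables.
def slack_bits(b):
    return max(1, math.ceil(math.log2(b + 1)))

def penalty(x, s_bits, a, b):
    s = sum(2 ** k * bit for k, bit in enumerate(s_bits))
    return (sum(ai * xi for ai, xi in zip(a, x)) + s - b) ** 2

# Brute-force check on a tiny instance: every feasible assignment of x
# admits a slack setting with zero penalty; every infeasible one does not.
a, b = [2, 3], 4
n_s = slack_bits(b)
for x in itertools.product([0, 1], repeat=len(a)):
    feasible = sum(ai * xi for ai, xi in zip(a, x)) <= b
    zero_reachable = any(penalty(x, s, a, b) == 0
                         for s in itertools.product([0, 1], repeat=n_s))
    assert zero_reachable == feasible
```

The quadratic penalty is what makes the constraint expressible in a QUBO objective; SVRS-style window scanning would then restrict the range the slack has to cover.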
34 pages, 1064 KB  
Article
On Multiparameter Post-Quantum Fractional Quadrature Inequalities with Simulation
by Sobia Rafeeq, Sabir Hussain, Mariyam Aslam and Youngsoo Seol
Fractal Fract. 2026, 10(4), 242; https://doi.org/10.3390/fractalfract10040242 - 6 Apr 2026
Abstract
This paper introduces a comprehensive class of multiparameter post-quantum fractional quadrature inequalities, unifying classical error bounds within the setting of the post-quantum Riemann–Liouville fractional integral. By incorporating multiple parameters, we derive a flexible family of inequalities that generalize well-known quadrature rules such as the Boole-type, Bullen–Simpson-type, Maclaurin-type, corrected Euler–Maclaurin-type, 3/8-Simpson-type, and companion Ostrowski-type estimates. Under assumptions of s-convexity, log-convexity, the power mean inequality, and Hölder's inequality, we establish novel error bounds. Our results provide a unified framework for designing and analyzing post-quantum fractional quadrature inequalities. Applications to special means and numerical and graphical examples are presented to illustrate the applicability and generality of the derived inequalities. This work lays a theoretical foundation for the development of post-quantum fractional quadrature inequalities and offers new tools for error estimation in post-quantum fractional-order models arising in applied sciences and engineering.

13 pages, 744 KB  
Entry
Spatiotemporal Data Science
by Chaowei Yang, Anusha Srirenganathan Malarvizhi, Manzhu Yu, Qunying Huang, Lingbo Liu, Zifu Wang, Daniel Q. Duffy, Siqin Wang, Seren Smith, Shuming Bao and Nan Ding
Encyclopedia 2026, 6(4), 84; https://doi.org/10.3390/encyclopedia6040084 - 6 Apr 2026
Definition
The world evolves continuously across space and time. Massive volumes of data are generated through sensing, simulation, remote observation, and human activities, capturing dynamic processes in environmental, social, economic, and engineered systems. Critical insights are embedded within these large-scale spatiotemporal datasets. Spatiotemporal Data Science provides a conceptual and methodological framework for analyzing such data by integrating spatiotemporal thinking, computational infrastructure, artificial intelligence, and domain knowledge. The field advances methods for data acquisition, harmonization, modeling, visualization, and decision support, enabling applications in natural disaster response, public health, climate adaptation, infrastructure resilience, and geopolitical analysis. By leveraging emerging technologies—including generative Artificial Intelligence (AI), large-scale cloud platforms, Graphics Processing Unit (GPU) acceleration, and digital twin systems—Spatiotemporal Data Science enables scalable, interoperable, and solution-oriented research and innovation. It represents a critical frontier for scientific discovery, engineering advancement, technological innovation, education, and societal benefit. Spatiotemporal Data Science is a transdisciplinary field that studies and models dynamic phenomena across space and time by integrating spatial theory, temporal reasoning, artificial intelligence, and scalable computational infrastructure. It enables the development of adaptive, predictive, and increasingly autonomous systems for understanding and managing complex real-world processes.
(This article belongs to the Collection Data Science)

27 pages, 439 KB  
Article
Bayesian Versus Frequentist Inference in Structural Equation Modeling: Finite-Sample Properties and Economic Applications
by Bojan Baškot, Andrej Ševa, Vesna Lešević and Bogdan Ubiparipović
Mathematics 2026, 14(7), 1198; https://doi.org/10.3390/math14071198 - 3 Apr 2026
Abstract
Structural Equation Modeling (SEM) is a key framework for analyzing complex economic relationships involving latent variables, mediation effects, and endogeneity, yet the choice between frequentist and Bayesian estimation remains theoretically and practically contested, especially in settings with non-stationary data and small samples. This study provides a formal comparison of the two approaches by formulating SEM as a probabilistic graphical model and deriving the corresponding estimation procedures, identifiability conditions, and uncertainty measures. We examine asymptotic properties of frequentist estimators and posterior consistency in Bayesian SEM, with particular attention to integrated time-series SEM applications such as shadow economy estimation. The analysis shows that while both approaches converge under large-sample conditions, important differences arise in finite samples. Bayesian methods exhibit more stable point estimates through coherent uncertainty quantification, particularly when prior information regularizes an otherwise ill-conditioned likelihood. Under model misspecification, Bayesian posteriors concentrate around the pseudo-true parameter defined by the Kullback–Leibler projection, providing a probabilistic representation of misspecification uncertainty through posterior spread, an advantage over frequentist inference, which typically conditions on the maintained model as exact. These findings carry direct implications for empirical economic modeling under realistic data constraints. In settings where sample sizes are small, identification is weak, and model uncertainty is substantial (conditions that routinely characterize macroeconomic research), the choice of inferential framework is not a matter of philosophical preference but a determinant of whether policy-relevant conclusions can be credibly defended. Bayesian SEM offers a principled and transparent path forward in precisely these conditions.
5 pages, 3652 KB  
Editorial
Causal Graphical Models and Their Applications
by Luis Enrique Sucar and David Danks
Entropy 2026, 28(4), 405; https://doi.org/10.3390/e28040405 - 3 Apr 2026
Abstract
The study of causality [...]
(This article belongs to the Special Issue Causal Graphical Models and Their Applications)
33 pages, 2402 KB  
Review
Toward Advanced Sensing and Data-Driven Approaches for Maturity Assessment of Indeterminate Peanut Cropping Systems: Review of Current State and Prospects
by Sathish Raymond Emmanuel Sahayaraj, Abhilash K. Chandel, Pius Jjagwe, Ranadheer Reddy Vennam, Maria Balota and Arunachalam Manimozhian
Sensors 2026, 26(7), 2208; https://doi.org/10.3390/s26072208 - 2 Apr 2026
Abstract
Determining the optimal harvest time is among the most critical economic decisions for peanut (Arachis hypogaea L.) growers, directly influencing yield, quality, and market value. Unlike many other crops, peanuts are indeterminate, continuing to flower and produce pods throughout their life cycle. As a result, pod development and maturation are asynchronous, making harvest timing particularly challenging. Conventional maturity estimation techniques, including the hull scrape method, pod blasting, and visual maturity profiling, are invasive, labor-intensive, time-consuming, and spatially limited. Moreover, differences in cultivar maturity rates and agroclimatic conditions exacerbate inconsistencies in maturity prediction. These challenges highlight the urgent need for scalable, objective, and data-driven methods to support growers in achieving optimal harvest outcomes. This review synthesizes the current understanding of peanut pod maturity and evaluates existing traditional and non-invasive approaches for maturity estimation. It aims to identify the limitations of conventional techniques and explore the integration of advanced sensing technologies, artificial intelligence (AI), and geospatial analytics to enhance precision and scalability in peanut maturity assessment and harvest decision-making. This review examines traditional destructive techniques such as the hull scrape method and pod blasting, followed by emerging non-invasive methods employing proximal and remote sensing platforms. Applications of vegetation indices, multispectral and hyperspectral imaging, and AI-based data analytics are discussed in the context of maturity prediction. Additionally, the potential of multimodal remote sensing data fusion and digital frameworks integrating spatial big data analytics, centralized data management, and cloud-based graphical interfaces is explored as a pathway toward end-to-end decision-support systems. 
Recent advances in non-invasive sensing and AI-assisted modeling have demonstrated significant improvements in scalability, precision, and automation compared with traditional manual approaches. However, their effectiveness remains constrained by the limited inclusion of agroclimatic, phenological, and cultivar-specific variables. Furthermore, the translation of model outputs into actionable, field-level harvest decisions is still underdeveloped, underscoring the need for integrated, user-centric digital infrastructure. Achieving a robust and transferable digital peanut maturity estimation system will require comprehensive ground-truth data across cultivars, regions, and growing seasons. Multidisciplinary collaborations among agronomists, data scientists, growers, and technology providers will be essential for developing practical, field-ready solutions. Integrating AI, multimodal sensing, and geospatial analytics holds immense potential to transform peanut maturity estimation. Such innovations promise to enhance harvest precision, economic returns, and sustainability while reducing manual effort and uncertainty, ultimately improving the efficiency and quality of life for peanut producers worldwide.
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2026)

15 pages, 8086 KB  
Article
Exploring the Interplay Between Soaked Time, Exposed Area, and Solution Volume on Mineral Loss in Enamel and Dentin
by Boyu Ning, Xuefei Chen, Go Inoue, Ling Yu, Heba Elsubeihi, Morihiro Takamatsu, Lin Fan and Yasushi Shimada
Crystals 2026, 16(4), 238; https://doi.org/10.3390/cryst16040238 - 2 Apr 2026
Abstract
Soaking bovine tooth blocks in demineralization solution is a widely used method to simulate caries-like demineralization for further experimental studies. The objective of this study was to evaluate the degree and depth of mineral loss in bovine enamel and dentin blocks under various controlled conditions and to investigate the relationships between these factors and mineral loss, providing guidance for researchers to achieve targeted demineralization outcomes. A total of 54 enamel blocks and 54 dentin blocks were divided into 18 groups according to the exposed area and solution volume and then immersed in demineralization solution. Micro-CT scans were performed before immersion, as well as after 1, 2, 3, 7, and 10 days of immersion. The results were analyzed using data analysis software and subsequently summarized into graphical representations. The analysis revealed that soaking time and solution volume showed positive correlations with mineral loss, whereas the exposed area was negatively correlated with mineral loss. Mean mineral loss increased significantly with immersion time in all groups (e.g., from 6314 to 25,670 vol%·μm in the dentin 3 × 3 mm2, 50 mL group, p < 0.05). After 7 days, specimens immersed in larger solution volumes showed significantly greater mineral loss than those immersed in smaller volumes (p < 0.05). In addition, larger exposed areas resulted in greater mineral loss after 3 days of immersion. Mean mineral loss followed a power function relationship with time when the solution volume was sufficiently high relative to the exposed surface area. In contrast, when the solution volume was limited, a logarithmic relationship between time and mineral loss was observed. Given its superior stability, the mean mineral loss appears to be a more reliable indicator for assessing tooth demineralization. 
Based on our results, more controlled and reproducible demineralization conditions can be achieved, which may contribute to improving the reliability of in vitro caries models and facilitating the evaluation of preventive and therapeutic strategies.
(This article belongs to the Special Issue Novel Dental Materials for Caries Prevention)
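The abstract above reports that mean mineral loss follows a power function of immersion time when the solution volume is ample. A power law y = c·t^k is linear in log-log space, so the exponent can be recovered by ordinary least squares on log-transformed data; a minimal stdlib-only sketch on synthetic data (the numbers are illustrative, not the paper's measurements):

```python
import math

# A power law  y = c * t**k  satisfies  log(y) = log(c) + k * log(t),
# so least squares on the log-transformed data recovers k and c.
def fit_power_law(t, y):
    lx = [math.log(v) for v in t]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    k = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    c = math.exp(my - k * mx)
    return c, k

# Synthetic demineralization-style series sampled at the study's
# time points (days): y = 6000 * t**0.6 (illustrative values only).
t = [1, 2, 3, 7, 10]
y = [6000 * ti ** 0.6 for ti in t]
c, k = fit_power_law(t, y)
```

The same transformed regression with log(t) as the predictor and untransformed y would fit the logarithmic regime the authors observe at limited solution volumes.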

23 pages, 3981 KB  
Article
A Multidisciplinary Methodological Framework for the Enhancement of Underground Built Heritage Through Graphic Design: From Documentation to Interpretation
by Andrés Galera-Rodríguez, Mario Algarín-Comino and Francisco Pinto-Puerto
Heritage 2026, 9(4), 146; https://doi.org/10.3390/heritage9040146 - 1 Apr 2026
Abstract
Often, research on the enhancement of heritage is limited to exhaustive geometric documentation of its elements. This is even more the case with Underground Built Heritage (UBH), where heritage information differs greatly from other types of heritage and is often so complex that many efforts focus on geometrically describing its three-dimensional volume and compiling heritage data. However, precisely because of its uniqueness, these actions are not always sufficient to understand underground heritage assets. This research proposes an itinerary of specific actions: an experimental methodology that covers such documentation but also approaches UBH through sensory interpretation. A taxonomy is combined with a graphic and plastic proposal of physical objects that can be handled and touched, allowing vision and touch to explore the tectonic qualities of these spaces, which are usually hidden or difficult to perceive due to inadequate lighting. The proposed workflow aims to be replicable in other research and applicable, either partially or fully, to other case studies of UBH.
(This article belongs to the Special Issue Exploring Underground Built Heritage)

13 pages, 756 KB  
Article
H2Avatar: Expressive Whole-Body Avatars from Monocular Video via Hierarchical Geometry and Hybrid Rendering
by Jinsong Zhang, Cheng Guan, Zhihua Lin and Yuqin Lin
Big Data Cogn. Comput. 2026, 10(4), 105; https://doi.org/10.3390/bdcc10040105 - 1 Apr 2026
Abstract
Reconstructing photorealistic and animatable whole-body avatars from monocular videos is a hot topic in computer vision and computer graphics. However, existing methods still face challenges due to the limited frequency response of single-scale geometry encodings and the instability of appearance modeling without an explicit surface anchor. In this paper, we present H2Avatar, a real-time framework that builds on a mesh-embedded 3D Gaussian representation guided by SMPL-X and disentangles geometry and appearance into hierarchical and hybrid components. For geometry, we propose a semantic-aware hierarchical encoding based on a multi-scale tri-plane pyramid, where features at different resolutions capture both global structure and high-frequency surface details such as clothing wrinkles. For appearance, we introduce a hybrid rendering strategy that anchors canonical colors using a learnable UV texture map, and complements it with a neural residual color branch conditioned on tri-plane features, pose embedding, and surface normals to model pose- and view-dependent shading variations. This design improves temporal stability and preserves identity details while enhancing photorealism under complex motions. Experiments on the NeuMan dataset demonstrate that H2Avatar consistently outperforms representative baselines across multiple sequences, outperforming ExAvatar by up to 0.66 dB in PSNR and reducing LPIPS by up to 16.3%. These results validate the effectiveness of hierarchical geometry encoding and texture-anchored hybrid appearance modeling.
(This article belongs to the Special Issue Application of Pattern Recognition and Machine Learning)
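The PSNR gains quoted above follow the standard definition PSNR = 10·log10(MAX²/MSE), so a fixed dB difference corresponds to a fixed MSE ratio. A minimal sketch (toy pixel lists, not the paper's evaluation pipeline):

```python
import math

# Peak signal-to-noise ratio for pixel values in [0, 1]:
# PSNR = 10 * log10(MAX^2 / MSE); higher is better.
def psnr(reference, rendered, peak=1.0):
    mse = (sum((r - p) ** 2 for r, p in zip(reference, rendered))
           / len(reference))
    return 10.0 * math.log10(peak ** 2 / mse)

# Halving the per-pixel error quarters the MSE, a gain of
# 10 * log10(4) ~= 6.02 dB regardless of image content.
ref = [0.2, 0.5, 0.8, 0.4]
worse = [v + 0.10 for v in ref]    # uniform error of 0.10
better = [v + 0.05 for v in ref]   # uniform error of 0.05
gain = psnr(ref, better) - psnr(ref, worse)
```

On this scale, the reported 0.66 dB improvement over ExAvatar corresponds to roughly a 14% reduction in MSE.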

25 pages, 2236 KB  
Article
On the Unambiguous, Traceable and Dimensionally Homogeneous Calculation of Per-Unit Parameters for the Two-Mass Drive Train Model of a Set of Reference Wind Turbines
by Joel Rodríguez-Guillén, Rubén Salas-Cabrera, Bárbara María-Esther García-Morales, Miguel A. García-Morales and Juan Frausto-Solís
Math. Comput. Appl. 2026, 31(2), 51; https://doi.org/10.3390/mca31020051 - 1 Apr 2026
Abstract
The Bond Graph (BG) methodology, a multi-domain graphical description formalism, is used to study a horizontal-axis two-mass drive train of a wind turbine. The main contribution of this work is to address the lack of wind energy literature dealing with fully unambiguous, traceable, and dimensionally homogeneous per-unit quantities for two-mass drive train models. Data in real quantities for the drive train are collected from open-access datasheets and their corresponding design information files. Wind turbines that may serve as Reference Wind Turbines (RWTs), with traceable calculations, are carefully selected. A lumped-parameter order-reduction method is employed to convert data from higher-order models into data for a reduced-order two-mass model. The BG methodology is then used to formally derive the per-unit drive train model and its corresponding dimensionally homogeneous per-unit parameters for a set of six representative Reference Wind Turbines, covering a nominal power range from 0.75 MW to 5 MW.
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2025)
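Dimensional homogeneity in per-unit drive train models comes down to deriving every base quantity from a consistent pair of power and speed bases. The sketch below shows only the standard power-system conventions (torque base T_base = S_base/ω_base and inertia constant H = ½·J·ω_base²/S_base), not necessarily the exact bases the paper derives via Bond Graphs, and the turbine numbers are hypothetical:

```python
import math

# Standard per-unit base relations for a rotating shaft: a base power
# S_base [W] and base mechanical speed w_base [rad/s] fix the torque
# base T_base = S_base / w_base [N*m]; a physical inertia J [kg*m^2]
# maps to the inertia constant H = 0.5 * J * w_base**2 / S_base [s].
def torque_base(s_base, w_base):
    return s_base / w_base

def inertia_constant(j, s_base, w_base):
    return 0.5 * j * w_base ** 2 / s_base

# Illustrative (hypothetical) 2 MW turbine, rotor rated at 18 rpm.
s_base = 2.0e6
w_base = 18.0 * 2.0 * math.pi / 60.0   # rpm -> rad/s
t_base = torque_base(s_base, w_base)
# The rated-torque operating point is exactly 1.0 per unit by construction.
torque_pu = (s_base / w_base) / t_base
```

Because every derived base traces back to (S_base, ω_base), per-unit equations stay dimensionally homogeneous by construction, which is the property the abstract emphasizes.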

48 pages, 27526 KB  
Article
Skipping Energy Simulation with S-TCML: A Surrogate Machine Learning Sustainable Framework for Real-Time Thermal Comfort Evaluation in Office Buildings
by Mayar El-Sayed Moeat, Naglaa Ali Megahed, Rehab F. Abdel-Kader and Dina Samy Noaman
Sustainability 2026, 18(7), 3381; https://doi.org/10.3390/su18073381 - 31 Mar 2026
Abstract
The digital and green transitions in the AEC sector require rapid, data-driven workflows to redefine sustainability through real-time performance evaluation. However, the high computational cost of traditional energy simulations often deprives early-stage design of evidence-based feedback. This study introduces a surrogate machine learning framework (S-TCML) designed to bypass traditional energy simulation by providing an instantaneous assessment of thermal comfort. Using a parametric Grasshopper–Honeybee environment, a dataset of 3072 configurations was generated for an office room in Cairo, Egypt. Six machine learning algorithms were benchmarked, with Gradient Boosting and Random Forest demonstrating superior performance in capturing non-linear thermal physics. Validation against the EnergyPlus engine confirmed that S-TCML models deliver predictions in milliseconds, a 99.9% reduction in computational time. The Gradient Boosting model achieved exceptional accuracy with an R2 of 0.999 and RMSE of 0.013 for PMV and an R2 of 0.995 and RMSE of 0.46% for PPD prediction. Feature importance analysis proved that a tree-based ML model can capture the underlying physical relationship between variables. To bridge the feedback gap, a web-based graphical user interface (GUI) was developed to facilitate proactive design exploration. This framework supports sustainable decision-making and design efficiency, offering scalable, user-friendly tools that protect occupant health and ensure thermal resilience in hot–arid environments.

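The surrogate accuracy figures quoted above (R² of 0.999, RMSE of 0.013 for PMV) use the standard goodness-of-fit definitions RMSE = sqrt(mean squared error) and R² = 1 − SS_res/SS_tot. A minimal stdlib sketch on toy PMV-style values (the data are illustrative, not from the study):

```python
import math

# Standard surrogate-model evaluation metrics.
def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Toy PMV-style targets and near-perfect surrogate predictions:
# small residuals push R^2 toward 1 and RMSE toward 0.
y_true = [-0.5, 0.0, 0.3, 0.8]
y_pred = [-0.45, 0.02, 0.28, 0.82]
score, error = r2(y_true, y_pred), rmse(y_true, y_pred)
```

R² close to 1 means the surrogate explains nearly all variance of the EnergyPlus targets, which is why it can stand in for the full simulation at early design stages.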
34 pages, 556 KB  
Article
Finsler-Randers-Bianchi Type-V Cosmological Model and Modified f(R,T) Gravity in Lyra Geometry
by Sachin Kumar, Praduman Kumar Dwivedi, Chayan Kumar Mishra, Ioannis Ampazis and Panayiotis C. Stavrinos
Universe 2026, 12(4), 100; https://doi.org/10.3390/universe12040100 - 31 Mar 2026
Abstract
In this research paper, we investigate a Finsler–Randers spacetime in the context of a Bianchi type-V model of the universe within the framework of Lyra geometry, employing a modified f(R,T) gravity theory that incorporates a cosmological constant Λ. We have derived the corresponding anisotropic Friedmann equations for the Finsler–Randers Bianchi type-V model of the universe with modified f(R,T) gravity in Lyra geometry, including the contributions of the cosmological constant and the Randers anisotropic term b₀(t), and obtained analytical solutions. Further, we have examined the behavior of various dynamical parameters commonly used in cosmological analysis, and both geometrical and graphical interpretations have been provided. Furthermore, we have derived the Raychaudhuri equation in terms of the cosmological constant as a function of the cosmic time t. Our analysis reveals that the shear scalar σ² and the scalar expansion θ decrease with cosmic time and tend to zero at late times, indicating the isotropization of the universe in the presence of the cosmological constant; however, the Hubble parameter approaches a constant value rather than vanishing, while the energy density ρ, pressure P, and the Lyra gauge function β remain finite and non-zero even at large cosmic times. Ultimately, we conclude that the universe described by this framework exhibits continuous acceleration, as indicated by the negative value of the deceleration parameter q.

22 pages, 3971 KB  
Article
A Multi-Scale Dense Perception and Scale-Adaptive Approach for Blueberry Ripeness Detection
by Shutao Guo, Ning Yang and Shanchen Pang
Foods 2026, 15(7), 1161; https://doi.org/10.3390/foods15071161 - 30 Mar 2026
Abstract
Accurate blueberry ripeness detection is crucial for intelligent harvesting but is challenged by complex orchard environments involving small, dense fruit clusters. This study proposes BBYOLOv12, an improved YOLOv12 model, to address missed detections and ripeness misjudgments. The method integrates a lightweight RepGhost backbone for efficient multi-scale feature extraction, a modified SimAM attention mechanism to enhance feature capture in dense regions, and an improved WIoU loss function to optimize small object localization. Evaluated on a self-built dataset, BBYOLOv12 achieved a mAP@0.5 of 98.97%, mAP@0.5:0.95 of 83.55%, precision of 97.55%, and recall of 97.27%, outperforming baseline and mainstream lightweight models. The model maintains high accuracy with only 2.36 million parameters and 5.59 GFLOPs, reducing complexity relative to the baseline. A practical Graphical User Interface was also developed for real-time detection and statistical analysis. This research provides an effective technical solution for multi-scale, dense perception tasks in agricultural applications.
(This article belongs to the Section Food Analytical Methods)
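The mAP@0.5 and mAP@0.5:0.95 scores above threshold each detection on intersection-over-union (IoU) with a ground-truth box, and WIoU-style losses build on the same overlap measure. A minimal sketch of plain axis-aligned IoU (not the paper's weighted variant):

```python
# Intersection-over-union for axis-aligned boxes (x1, y1, x2, y2):
# the overlap criterion behind mAP@0.5-style detection metrics.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two unit squares offset by half a side overlap with IoU = 1/3,
# which would count as a miss at the 0.5 threshold.
score = iou((0.0, 0.0, 1.0, 1.0), (0.5, 0.0, 1.5, 1.0))
```

mAP@0.5:0.95 averages the precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two reported numbers.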
