Search Results (278)

Search Parameters:
Keywords = geometric criterion

20 pages, 7332 KiB  
Article
Analytical Derivation of the q-Factor for Slender Masonry Structures Under Out-of-Plane Seismic Action
by Simona Coccia
Buildings 2025, 15(15), 2622; https://doi.org/10.3390/buildings15152622 - 24 Jul 2025
Viewed by 219
Abstract
Slender masonry structures, in the absence of disintegration phenomena, can be idealized as rigid bodies subjected to seismic excitation. In this study, a closed-form expression for the behavior factor (q-factor) associated with overturning collapse under out-of-plane seismic loading is derived. The analysis considers five-step pulse seismic inputs. In the proposed approach, valid for slender masonry structures, sliding failure is neglected, and collapse is assumed to occur when, at the end of the seismic excitation, the rotation of the structure reaches a value equal to its slenderness. Based on this criterion, it is possible to derive a formulation for the q-factor as a function of a dimensionless parameter that combines the geometric characteristics of the slender structure and the period of the applied accelerogram. To validate the proposed formulation, a comparative analysis is conducted against the results obtained from a numerical integration of the motion equation using a set of 20 natural accelerograms recorded in Italy. The characteristic period of each accelerogram is evaluated through different methodologies, with the aim of identifying the most suitable approach for application in simplified seismic assessment procedures. Full article
(This article belongs to the Special Issue Seismic Assessment of Unreinforced Masonry Buildings)
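
The overturning criterion described in this abstract (collapse when the residual rotation of the rocking body reaches the slenderness angle) can be illustrated with the classical Housner rigid-block rocking model. The sketch below integrates that equation of motion for an assumed slender rectangular block under a single rectangular acceleration pulse and reports whether the peak rotation exceeds the slenderness angle; the block dimensions, pulse amplitude and duration, the fixed-step integrator, and the overturning check are illustrative assumptions and do not reproduce the paper's five-step pulse input or its closed-form q-factor.

```python
# Illustrative Housner rocking sketch (not the paper's derivation): a slender
# rectangular block rocking about its base corners under a rectangular pulse.
import numpy as np

g = 9.81
b, h = 0.2, 1.2                             # half-width, half-height [m] (assumed)
R = np.hypot(b, h)                          # pivot-to-centroid distance
alpha = np.arctan(b / h)                    # slenderness angle [rad]
p2 = 3.0 * g / (4.0 * R)                    # p^2 for a rectangular block
r = (1.0 - 1.5 * np.sin(alpha) ** 2) ** 2   # Housner impact restitution factor

a_g, t_pulse, t_end, dt = 0.35 * g, 0.5, 4.0, 1e-4   # pulse and run time (assumed)

def theta_ddot(t, th):
    """Rocking equation: theta'' = -p^2 [sin(s*alpha - th) + (ug/g) cos(s*alpha - th)]."""
    s = np.copysign(1.0, th) if th != 0.0 else 1.0
    ug = a_g if t <= t_pulse else 0.0
    return -p2 * (np.sin(s * alpha - th) + ug / g * np.cos(s * alpha - th))

th, om, th_max = 0.0, 0.0, 0.0
for k in range(int(t_end / dt)):
    om += theta_ddot(k * dt, th) * dt        # semi-implicit Euler step
    th_new = th + om * dt
    if th != 0.0 and np.sign(th_new) != np.sign(th):
        om *= r                              # impact: angular velocity reduced
    th = th_new
    th_max = max(th_max, abs(th))

print(f"alpha = {np.degrees(alpha):.2f} deg, peak |theta| = {np.degrees(th_max):.2f} deg, "
      f"overturned: {th_max >= alpha}")
```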

18 pages, 960 KiB  
Article
Hybrid Algorithm via Reciprocal-Argument Transformation for Efficient Gauss Hypergeometric Evaluation in Wireless Networks
by Jianping Cai and Zuobin Ying
Mathematics 2025, 13(15), 2354; https://doi.org/10.3390/math13152354 - 23 Jul 2025
Viewed by 123
Abstract
The rapid densification of wireless networks demands efficient evaluation of special functions underpinning system-level performance metrics. To facilitate research, we introduce a computational framework tailored for the zero-balanced Gauss hypergeometric function Ψ(x,y) ≡ ₂F₁(1, x; 1+x; y), a fundamental mathematical kernel emerging in Signal-to-Interference-plus-Noise Ratio (SINR) coverage analysis of non-uniform cellular deployments. Specifically, we propose a novel Reciprocal-Argument Transformation Algorithm (RTA), derived rigorously from a Mellin–Barnes reciprocal-argument identity, achieving geometric convergence with ratio O(1/y). By integrating RTA with a Pfaff-series solver into a hybrid algorithm guided by a golden-ratio switching criterion, our approach ensures optimal efficiency and numerical stability. Comprehensive validation demonstrates that the hybrid algorithm reliably attains machine-precision accuracy (10⁻¹⁶) within 1 μs per evaluation, dramatically accelerating calculations in realistic scenarios from hours to fractions of a second. Consequently, our method significantly enhances the feasibility of tractable optimization in ultra-dense non-uniform cellular networks, bridging the computational gap in large-scale wireless performance modeling.
(This article belongs to the Special Issue Advances in High-Performance Computing, Optimization and Simulation)
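
For orientation, the kernel named above has an elementary series representation: ₂F₁(1, x; 1+x; y) is the sum over k ≥ 0 of x/(x + k) · y^k for |y| < 1, which makes a quick numerical sanity check straightforward. The sketch below sums that series directly and compares it against SciPy's general-purpose routine; the arguments are illustrative, and the paper's reciprocal-argument transformation and golden-ratio switching rule are not reproduced here.

```python
# Plain truncated-series evaluation of Psi(x, y) = 2F1(1, x; 1+x; y), |y| < 1.
# Sanity check only; the RTA and hybrid switching of the paper are not implemented.
from scipy.special import hyp2f1

def psi_series(x, y, tol=1e-16, kmax=100_000):
    """Sum of x/(x + k) * y**k over k >= 0, i.e. 2F1(1, x; 1+x; y) for |y| < 1."""
    total = 0.0
    for k in range(kmax):
        term = x / (x + k) * y**k
        total += term
        if abs(term) < tol * max(1.0, abs(total)):
            break
    return total

x, y = 2.5, 0.6                    # illustrative arguments
print(psi_series(x, y))            # direct series
print(hyp2f1(1.0, x, 1.0 + x, y))  # SciPy reference; should agree to ~1e-15
```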

21 pages, 4581 KiB  
Article
Deformation Response and Load Transfer Mechanism of Collar Monopile Foundations in Saturated Cohesive Soils
by Zhuang Liu, Lunliang Duan, Yankun Zhang, Linhong Shen and Pei Yuan
Buildings 2025, 15(14), 2392; https://doi.org/10.3390/buildings15142392 - 8 Jul 2025
Viewed by 286
Abstract
The collar monopile foundation is a new type of offshore wind power foundation. This paper explores the horizontal bearing performance of the collar monopile foundation in saturated cohesive soil through a combination of physical experiments and numerical simulations. After analyzing the deformation characteristics of the pile–soil system under horizontal load through static load tests, horizontal cyclic loading tests were conducted for different numbers of cycles to study the cumulative deformation law of the collar monopile. Based on a stiffness degradation model for soft clay, a USDFLD subroutine was developed in Fortran and embedded in ABAQUS. Coupled with the Mohr–Coulomb criterion, it was used to simulate the deformation behavior of the collar monopile under horizontal cyclic loading. The numerical model employed the same geometric dimensions and boundary conditions as the physical test, and the simulated cumulative pile–head displacement under 4000 load cycles showed good agreement with the experimental results, thereby verifying the rationality and reliability of the proposed simulation method. Through numerical simulation, the distributions of bending moment and shear force in the collar monopile foundation were studied, and the influence of the pile shaft and the collar on the horizontal bearing capacity at different loading stages was analyzed. The results show that as the horizontal load increases, cracks gradually appear at the bottom of the collar and in the surrounding soil. The soil disturbance caused by the sliding and rotation of the collar gradually increases, leading to plastic failure of the surrounding soil and reducing the bearing capacity. The excess pore water pressure in shallow soil increases rapidly in the early cycles and then gradually decreases with the formation of drainage channels. Deep soil may experience negative pore pressure, indicating the presence of a suction effect. This paper provides theoretical support for the design optimization and performance evaluation of collar monopile foundations in offshore wind power engineering applications.
(This article belongs to the Section Building Structures)

26 pages, 7731 KiB  
Article
A Finite Element Approach to the Upper-Bound Bearing Capacity of Shallow Foundations Using Zero-Thickness Interfaces
by Yu-Lin Lee, Yu-Tang Huang, Chi-Min Lee, Tseng-Hsing Hsu and Ming-Long Zhu
Appl. Sci. 2025, 15(14), 7635; https://doi.org/10.3390/app15147635 - 8 Jul 2025
Viewed by 252
Abstract
This study presents a robust numerical framework for evaluating the upper-bound ultimate bearing capacity of shallow foundations in cohesive and C-phi soils using a self-developed finite element method. The model incorporates multi-segment zero-thickness interface elements to accurately simulate soil discontinuities and progressive failure mechanisms, based on the Mohr–Coulomb failure criterion. In contrast to optimization-based methods such as discontinuity layout optimization (DLO) or traditional finite element limit analysis (FELA), the proposed approach uses predefined failure mechanisms to improve computational transparency and efficiency. A variety of geometric failure mechanisms are analyzed, including configurations with triangular, circular, and logarithmic spiral slip surfaces. Particular focus is given to the transition zone, which is discretized into multiple blocks to enhance accuracy and convergence. The method is developed for two-dimensional problems under the assumption of elastic deformable-plastic behavior and homogeneous isotropic soil, with limitations in automatically detecting failure mechanisms. The proposed approach is validated against classical theoretical solutions, demonstrating excellent agreement. For friction angles ranging from 0° to 40°, the computed bearing capacity factors Nc and Nq show minimal deviation from the analytical results, with errors as low as 0.04–0.19% and 0.12–2.43%, respectively. The findings confirm the method’s effectiveness in capturing complex failure behavior, providing a practical and accurate tool for geotechnical stability assessment and foundation design. Full article
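
The Nc and Nq errors quoted above are measured against the classical analytical bearing capacity factors. A short sketch of those reference formulas (Prandtl/Reissner form, with Nc = π + 2 at φ = 0) is given below for the same 0–40° range; it covers only the benchmark side of the comparison, not the paper's finite element procedure, and the tabulation loop is illustrative.

```python
# Classical bearing capacity factors used as the analytical benchmark:
#   Nq = exp(pi * tan(phi)) * tan(pi/4 + phi/2)^2,  Nc = (Nq - 1) / tan(phi),
# with Nc = pi + 2 in the limit phi = 0.
import numpy as np

def bearing_capacity_factors(phi_deg):
    """Return (Nc, Nq) for a friction angle in degrees."""
    phi = np.radians(phi_deg)
    Nq = np.exp(np.pi * np.tan(phi)) * np.tan(np.pi / 4 + phi / 2) ** 2
    Nc = (Nq - 1.0) / np.tan(phi) if phi_deg > 0 else np.pi + 2.0
    return Nc, Nq

for phi in (0, 10, 20, 30, 40):
    Nc, Nq = bearing_capacity_factors(phi)
    # An upper-bound value N_num would be compared as |N_num - N| / N.
    print(f"phi = {phi:2d} deg   Nc = {Nc:7.2f}   Nq = {Nq:7.2f}")
```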

26 pages, 6597 KiB  
Article
A Comparative Study of Three-Dimensional Flow Based, Geometric, and Empirical Tortuosity Models in Carbonate and Sandstone Reservoirs
by Benedicta Loveni Melkisedek, Yoevita Emeliana and Irwan Ary Dharmawan
Appl. Sci. 2025, 15(13), 7467; https://doi.org/10.3390/app15137467 - 3 Jul 2025
Viewed by 377
Abstract
Understanding tortuosity is essential for accurately modeling fluid flow in complex porous media, particularly in subsurface reservoir rock; therefore, tortuosity estimation was evaluated using three approaches: streamline simulations via the Lattice Boltzmann Method (LBM), geometric pathfinding using Dijkstra’s algorithm, and empirical modeling based on pore-structure parameters. The analysis encompassed 1963 micro-Computed Tomography (micro-CT) images of Brazilian pre-salt carbonate and sandstone samples, with the effective porosity extracted from LBM velocity fields, isolating flow-contributing pores and establishing streamline tortuosity as the reference standard. Sandstones exhibited relatively narrow tortuosity ranges (Dijkstra: 1.29–1.75; Streamline: 1.18–2.61; Empirical: 1.18–4.42), whereas carbonates displayed greater heterogeneity (Dijkstra: 1.00–3.18; Streamline: 1.00–3.68; Empirical: 1.59–4.93). Model performance, assessed using the corrected Akaike Information Criterion (AICc), revealed that the best agreement with the data was achieved by the semi-empirical model incorporating coordination number and minimum throat length (AICc = −113.11), followed by the Dijkstra-based geometrical approach (−99.74) and the empirical porosity-based model (202.23). There was a nonlinear inverse correlation between tortuosity and effective porosity across lithologies. This comprehensive comparison underscores the importance of incorporating multiple pore-scale parameters for robust tortuosity prediction, improving the understanding of flow behavior in heterogeneous reservoir rocks.
(This article belongs to the Section Fluid Science and Technology)
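
The model ranking above rests on the corrected Akaike Information Criterion. A minimal sketch of an AICc computation from least-squares residuals is shown below; the synthetic tortuosity arrays, parameter counts, and example models are placeholders, not the study's data or fitted models.

```python
# AICc for least-squares model comparison:
#   AIC  = n * ln(RSS / n) + 2k
#   AICc = AIC + 2k(k + 1) / (n - k - 1),   lower is better.
import numpy as np

def aicc(obs, pred, k):
    """Corrected AIC from residuals; k = number of fitted parameters."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    n = obs.size
    rss = np.sum((obs - pred) ** 2)
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Illustrative synthetic example: compare a "good" and a "poor" candidate model.
rng = np.random.default_rng(0)
tau_obs = 1.0 + rng.gamma(2.0, 0.3, size=200)          # stand-in reference tortuosity
model_a = tau_obs + rng.normal(0.0, 0.05, size=200)    # tight 3-parameter candidate
model_b = tau_obs + rng.normal(0.0, 0.30, size=200)    # loose 2-parameter candidate
print(aicc(tau_obs, model_a, k=3), aicc(tau_obs, model_b, k=2))
```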

16 pages, 4986 KiB  
Article
Numerical Impact Assessment Based on Experiments for Steel Stiffened Panels with and Without Prior Dent
by Sara A. Eltawosy, Heba S. El-Kilani, Elsayed Fathallah and S. Saad-Eldeen
J. Mar. Sci. Eng. 2025, 13(7), 1263; https://doi.org/10.3390/jmse13071263 - 29 Jun 2025
Viewed by 336
Abstract
Many ship panels may be subjected to operational or accidental impact loads, and increased crashworthiness is a desirable design feature. A designer may reach this goal using different structural configurations that are available nowadays. However, the selection of the appropriate design parameters is not simple, due to the complexity of predicting impact response. This research is based on published experimental crashworthiness results of a steel stiffened panel tested under low-velocity impact loading. A series of finite element analyses is performed to develop a master model that can be applied to different parameters. The results showed good agreement between the developed finite element model and the experimental results, which confirms the accuracy and reliability of the numerical model. A parametric study is carried out to investigate the effect of design parameters such as plating thickness, stiffener section modulus, stiffener spacing, and stiffener profiles on the crashworthiness characteristics of the calibrated model; the geometrical configurations that offer the best crashworthiness without a considerable increase in weight may then be determined based on a proposed criterion. To cover complex realistic scenarios during operation, pre-existing mechanical damage consisting of a specified dent is applied to the intact panel, to check the survivability of the dented panel with respect to the intact one. Finally, simplified design guidelines are proposed to improve both the safety and structural integrity characteristics of the structural configurations considered.
(This article belongs to the Section Ocean Engineering)

16 pages, 2567 KiB  
Article
LEO-Enhanced BDS-3 PPP Performance Based on B2b Signal
by Ju Hong, Rui Tu, Yangyang Liu, Yulong Ge and Fangxin Li
Remote Sens. 2025, 17(13), 2183; https://doi.org/10.3390/rs17132183 - 25 Jun 2025
Viewed by 317
Abstract
Since 2020, the BDS-3 has been providing real-time corrections via the B2b signal, enabling users in China and its neighboring regions to achieve kinematic positioning accuracy at the decimeter level. The rapid geometric changes of Low-Earth-Orbit (LEO) satellites facilitate the rapid resolution of phase ambiguities and accelerate the convergence of Precise Point Positioning (PPP). Therefore, this study proposes an LEO-enhanced BDS-3 PPP-B2b positioning model. Firstly, a novel BDS-3 PPP model accounting for satellite clock bias characteristics is proposed, and experimental validation confirms its efficacy. Subsequently, an LEO-enhanced BDS-3 PPP model is developed. Finally, the positioning performance is rigorously evaluated using combined LEO simulation observations and BDS-3 observations. The results indicate that, compared with the traditional PPP model, the new model yields an average convergence time of 25.1 min for experiments where the convergence criterion is jointly satisfied, representing a 35.6% improvement in convergence speed, while maintaining the same positioning accuracy after convergence. When augmented with LEO satellites, the convergence time of the BDS-3 PPP-B2b solution is reduced to less than 2 min. Furthermore, when more than three LEO satellites are available, the mean convergence time is shortened to within 1 min. Full article

17 pages, 3252 KiB  
Article
Calculation of Activity Concentration Index for an Internal Space in a Concrete Structure
by Stamatia Gavela, Georgios Papadakos and Nikolaos Nikoloutsopoulos
Buildings 2025, 15(12), 2075; https://doi.org/10.3390/buildings15122075 - 16 Jun 2025
Viewed by 958
Abstract
The Activity Concentration Index (ACI), defined in Directive 2013/59/Euratom, serves as a criterion for the radiological significance of Naturally Occurring Radioactive Materials (NORMs) concentrated in building materials, considering related exposures due to the external gamma radiation field but not due to radon concentration levels. This study proposes a simple way of applying the ACI to interior spaces when concrete is the dominant construction material. Three calculation methods were examined, using four spaces within existing buildings, namely Method A, using the building elements’ mass proportions as a weighting factor; Method B, using only the geometrical characteristics of the internal space; and Method C, combining the mass proportions and inverse square distances. This methodology proposes a way of calculating the ACI based on data provided by existing studies about NORM concentrations in building materials and, thus, no sampling and subsequent NORM concentration measurements were required. The spatial data could be easily determined using either building plans or in situ measurements, using a handheld laser distance meter. The advantages and disadvantages of all three methods were analyzed, along with a comparison to in situ gamma radiation field measurements, performed with a portable Geiger–Müller detector. All the methods showed proportionality to the measured values. Method C was found to be the most suitable, especially for existing buildings, and Method A is recommended for early-stage design assessments. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
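
For reference, the index itself is the weighted sum defined in Annex VIII of Directive 2013/59/Euratom: ACI = C(Ra-226)/300 + C(Th-232)/200 + C(K-40)/3000, with activity concentrations in Bq/kg. The sketch below combines per-element ACI values with mass-proportion weights in the spirit of Method A described above; the element list, concentrations, and weights are illustrative placeholders, and the paper's Methods B and C (geometry- and distance-based weighting) are not reproduced.

```python
# Activity Concentration Index per Directive 2013/59/Euratom, combined across
# building elements with mass-proportion weights (illustrative values only).
def aci(c_ra226, c_th232, c_k40):
    """ACI from NORM activity concentrations in Bq/kg."""
    return c_ra226 / 300.0 + c_th232 / 200.0 + c_k40 / 3000.0

elements = [
    # (mass fraction, Ra-226, Th-232, K-40) -- placeholder concrete elements
    (0.55, 30.0, 25.0, 400.0),   # e.g. floor slab
    (0.30, 35.0, 30.0, 450.0),   # e.g. walls
    (0.15, 25.0, 20.0, 350.0),   # e.g. ceiling
]

aci_weighted = sum(w * aci(ra, th, k) for w, ra, th, k in elements)
print(f"mass-weighted ACI = {aci_weighted:.2f} (the Directive uses 1 as the screening value)")
```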

14 pages, 2309 KiB  
Article
Multiscale and Failure Analysis of Periodic Lattice Structures
by Young Kwon and Matthew Minck
Appl. Sci. 2025, 15(12), 6701; https://doi.org/10.3390/app15126701 - 14 Jun 2025
Viewed by 367
Abstract
A full-cycle, multiscale analysis technique was developed for periodic lattice structures with geometric repetition, aiming for more efficient modeling to predict their failure loads. The full-cycle analysis includes both upscaling and downscaling procedures. The objective of the upscaling procedure is to obtain the effective material properties of the lattice structures such that the lattice structures can be analyzed as continuum models. The continuum models are analyzed to determine the structures’ displacements or buckling failure loads. Then, the downscaling process is applied to the continuum models to determine the stresses in actual lattice members, which were applied to the stress- and stress-gradient-based failure criterion to predict failure. Example problems were presented to demonstrate the accuracy and reliability of the proposed multiscale analysis technique. The results from the multiscale analysis were compared to those of the discrete finite element analysis without any homogenization. Furthermore, physical experiments were also conducted to determine the failure loads. Then, multiscale analysis was undertaken in conjunction with the failure criterion, based on both stress and stress-gradient conditions, to compare the predicted failure loads to the experimental data.

20 pages, 4858 KiB  
Article
Sensitive Multispectral Variable Screening Method and Yield Prediction Models for Sugarcane Based on Gray Relational Analysis and Correlation Analysis
by Shimin Zhang, Huojuan Qin, Xiuhua Li, Muqing Zhang, Wei Yao, Xuegang Lyu and Hongtao Jiang
Remote Sens. 2025, 17(12), 2055; https://doi.org/10.3390/rs17122055 - 14 Jun 2025
Viewed by 440
Abstract
Sugarcane yield prediction plays a pivotal role in enabling farmers to monitor crop development and optimize cultivation practices, guiding harvesting operations for sugar mills. In this study, we established three experimental fields in Guangxi, China, each planted with one of three main sugarcane cultivars, implementing a multi-gradient fertilization design with 39 plots and 810 sampling grids. Multispectral imagery was acquired by unmanned aerial vehicles (UAVs) during five critical growth stages: mid-tillering (T1), late-tillering (T2), mid-elongation (T3), late-elongation (T4), and maturation (T5). Following rigorous image preprocessing (including stitching, geometric correction, and radiometric correction), 16 vegetation indices (VIs) were extracted. To identify yield-sensitive VIs, a spectral feature selection criterion combining gray relational analysis and correlation analysis (GRD-r) was proposed. Subsequently, three supervised learning algorithms—Gradient Boosting Decision Tree (GBDT), Random Forest (RF), and Support Vector Machine (SVM)—were employed to develop both single-stage and multi-stage yield prediction models. Results demonstrated that multi-stage models consistently outperformed their single-stage counterparts. Among the single-stage models, the RF model using T3-stage features achieved the highest accuracy (R² = 0.78, RMSEV = 7.47 t/hm²). The best performance among multi-stage models was obtained using a GBDT model constructed from a combination of DVI (T1), NDVI (T2), TDVI (T3), NDVI (T4), and SRPI (T5), yielding R² = 0.83 and RMSEV = 6.63 t/hm². This study highlights the advantages of integrating multi-temporal spectral features and advanced machine learning techniques for improving sugarcane yield prediction, providing a theoretical foundation and practical guidance for precision agriculture and harvest logistics.
(This article belongs to the Special Issue Proximal and Remote Sensing for Precision Crop Management II)
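
The two ingredients behind the GRD-r screening idea mentioned above are a gray relational degree (GRD) between each vegetation index series and yield, and the Pearson correlation coefficient r. The sketch below computes both for synthetic index series; Deng's standard gray relational formulation is used, and the final combination rule (here a simple product of GRD and |r|) is an assumption, not the paper's exact criterion.

```python
# Gray relational degree (Deng's formulation) plus Pearson r for candidate
# vegetation indices; data and the combined score are illustrative.
import numpy as np

def gray_relational_degree(ref, cmp_, rho=0.5):
    """GRD of `cmp_` with respect to `ref` after min-max normalization."""
    norm = lambda v: (v - v.min()) / (v.max() - v.min())
    delta = np.abs(norm(np.asarray(ref, float)) - np.asarray([*map(float, [])]) if False else
                   np.abs(norm(np.asarray(ref, float)) - norm(np.asarray(cmp_, float))))
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean()

rng = np.random.default_rng(1)
yield_t = 60 + 10 * rng.standard_normal(200)                # stand-in plot yields
vis = {"NDVI": yield_t * 0.01 + rng.normal(0, 0.05, 200),   # index tracking yield
       "DVI": rng.normal(0.3, 0.05, 200)}                   # weakly related index

for name, vi in vis.items():
    grd = gray_relational_degree(yield_t, vi)
    r = np.corrcoef(yield_t, vi)[0, 1]
    print(f"{name}: GRD = {grd:.3f}, |r| = {abs(r):.3f}, combined score = {grd * abs(r):.3f}")
```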

15 pages, 6334 KiB  
Article
Strawberry Fruit Deformity Detection and Symmetry Quantification Using Deep Learning and Geometric Feature Analysis
by Lili Jiang, Yunfei Wang, Haohao Yan, Yingzi Yin and Chong Wu
Horticulturae 2025, 11(6), 652; https://doi.org/10.3390/horticulturae11060652 - 9 Jun 2025
Cited by 1 | Viewed by 467
Abstract
The external appearance of strawberry fruits serves as a critical criterion for their commercial value and grading standards. However, current research primarily emphasizes ripeness and surface defects, with limited attention given to the quantitative analysis of geometric characteristics such as deformity and symmetry. To address this gap, this study proposes a comprehensive evaluation framework that integrates deep learning-based segmentation with geometric analysis for strawberry appearance quality assessment. First, an enhanced YOLOv11 segmentation model incorporating a Squeeze-and-Excitation attention mechanism was developed to enable high-precision extraction of individual fruits, achieving Precision, Recall, AP50, and F1 scores of 91.11%, 87.46%, 92.90%, and 88.45%, respectively. Second, a deformity quantification method was designed based on the number of deformity points (Nd), deformity rate (Rd), and spatial distance metrics (Gmin and Gmax). Experimental results demonstrated significant differences in Rd and Gmax between deformed and normal strawberries, indicating strong classification capability. Finally, principal component analysis (PCA) was employed to extract the primary axis direction, and morphological symmetry was quantitatively evaluated using Intersection over Union (IoU) and Area Difference Ratio (AreaD_Ratio). The results revealed that most samples fell within an IoU range of 0.6–0.8 and AreaD_Ratio below 0.4, indicating noticeable inter-individual differences in fruit symmetry. This study aims to establish a three-stage analytical framework—segmentation, deformity quantification, and symmetry evaluation—for assessing strawberry appearance quality, with the goal of supporting key applications in automated grading and precision quality inspection. Full article
(This article belongs to the Section Fruit Production Systems)
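
The symmetry step described above (PCA for the principal axis, then an IoU-based comparison) can be sketched directly on a binary mask: find the principal axis of the foreground pixels, mirror the mask about that axis, and compute the IoU between mask and mirror. The paper's exact AreaD_Ratio definition is not reproduced; the half-area imbalance used below is an assumed stand-in, and the slightly asymmetric elliptical test mask is synthetic.

```python
# Bilateral-symmetry sketch for a segmented fruit mask (synthetic test shape).
import numpy as np

def symmetry_metrics(mask):
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    c = pts.mean(axis=0)
    # principal axis = eigenvector of the covariance with the largest eigenvalue
    evals, evecs = np.linalg.eigh(np.cov((pts - c).T))
    u = evecs[:, np.argmax(evals)]
    # reflect every foreground pixel about the line through c with direction u
    d = pts - c
    refl = 2.0 * np.outer(d @ u, u) - d + c
    mirror = np.zeros_like(mask, dtype=bool)
    rx, ry = np.round(refl[:, 0]).astype(int), np.round(refl[:, 1]).astype(int)
    keep = (rx >= 0) & (rx < mask.shape[1]) & (ry >= 0) & (ry < mask.shape[0])
    mirror[ry[keep], rx[keep]] = True
    m = mask.astype(bool)
    iou = np.logical_and(m, mirror).sum() / np.logical_or(m, mirror).sum()
    # assumed stand-in for AreaD_Ratio: imbalance of the two halves across the axis
    side = np.sign(d @ np.array([-u[1], u[0]]))
    area_ratio = abs((side > 0).sum() - (side < 0).sum()) / len(side)
    return iou, area_ratio

yy, xx = np.mgrid[0:200, 0:200]
b_axis = np.where(yy > 100, 45.0, 35.0)     # fatter top, thinner bottom -> asymmetric
mask = ((xx - 100) / 60.0) ** 2 + ((yy - 100) / b_axis) ** 2 <= 1.0
iou, area_ratio = symmetry_metrics(mask)
print(f"IoU = {iou:.3f}, half-area difference ratio = {area_ratio:.3f}")
```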

24 pages, 1839 KiB  
Article
Relic Gravitational Waves in the Noncommutative Foliated Riemannian Quantum Gravity
by César A. Zen Vasconcellos, Peter O. Hess, José A. de Freitas Pacheco, Fridolin Weber, Remo Ruffini, Dimiter Hadjimichef, Moisés Razeira, Benno August Ludwig Bodmann, Marcelo Netz-Marzola, Geovane Naysinger, Rodrigo Fraga da Silva and João G. G. Gimenez
Universe 2025, 11(6), 179; https://doi.org/10.3390/universe11060179 - 31 May 2025
Viewed by 911
Abstract
We present a study of relic gravitational waves based on a foliated gauge field theory defined over a spacetime endowed with a noncommutative algebraic–geometric structure. As an ontological extension of general relativity—concerning manifolds, metrics, and fiber bundles—the conventional space and time coordinates, typically treated as classical numbers, are replaced by complementary quantum dual fields. Within this framework, consistent with the Bekenstein criterion and the Hawking–Hertog multiverse conception, singularities merge into a helix-like cosmic scale factor that encodes the topological transition between the contraction and expansion phases of the universe analytically continued into the complex plane. This scale factor captures the essence of an intricate topological quantum-leap transition between two phases of the branching universe: a contraction phase preceding the now-surpassed conventional concept of a primordial singularity and a subsequent expansion phase, whose transition region is characterized by a Riemannian topological foliated structure. The present linearized formulation, based on a slight gravitational field perturbation, also reveals a high sensitivity of relic gravitational wave amplitudes to the primordial matter and energy content during the universe’s phase transition. It further predicts stochastic homogeneous distributions of gravitational wave intensities arising from the interplay of short- and long-spacetime effects within the non-commutative algebraic framework. These results align with the anticipated future observations of relic gravitational waves, expected to pervade the universe as a stochastic, homogeneous background. Full article
(This article belongs to the Section Foundations of Quantum Mechanics and Quantum Gravity)

16 pages, 2065 KiB  
Article
An Information-Extreme Algorithm for Universal Nuclear Feature-Driven Automated Classification of Breast Cancer Cells
by Taras Savchenko, Ruslana Lakhtaryna, Anastasiia Denysenko, Anatoliy Dovbysh, Sarah E. Coupland and Roman Moskalenko
Diagnostics 2025, 15(11), 1389; https://doi.org/10.3390/diagnostics15111389 - 30 May 2025
Viewed by 499
Abstract
Background/Objectives: Breast cancer diagnosis heavily relies on histopathological assessment, which is prone to subjectivity and inefficiency, especially with whole-slide imaging (WSI). This study addressed these limitations by developing an automated breast cancer cell classification algorithm using an information-extreme machine learning approach and universal cytological features, aiming for objective and generalized histopathological diagnosis. Methods: Digitized histological images were processed to identify hyperchromatic cells. A set of 21 cytological features (10 geometric and 11 textural), chosen for their potential universality across cancers, were extracted from individual cells. These features were then used to classify cells as normal or malignant using an information-extreme algorithm. This algorithm optimizes an information criterion within a binary Hamming space to achieve robust recognition with minimal input features. The architectural innovation lies in the application of this information-extreme approach to cytological feature analysis for cancer cell classification. Results: The algorithm’s functional efficiency was evaluated on a dataset of 176 labeled cell images, yielding promising results: an accuracy of 89%, a precision of 85%, a recall of 84%, and an F1-score of 88%. These metrics demonstrate a balanced and effective model for automated breast cancer cell classification. Conclusions: The proposed information-extreme algorithm utilizing universal cytological features offers a potentially objective and computationally efficient alternative to traditional methods and may mitigate some limitations of deep learning in histopathological analysis. Future work will focus on validating the algorithm on larger datasets and exploring its applicability to other cancer types. Full article

25 pages, 3473 KiB  
Article
Stability-Preserving Model Reduction and Interpolation for Vibration Optimization in Beam-like Structures
by István Gozse and Róbert Szabolcsi
Appl. Sci. 2025, 15(10), 5669; https://doi.org/10.3390/app15105669 - 19 May 2025
Viewed by 328
Abstract
High-fidelity finite element models offer precise vibration analysis; however, their use in iterative optimization is computationally demanding. To overcome this, a stability-preserving framework for parametric model reduction and interpolation is developed. The interpolation approach is based on geometrically interpolating the poles of a system. As the scalar parameter varies, the poles follow specific trajectories, but their samples are only known at discrete points. Since the exact paths between these samples are unknown, a set of artificial trajectories is proposed. It is demonstrated that this set of trajectories guarantees stability and a bounded deviation in the H-infinity sense for multi-input multi-output linear time-invariant systems. Furthermore, all the trajectories that satisfy this bound are identified and presented. This approach introduces a new sampling criterion that assesses the sampling of the parameter space for second-order models. In order to apply this criterion, a practical model order reduction strategy tailored to Bernoulli beam dynamics is presented. A numerical case study on a cantilever beam demonstrates the model order reduction method.

31 pages, 11135 KiB  
Article
Method to Select Variables for Estimating the Parameters of Equations That Describe Average Vehicle Travel Speed in Downtown City Areas
by José Gerardo Carrillo-González, Guillermo López-Maldonado, Karla Lorena Sánchez-Sánchez and Yuri Reyes
Sustainability 2025, 17(10), 4441; https://doi.org/10.3390/su17104441 - 13 May 2025
Viewed by 443
Abstract
A lack of public vehicular traffic data for a city limits our understanding of the traffic occurring in the street networks of that city; however, there are free tools to extract street network graphs from digital maps and to assess the static properties associated with those graphs. This study proposes a two-stage modeling method to describe dynamic traffic data with static street network features. A quadratic polynomial is used to fit the average travel speed (ATS) pattern observed in the city center. Then, the relationship between the polynomial parameters and street network variables is analyzed through multiple linear regression. Descriptive geometric and topological measurements of downtown areas are obtained with the OSMnx tool (from OpenStreetMap), and with these data, independent variables are defined. The speed of vehicles, assessed every 15 min (from 6:00 a.m. to 10:00 p.m.) on the downtown street networks of twelve major cities, is obtained with the distance_matrix service of GoogleMaps, and with these data, the ATS (the dependent variable) is calculated. The ATS (presenting a U-shape) is modeled with a polynomial equation of order two, so there are three parameters for each city; in turn, each parameter is modeled with a multiple linear regression equation with the independent variables. For training purposes, the ATS equation parameters of ten cities are calculated, and the parameters, in turn, are explained with the proposed method. For validation purposes, the parameters of two cities not considered in the training process are calculated with the multiple linear regression equations. The ATS equation parameters of the twelve cities are correctly modeled so that each city’s ATS can be adequately described. It was concluded that the method selects the independent variables that are suitable to explain the ATS equation parameters. In addition, with the Akaike information criterion, the variable selection case presenting the best trade-off between accuracy and complexity is identified. Full article
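
The two-stage idea described above can be sketched compactly: fit a quadratic (U-shaped) polynomial to each city's 15-min average travel speed (ATS) series, then regress each of the three polynomial parameters on static street-network variables with ordinary least squares. In the sketch below, the ATS series and the two network variables are synthetic placeholders rather than OSMnx or Google Maps output, and the number of cities and variables is arbitrary.

```python
# Two-stage sketch: per-city quadratic fit of ATS, then multiple linear
# regression of the quadratic parameters on street-network variables.
import numpy as np

rng = np.random.default_rng(42)
hours = np.arange(6.0, 22.0, 0.25)                # 15-min bins, 06:00-22:00

n_cities = 10
X_net = rng.normal(size=(n_cities, 2))            # stand-in street-network variables
coeffs = np.empty((n_cities, 3))                  # one (a, b, c) triple per city
for i in range(n_cities):
    # synthetic U-shaped ATS pattern for city i (placeholder for measured speeds)
    ats = 35 - 8 * np.exp(-((hours - 14) / 5) ** 2) + rng.normal(0, 0.8, hours.size)
    coeffs[i] = np.polyfit(hours, ats, deg=2)     # stage 1: quadratic fit per city

# stage 2: one multiple linear regression per polynomial parameter
A = np.column_stack([np.ones(n_cities), X_net])   # intercept + network variables
betas, *_ = np.linalg.lstsq(A, coeffs, rcond=None)
print("regression coefficients (rows: intercept, var1, var2; cols: a, b, c):")
print(betas)
```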
