Search Results (187)

Search Parameters:
Keywords = ambiguity elimination

22 pages, 14822 KiB  
Article
Partial Ambiguity Resolution Strategy for Single-Frequency GNSS RTK/INS Tightly Coupled Integration in Urban Environments
by Dashuai Chai, Xiqi Wang, Yipeng Ning and Wengang Sang
Electronics 2025, 14(13), 2712; https://doi.org/10.3390/electronics14132712 - 4 Jul 2025
Abstract
Single-frequency global navigation satellite system/inertial navigation system (GNSS/INS) integration has wide application prospects in urban environments; however, correct integer ambiguity resolution is a major challenge in GNSS-blocked environments. In this paper, a sequential partial ambiguity resolution (PAR) strategy for GNSS/INS tightly coupled integration is presented, based on the INS-aided robust posteriori residual, elevation angle, and azimuth in the body frame. First, a satellite is eliminated if the maximum absolute value of its robust posteriori residuals exceeds the set threshold. Otherwise, satellites with an elevation angle of less than or equal to 35° are successively eliminated, starting from the lowest. If all remaining satellites have elevation angles greater than 35°, these satellites are divided into quadrants based on their azimuths calculated in the body frame. The satellite with the maximum azimuth in each quadrant is selected as a candidate satellite; the candidate satellites are eliminated one by one, and the remaining satellites are used to calculate the position dilution of precision (PDOP). Finally, the candidate satellite whose removal yields the lowest PDOP is eliminated. Two sets of vehicle-borne data from a low-cost GNSS/INS integrated system are used to analyze the performance of the proposed algorithm. These experiments demonstrate that the proposed algorithm achieves the highest ambiguity fixing rates among all the designed PAR methods: 99.40% and 98.74% for the two data sets, respectively. Additionally, among all the methods compared in this paper, the proposed algorithm demonstrates the best positioning performance in GNSS-blocked environments.
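The quadrant-and-PDOP elimination step above can be sketched as follows. This is a simplified illustration of eliminating the candidate whose removal leaves the best geometry, not the authors' implementation; the satellite names and toy constellation are hypothetical.

```python
import numpy as np

def pdop(unit_vectors):
    """PDOP from satellite unit line-of-sight vectors (one per row).

    PDOP = sqrt(trace of the position block of (G^T G)^-1), where each
    row of the geometry matrix G is [ux, uy, uz, 1].
    """
    G = np.hstack([np.asarray(unit_vectors, float),
                   np.ones((len(unit_vectors), 1))])
    Q = np.linalg.inv(G.T @ G)              # cofactor matrix
    return float(np.sqrt(np.trace(Q[:3, :3])))

def eliminate_candidate(sats, candidates):
    """Drop the candidate whose removal leaves the lowest PDOP.

    sats: dict name -> unit line-of-sight vector; candidates: names
    eligible for elimination (one per azimuth quadrant in the paper).
    """
    best_name, best_pdop = None, float("inf")
    for name in candidates:
        rest = [v for k, v in sats.items() if k != name]
        p = pdop(rest)
        if p < best_pdop:
            best_name, best_pdop = name, p
    remaining = {k: v for k, v in sats.items() if k != best_name}
    return best_name, remaining

# Toy constellation (hypothetical unit vectors from receiver to satellite).
sats = {
    "G01": (0.0, 0.0, 1.0),
    "G02": (0.8, 0.0, 0.6),
    "G03": (-0.8, 0.0, 0.6),
    "G04": (0.0, 0.8, 0.6),
    "G05": (0.0, -0.8, 0.6),
    "G06": (0.5, 0.5, 0.707),
}
removed, remaining = eliminate_candidate(sats, ["G05", "G06"])
```

At least four well-distributed satellites must remain after each removal for the normal matrix to stay invertible.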

26 pages, 21316 KiB  
Article
MultS-ORB: Multistage Oriented FAST and Rotated BRIEF
by Shaojie Zhang, Yinghui Wang, Jiaxing Ma, Jinlong Yang, Liangyi Huang and Xiaojuan Ning
Mathematics 2025, 13(13), 2189; https://doi.org/10.3390/math13132189 - 4 Jul 2025
Abstract
Feature matching is crucial in image recognition. However, blurring caused by illumination changes often leads to deviations in local appearance-based similarity, resulting in ambiguous or false matches—an enduring challenge in computer vision. To address this issue, this paper proposes a method named MultS-ORB (Multistage Oriented FAST and Rotated BRIEF). The proposed method preserves all the advantages of the traditional ORB algorithm while significantly improving feature matching accuracy under illumination-induced blurring. Specifically, it first generates initial feature matching pairs using KNN (K-Nearest Neighbors) based on descriptor similarity in the Hamming space. Then, by introducing a local motion smoothness constraint, GMS (Grid-Based Motion Statistics) is applied to filter and optimize the matches, effectively reducing the interference caused by blurring. Afterward, the PROSAC (Progressive Sampling Consensus) algorithm is employed to further eliminate false correspondences resulting from illumination changes. This multistage strategy yields more accurate and reliable feature matches. Experimental results demonstrate that for blurred images affected by illumination changes, the proposed method improves matching accuracy by an average of 75%, reduces average error by 33.06%, and decreases RMSE (Root Mean Square Error) by 35.86% compared to the traditional ORB algorithm.
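The first stage above — KNN matching of binary descriptors in Hamming space — can be sketched in plain Python. This is a minimal stand-in: the ratio test used here is a generic filter for ambiguous matches, not the paper's GMS/PROSAC stages, and the one-byte descriptors are toy data.

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors (bytes)."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def knn_match(query, train, k=2):
    """For each query descriptor, the k nearest train descriptors by Hamming distance.

    Returns, per query, a list of (query_idx, train_idx, distance) tuples.
    """
    matches = []
    for qi, q in enumerate(query):
        ranked = sorted(range(len(train)), key=lambda ti: hamming(q, train[ti]))
        matches.append([(qi, ti, hamming(q, train[ti])) for ti in ranked[:k]])
    return matches

def ratio_filter(matches, ratio=0.75):
    """Keep matches whose best distance is well below the second best (ambiguity test)."""
    kept = []
    for m in matches:
        if len(m) == 2 and m[0][2] < ratio * m[1][2]:
            kept.append(m[0])
    return kept
```

A real pipeline would run this over 256-bit ORB descriptors and then apply motion-consistency filtering to the survivors.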
(This article belongs to the Topic Intelligent Image Processing Technology)

18 pages, 2721 KiB  
Article
Experimental Study on Glass Deformation Calculation Using the Holographic Interferometry Double-Exposure Method
by Yucheng Li, Yang Zhang, Deyu Jia, Song Gao and Muqun Zhang
Appl. Sci. 2025, 15(12), 6938; https://doi.org/10.3390/app15126938 - 19 Jun 2025
Abstract
This study systematically compares the metrological characteristics of single-exposure, double-exposure, and continuous-exposure holographic interferometry for micro-deformation detection. The results demonstrate that the double-exposure method achieves the best balance across critical performance metrics through its ideal cosine fringe field modulation. This approach (1) eliminates object wave amplitude interference via dual-exposure superposition, establishing a submicron linear mapping between fringe displacement and deformation amplitude; (2) introduces a fringe gradient-based direction detection algorithm that resolves deformation vector ambiguity; and (3) implements an error-compensated fusion framework integrating theoretical modeling, MATLAB 2015b simulations, and experimental validation. Experiments on drilled glass samples confirm the method's superior performance in terms of near-ideal fringe contrast (1.0) and noise suppression (0.06). The technique significantly improves real-time capability and anti-interference robustness in micro-deformation monitoring, providing a validated solution for MEMS and material mechanics characterization.
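The linear mapping between fringe order and deformation mentioned above can be illustrated with the standard double-exposure relation for normal illumination and observation, where each fringe corresponds to half a wavelength of out-of-plane displacement. The He–Ne wavelength default is an assumption for illustration, not a value from the paper.

```python
def deformation_from_fringes(fringe_order, wavelength_nm=632.8):
    """Out-of-plane deformation (nm) from a counted fringe order N.

    Assumes normal illumination and observation, so d = N * lambda / 2:
    each bright fringe marks lambda/2 of deformation.
    """
    return fringe_order * wavelength_nm / 2.0
```

For example, four fringes at 632.8 nm correspond to roughly 1.27 µm of deformation, which is the submicron-scale linearity the abstract refers to.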

24 pages, 6561 KiB  
Article
Simultaneous Vibration and Nonlinearity Compensation for One-Period Triangular FMCW Ladar Signal Based on MSST
by Wei Li, Ruihua Shi, Qinghai Dong, Juanying Zhao, Bingnan Wang and Maosheng Xiang
Remote Sens. 2025, 17(10), 1689; https://doi.org/10.3390/rs17101689 - 11 May 2025
Abstract
When frequency-modulated continuous-wave (FMCW) laser radar (Ladar) is employed for three-dimensional imaging, the echo signal is susceptible to modulation nonlinearity and platform vibration due to the modulation scheme and the short wavelength. These effects cause main-lobe widening, side-lobe elevation, and positional shift, which degrade distance detection accuracy. To solve these problems, this paper proposes a compensation method combining the multiple synchrosqueezing transform (MSST), equal-phase-interval resampling, and the high-order ambiguity function (HAF). Firstly, variational mode decomposition (VMD) is applied to the optical prism signal to eliminate low-frequency noise and harmonic peaks, and MSST is used to extract the time–frequency curve of the optical prism. The nonlinearity in the transmitted signal is estimated by two-step integration, and an internal calibration signal containing the nonlinearity is constructed at a higher sampling rate to resample the actual signal at equal phase intervals. Then, HAF compensates for high-order vibration and the residual phase error after resampling. Finally, symmetrical triangle-wave modulation is used to remove constant-speed vibration. Verified on actual data, the proposed method enhances the main lobe and suppresses the side lobes by about 1.5 dB for a strong-reflection target signal. Natural-target peaks are also enhanced while the remaining peaks are suppressed, which helps to extract an accurate target distance.
(This article belongs to the Section Engineering Remote Sensing)

17 pages, 3403 KiB  
Article
Reduced Genetic Diversity of Key Fertility and Vector Competency Related Genes in Anopheles gambiae s.l. Across Sub-Saharan Africa
by Fatoumata Seck, Mouhamadou Fadel Diop, Karim Mané, Amadou Diallo, Idrissa Dieng, Moussa Namountougou, Abdoulaye Diabate, Alfred Amambua-Ngwa, Ibrahima Dia and Benoit Sessinou Assogba
Genes 2025, 16(5), 543; https://doi.org/10.3390/genes16050543 - 30 Apr 2025
Abstract
Background: Insecticide resistance challenges vector control efforts towards malaria elimination, prompting the development of complementary tools. Targeting the genes involved in mosquito fertility and susceptibility to Plasmodium with small-molecule inhibitors is a promising alternative for curbing the vector population and driving transmission down. However, such an approach requires comprehensive knowledge of the genetic diversity of the targeted genes to ensure the broad efficacy of new tools across natural vector populations. Methods: Four fertility and parasite susceptibility genes were identified from a systematic review of the literature. The Single Nucleotide Polymorphisms (SNPs) found within the regions spanned by these four genes, genotyped across 2784 wild-caught Anopheles gambiae s.l. from 19 sub-Saharan African (SSA) countries, were extracted from the whole-genome SNP data of the Ag1000G project (Ag3.0). The population genetic analysis on gene-specific data included determination of the population structure, estimation of the level of differentiation between populations, evaluation of linkage between the non-synonymous SNPs (nsSNPs), and several statistical tests. Results: As potential targets for small-molecule inhibitors to reduce malaria transmission, our set of four genes associated with Anopheles fertility and susceptibility to Plasmodium comprises the mating-induced stimulator of oogenesis protein (MISO, AGAP002620), Vitellogenin (Vg, AGAP004203), Lipophorin (Lp, AGAP001826), and Haem-peroxidase 15 (HPX15, AGAP013327). The analyses performed on these potential targets revealed that the genes are conserved within SSA populations of An. gambiae s.l. The overall low Fst values and weak clustering in the principal component analysis between species indicated low genetic differentiation at all the genes (MISO, Vg, Lp and HPX15).
The low nucleotide diversity (<0.10), negative Tajima's D values, and heterozygosity analysis provided ecological insights into the purifying selection that acts to remove deleterious mutations, maintaining genetic diversity at low levels within the populations. None of the MISO nsSNPs were in linkage disequilibrium, whereas a few weakly linked nsSNPs with ambiguous haplotyping were detected at the other genes. Conclusions: These integrated findings on the genetic features of biological factors of major malaria vectors across natural populations offer new insights for developing sustainable malaria control tools. These loci were reasonably conserved, allowing for the design of effective targeting with small-molecule inhibitors towards controlling vector populations and lowering global malaria transmission.
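For reference, the nucleotide diversity statistic (π) reported above is the mean number of pairwise differences per site across all pairs of aligned sequences. A minimal sketch, with toy sequences rather than Ag1000G data:

```python
from itertools import combinations

def nucleotide_diversity(sequences):
    """Nucleotide diversity pi: mean pairwise differences per site.

    Sequences must be aligned and of equal length; every unordered
    pair of sequences contributes its per-site mismatch count.
    """
    pairs = list(combinations(sequences, 2))
    length = len(sequences[0])
    total_diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return total_diffs / (len(pairs) * length)
```

Values near zero, as in the abstract, indicate that almost all sampled haplotypes are identical at the locus.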
(This article belongs to the Section Microbial Genetics and Genomics)

23 pages, 6006 KiB  
Article
Collaborative Modeling of BPMN and HCPN: Formal Mapping and Iterative Evolution of Process Models for Scenario Changes
by Zhaoqi Zhang, Feng Ni, Jiang Liu, Niannian Chen and Xingjun Zhou
Information 2025, 16(4), 323; https://doi.org/10.3390/info16040323 - 18 Apr 2025
Abstract
Dynamic and changeable business scenarios pose significant challenges to the adaptability and verifiability of process models. Despite its widespread adoption as an ISO-standard modeling language, Business Process Model and Notation (BPMN) faces inherent limitations in formal semantics and verification capabilities, hindering the mathematical validation of process evolution behaviors under scenario changes. To address these challenges, this paper proposes a collaborative modeling framework integrating BPMN with hierarchical colored Petri nets (HCPNs), enabling efficient iterative evolution and correctness verification of process changes through a formal mapping and a localized evolution mechanism. First, hierarchical mapping rules are established with subnet-based modular decomposition, transforming BPMN elements into an executable HCPN model and effectively resolving semantic ambiguities; second, atomic evolution operations (addition, deletion, and replacement) are defined to achieve partial HCPN updates, eliminating the computational overhead of global remapping. Furthermore, an automated verification pipeline is constructed by analyzing state spaces, validating critical properties such as deadlock freedom and behavioral reachability. Evaluated on an intelligent AI-driven service scenario involving multi-gateway processes, the framework demonstrates behavioral effectiveness. This work provides a pragmatic solution for scenario-driven process evolution in domains requiring agile iteration, such as fintech and smart manufacturing.
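The atomic evolution operations named above (addition, deletion, replacement) can be sketched on a toy Petri-net structure. The data model and function names are illustrative, not the paper's API, and colors/hierarchy are omitted.

```python
def make_net():
    """A bare Petri-net skeleton: places, transitions, and directed arcs."""
    return {"places": set(), "transitions": set(), "arcs": set()}

def add_transition(net, name, inputs, outputs):
    """Atomic 'addition': insert one transition and only its incident arcs."""
    net["transitions"].add(name)
    net["places"].update(inputs)
    net["places"].update(outputs)
    net["arcs"].update((p, name) for p in inputs)    # place -> transition
    net["arcs"].update((name, p) for p in outputs)   # transition -> place

def delete_transition(net, name):
    """Atomic 'deletion': remove the transition and its incident arcs."""
    net["transitions"].discard(name)
    net["arcs"] = {a for a in net["arcs"] if name not in a}

def replace_transition(net, old, new, inputs, outputs):
    """Atomic 'replacement' as deletion followed by addition."""
    delete_transition(net, old)
    add_transition(net, new, inputs, outputs)
```

The point of such localized updates is that only the touched subnet changes, so the rest of the mapped model (and its verified properties) need not be recomputed.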

22 pages, 5056 KiB  
Article
SAAS-Net: Self-Supervised Sparse Synthetic Aperture Radar Imaging Network with Azimuth Ambiguity Suppression
by Zhiyi Jin, Zhouhao Pan, Zhe Zhang and Xiaolan Qiu
Remote Sens. 2025, 17(6), 1069; https://doi.org/10.3390/rs17061069 - 18 Mar 2025
Abstract
Sparse Synthetic Aperture Radar (SAR) imaging has garnered significant attention due to its ability to suppress azimuth ambiguity in under-sampled conditions, making it particularly useful for high-resolution wide-swath (HRWS) SAR systems. Traditional compressed-sensing-based sparse SAR imaging algorithms are hindered by the range–azimuth coupling induced by range cell migration (RCM), which results in high computational cost and limits their applicability to large-scale imaging scenarios. To address this challenge, the approximated observation-based sparse SAR imaging algorithm was developed, which decouples the range and azimuth directions, significantly reducing computational and temporal complexity to match the performance of conventional matched-filtering algorithms. However, this method requires iterative processing and manual parameter adjustment. In this paper, we propose a novel deep neural network-based sparse SAR imaging method, the Self-supervised Azimuth Ambiguity Suppression Network (SAAS-Net). Unlike traditional iterative algorithms, SAAS-Net learns the parameters directly from data, eliminating the need for manual tuning. This approach not only improves imaging quality but also accelerates the imaging process. Additionally, SAAS-Net retains the core advantage of sparse SAR imaging: azimuth ambiguity suppression in under-sampled conditions. The method introduces self-supervision to achieve azimuth ambiguity suppression without altering the hardware architecture. Simulations and real-data experiments using Gaofen-3 validate the effectiveness and superiority of the proposed approach.

16 pages, 294 KiB  
Article
The Principle of Maximum Conformality Correctly Resolves the Renormalization-Scheme-Dependence Problem
by Jiang Yan, Stanley J. Brodsky, Leonardo Di Giustino, Philip G. Ratcliffe, Shengquan Wang and Xinggang Wu
Symmetry 2025, 17(3), 411; https://doi.org/10.3390/sym17030411 - 9 Mar 2025
Cited by 3
Abstract
In this paper, we clarify a serious misinterpretation and consequent misuse of the Principle of Maximum Conformality (PMC), which also can serve as a mini-review of PMC. In a recently published article, P. M. Stevenson has claimed that “the PMC is ineffective and does nothing to resolve the renormalization-scheme-dependence problem”, concluding incorrectly that the success of PMC predictions is due to the PMC being a “laborious, ad hoc, and back-door” version of the Principle of Minimal Sensitivity (PMS). We show that such conclusions are incorrect, deriving from a misinterpretation of the PMC and an overestimation of the applicability of the PMS. The purpose of the PMC is to achieve precise fixed-order pQCD predictions, free from conventional renormalization schemes and scale ambiguities. We demonstrate that the PMC predictions satisfy all the self-consistency conditions of the renormalization group and standard renormalization-group invariance; the PMC predictions are thus independent of any initial choice of renormalization scheme and scale. The scheme independence of the PMC is also ensured by commensurate scale relations, which relate different observables to each other. Moreover, in the Abelian limit, the PMC dovetails into the well-known Gell-Mann–Low framework, a method universally revered for its precision in QED calculations. Due to the elimination of factorially divergent renormalon terms, the PMC series not only attains a convergence behavior far superior to that of its conventional counterparts but also deftly curtails any residual scale dependence caused by the unknown higher-order terms. This refined convergence, coupled with its robust suppression of residual uncertainties, furnishes a sound and reliable foundation for estimating the contributions from unknown higher-order terms. 
Anchored in the bedrock of standard renormalization-group invariance, the PMC simultaneously eradicates the factorial divergences and eliminates superfluous systematic errors, which in turn provides a good foundation for achieving high-precision pQCD predictions. Consequently, owing to its rigorous theoretical underpinnings, the PMC is eminently applicable to virtually all high-energy hadronic processes.
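The scale-setting idea described above can be summarized schematically. This is a textbook-level sketch of PMC scale setting with illustrative coefficients, not equations taken from this paper:

```latex
% Conventional fixed-order series at an arbitrary renormalization scale \mu
% (a \equiv \alpha_s/\pi), with {\beta_i}-dependent terms left explicit:
\rho(Q) \;=\; r_1\, a(\mu) \;+\; \bigl[\, r_2 + \beta_0\, r_{2,1} \,\bigr]\, a^2(\mu) \;+\; \cdots

% PMC: the {\beta_i}-terms are absorbed into the running coupling, fixing a
% scale Q_i at each order and leaving scheme-independent conformal coefficients:
\rho(Q) \;=\; \hat r_1\, a(Q_1) \;+\; \hat r_2\, a^2(Q_2) \;+\; \cdots
```

Because the $\hat r_i$ and $Q_i$ carry no residual dependence on the initial choice of $\mu$, the truncated series satisfies standard renormalization-group invariance, which is the property the abstract contrasts with the PMS.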
(This article belongs to the Section Physics)
20 pages, 58910 KiB  
Article
A 3D Blur Suppression Method for High-Resolution and Wide-Swath Blurred Images Based on Estimating and Eliminating Defocused Point Clouds
by Yuling Liu, Fubo Zhang, Longyong Chen and Tao Jiang
Remote Sens. 2025, 17(5), 928; https://doi.org/10.3390/rs17050928 - 5 Mar 2025
Abstract
Traditional single-channel Synthetic Aperture Radar (SAR) cannot achieve high-resolution and wide-swath (HRWS) imaging due to the constraint of the minimum antenna area. Distributed HRWS SAR can realize HRWS imaging and also provides resolution in the height dimension by arranging multiple satellites in the elevation direction. Nevertheless, due to the excessively high pulse repetition frequency (PRF) of the distributed SAR system, range ambiguity will occur in large detection scenarios. When 3D-imaging processing is performed directly on SAR images with range ambiguity, focused and blurred point clouds coexist in the generated 3D point clouds, which degrades their quality. To address this problem, this paper proposes a 3D blur suppression method for HRWS blurred images, which estimates and eliminates defocused point clouds based on focused targets. The echoes with range ambiguity are focused in the near area and the far area, respectively. Then, through image registration, amplitude and phase correction, and height-direction focusing, the point clouds of the near area and the far area are obtained. The strongest points in the two sets of point clouds are iteratively selected to estimate and eliminate the defocused point clouds in the other set until all the ambiguity is eliminated. Simulation experiments based on airborne measured data verified this method's capability to achieve HRWS 3D blur suppression.
(This article belongs to the Topic Radar Signal and Data Processing with Applications)

19 pages, 946 KiB  
Article
Efficient Ensemble of Deep Neural Networks for Multimodal Punctuation Restoration and the Spontaneous Informal Speech Dataset
by Homayoon Beigi and Xing Yi Liu
Electronics 2025, 14(5), 973; https://doi.org/10.3390/electronics14050973 - 28 Feb 2025
Abstract
Punctuation restoration plays an essential role in the postprocessing procedure of automatic speech recognition, but model efficiency is a key requirement for this task. To that end, we present EfficientPunct, an ensemble method with a multimodal time-delay neural network that outperforms the current best model by 1.0 F1 point while using less than a tenth of its network parameters for inference. This work further streamlines a speech recognizer and a BERT implementation to efficiently output hidden-layer acoustic embeddings and text embeddings in the context of punctuation restoration. Forced alignment and temporal convolutions are used to eliminate the need for attention-based fusion, greatly increasing computational efficiency and improving performance. EfficientPunct sets a new state of the art with an ensemble that weighs BERT's purely language-based predictions slightly more than the multimodal network's predictions. Beyond efficiency, another important challenge in the field to date has been that punctuation restoration models are evaluated almost solely on well-structured, scripted corpora, whereas real-world ASR systems and postprocessing pipelines typically handle spontaneous speech with significant irregularities, stutters, and deviations from perfect grammar. To address this discrepancy, we also introduce SponSpeech, a punctuation restoration dataset derived from informal speech sources, which includes punctuation and casing information. In addition to publicly releasing the dataset, we provide a filtering pipeline that can be used to generate more data; it examines the quality of both the speech audio and the transcription text. A challenging test set is also carefully constructed, aimed at evaluating models' ability to leverage audio information to predict otherwise grammatically ambiguous punctuation. SponSpeech is publicly available, along with all code for dataset building and model runs.
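The ensemble weighting described above — language-only predictions weighed slightly more than multimodal ones — amounts to a convex combination of per-class probabilities. The 0.55 weight below is illustrative, not the paper's tuned value, and the class ordering is hypothetical.

```python
def ensemble_punctuation(p_text, p_multimodal, text_weight=0.55):
    """Pick a punctuation class from two per-class probability distributions.

    p_text:       probabilities from the language-only (BERT-style) model
    p_multimodal: probabilities from the multimodal network
    Returns the argmax class index of the weighted combination.
    """
    assert abs(sum(p_text) - 1) < 1e-9 and abs(sum(p_multimodal) - 1) < 1e-9
    combined = [text_weight * a + (1 - text_weight) * b
                for a, b in zip(p_text, p_multimodal)]
    return max(range(len(combined)), key=combined.__getitem__)
```

With classes, say, (period, comma, none), a text model favoring "period" can still be overruled when the multimodal model is confident enough in "comma".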
(This article belongs to the Special Issue Future Technologies for Data Management, Processing and Application)

20 pages, 436 KiB  
Article
Data-Driven Distributionally Robust Optimal Power Flow for Distribution Grids Under Wasserstein Ambiguity Sets
by Fangzhou Liu, Jincheng Huo, Fengfeng Liu, Dongliang Li and Dong Xue
Electronics 2025, 14(4), 822; https://doi.org/10.3390/electronics14040822 - 19 Feb 2025
Abstract
The increasing integration of distributed energy resources into distribution feeders introduces significant uncertainties, stemming from volatile renewable sources and other fluctuating electrical elements, which pose substantial challenges for optimal power flow (OPF) analysis. This paper introduces a data-driven distributionally robust chance-constrained (DRCC) approach to address the stochastic Alternating Current (AC) OPF problem in distribution grids, where the exact probability distributions of uncertainties are unknown. The proposed method utilizes the Wasserstein metric to construct an ambiguity set based on empirical distributions derived from historical data, eliminating the need for prior knowledge of the underlying probability distributions. Notably, the size of the Wasserstein ball within the ambiguity set is inversely related to the volume of available data, allowing for adaptive robustness. Moreover, a computationally efficient reformulation of the DRCC-OPF model is developed using the LinDistFlow AC power flow approximation. The effectiveness and precision of the developed method are validated through multiple IEEE distribution test cases, demonstrating higher reliability of the security constraints compared with other methods. As more data become available, this reliability is systematically and securely adjusted to achieve greater economic efficiency.
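The inverse relation between the Wasserstein-ball radius and the volume of data can be illustrated with a common concentration-style radius rule that shrinks as O(1/√N). The constant and functional form below are illustrative; the paper's exact radius formula may differ.

```python
import math

def wasserstein_radius(n_samples, confidence=0.95, c=1.0):
    """Illustrative radius for a Wasserstein ambiguity ball around the
    empirical distribution.

    Shrinks as O(1/sqrt(N)) with the sample count; c stands in for
    problem-dependent factors (dimension, support size, metric scaling).
    """
    return c * math.sqrt(math.log(1.0 / (1.0 - confidence)) / n_samples)
```

Quadrupling the historical data halves the radius, which is the "adaptive robustness" trade-off the abstract describes: more data, a tighter ambiguity set, and less conservative dispatch.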

22 pages, 9566 KiB  
Article
IDS Standard and bSDD Service as Tools for Automating Information Exchange and Verification in Projects Implemented in the BIM Methodology
by Magdalena Kładź and Andrzej Szymon Borkowski
Buildings 2025, 15(3), 378; https://doi.org/10.3390/buildings15030378 - 25 Jan 2025
Abstract
The era of openBIM is ongoing, and the open standards IDS (Information Delivery Specification) and bSDD (buildingSMART Data Dictionary) are significantly impacting the automation of information exchange and verification in projects by using predefined data, enabling quick updates, and combining it with other data. IDS and bSDD complement the widely used open IFC (Industry Foundation Classes) format, which eliminates the need to purchase dedicated hardware and software to work with native files from different sources. As a result, external assignments or internal tasks can precisely define the desired product, speeding up the entire process carried out according to the BIM (Building Information Modeling) methodology, reducing the number of questions about ambiguous requirements, and eliminating the need for continuous feedback on the model. Both files can be used on the developer's side as an attachment to BIM documents, as well as on the construction site or during the bidding process. Digital IDS and bSDD files can be interpreted not only by humans but also by machines, bringing added value and usability. An identified research gap is the lack of a clear procedure for applying these standards and, consequently, the common problem of having to purchase software to check the model's information content. This article demonstrates the possibility of creating IDS and bSDD files in tools based on filling in specific fields and their interrelation, as well as their practical use in the process of verifying the information content of BIM models. By adopting open standards, teams can improve communication, increase productivity, and ensure continuity in data exchanges.
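A minimal IDS file of the kind discussed above might look as follows. This is abbreviated and illustrative: the element names follow the buildingSMART IDS schema in spirit, and the property chosen is an example; a real file should be validated against the official XSD.

```xml
<!-- Illustrative, abbreviated IDS specification: every wall must carry
     a fire-rating property. Validate against the buildingSMART IDS XSD. -->
<ids xmlns="http://standards.buildingsmart.org/IDS">
  <info>
    <title>Wall fire-rating check</title>
  </info>
  <specifications>
    <specification name="Walls need FireRating" ifcVersion="IFC4">
      <applicability>
        <entity>
          <name><simpleValue>IFCWALL</simpleValue></name>
        </entity>
      </applicability>
      <requirements>
        <property>
          <propertySet><simpleValue>Pset_WallCommon</simpleValue></propertySet>
          <baseName><simpleValue>FireRating</simpleValue></baseName>
        </property>
      </requirements>
    </specification>
  </specifications>
</ids>
```

A checking tool reads the applicability block to select elements (here, all IFC walls) and then reports each one that fails the requirements block, which is the automated verification workflow the article describes.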
(This article belongs to the Section Construction Management, and Computers & Digitization)

15 pages, 370 KiB  
Article
Are Women More Risk Averse? A Sequel
by Christos I. Giannikos and Efstathia D. Korkou
Risks 2025, 13(1), 12; https://doi.org/10.3390/risks13010012 - 15 Jan 2025
Abstract
This paper reexamines the question of gender differences in financial relative risk aversion using updated methods and data. Specifically, the paper revisits the 1998 work “Are women more risk averse?” by Jianakoplos and Bernasek, suggests refinements in their model in relation to the database used, namely the U.S. Federal Reserve Board’s Survey of Consumer Finances (SCF), and performs new tests on the latest SCF from 2022. The suggested refinements pertain first to an enhanced computation of wealth, which includes additional categories of assets such as 401(k)s or other thrift savings accounts, and second to the more subtle handling and consideration of specific demographic data of the SCF respondents. Unlike the original study, which also included married couples, the new study focuses exclusively on single-headed (never-married) households. This eliminates ambiguity about the actual financial decision maker in households, enabling a clearer assessment of individual gendered behavior. Following the refinements, the new tests reveal a continuing pattern of decreasing relative risk aversion; however, contrary to the 1998 findings, there is no significant gender difference in financial relative risk aversion in 2022. This study also documents that education levels strongly influence risk-taking: single women with higher education levels are more likely to hold risky assets, while for men, higher education correlates with less risk-taking. The paper concludes by informing policymakers and financial educators so as to further tailor their strategies for promoting gender equality in financial decision-making.
11 pages, 450 KiB  
Article
Equation for Calculation of Critical Current Density Using the Bean’s Model with Self-Consistent Magnetic Units to Prevent Unit Conversion Errors
by Massimiliano Polichetti, Armando Galluzzi, Rohit Kumar and Amit Goyal
Materials 2025, 18(2), 269; https://doi.org/10.3390/ma18020269 - 9 Jan 2025
Cited by 4 | Viewed by 1317
Abstract
This study analyzes the calculation of the critical current density Jc,mag by means of Bean’s critical state model, using the equation formulated by Gyorgy et al. and other similar equations derived from it reported in the literature. While estimations of Jc,mag [...] Read more.
This study analyzes the calculation of the critical current density Jc,mag by means of Bean’s critical state model, using the equation formulated by Gyorgy et al. and other similar equations derived from it in the literature. While estimations of Jc,mag using Bean’s model are widely performed, improper use of different equations with different magnetic units and pre-factors leads to confusion and to significant errors in the reported values of Jc,mag. In this work, a single general equation is proposed for the calculation of Jc,mag for a rectangular parallelepiped sample in a perpendicular field using Bean’s critical state model, underlining how a simple conversion of magnetic units yields Jc,mag in the desired units, without the need to introduce any other correction or to use other equations specific to the units of Jc,mag. In this equation, the numerical pre-factor is dimensionless and independent of the unit system used. A comparison with the expressions reported in the literature shows how they can lead to different results depending on the units used, and that these results can differ by at least one order of magnitude from the correct values obtained with the general equation proposed in this work. This resolves all ambiguities, aligns with correct dimensional analysis, eliminates discrepancies in the calculated Jc,mag, and prevents further propagation of errors in the literature. Full article
(This article belongs to the Section Materials Physics)
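For orientation, the widely used CGS form of the Gyorgy et al. expression for a rectangular parallelepiped in a perpendicular field can be sketched as below. This is the familiar units-specific formula the article cautions about (Jc in A/cm², ΔM in emu/cm³, dimensions in cm), not the unit-independent equation the paper proposes:

```python
def jc_bean_cgs(delta_m_emu_cm3: float, a_cm: float, b_cm: float) -> float:
    """Bean's-model critical current density (A/cm^2), CGS form.

    delta_m_emu_cm3: width of the magnetization hysteresis loop (emu/cm^3)
    a_cm, b_cm: cross-sectional dimensions perpendicular to the field (cm)
    """
    if a_cm > b_cm:
        a_cm, b_cm = b_cm, a_cm  # the formula requires a <= b
    return 20.0 * delta_m_emu_cm3 / (a_cm * (1.0 - a_cm / (3.0 * b_cm)))

# Example: delta M = 100 emu/cm^3, 0.2 cm x 0.4 cm cross-section
print(jc_bean_cgs(100.0, 0.2, 0.4))  # ≈ 12000 A/cm^2
```

The pre-factor 20 here is only valid for these CGS units, which is exactly the kind of unit-bound constant the article's dimensionless formulation removes.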
15 pages, 4029 KiB  
Article
GPS Phase Integer Ambiguity Resolution Based on Eliminating Coordinate Parameters and Ant Colony Algorithm
by Ning Liu, Shuangcheng Zhang, Xiaoli Wu and Yu Shen
Sensors 2025, 25(2), 321; https://doi.org/10.3390/s25020321 - 8 Jan 2025
Viewed by 1026
Abstract
Correctly fixing the integer ambiguity of GNSS is the key to realizing the application of GNSS high-precision positioning. When solving the float solution of ambiguity based on the double-difference model epoch by epoch, the common method for resolving the integer ambiguity needs to [...] Read more.
Correctly fixing the integer ambiguity of GNSS is the key to realizing high-precision GNSS positioning. When the float ambiguity solution is computed epoch by epoch from the double-difference model, the common approach must also estimate the coordinate parameters because of the limited number of GNSS phase observations. This increases the ill-posedness of the double-difference equations, so the success rate of fixing the integer ambiguity is not high. Therefore, a new integer ambiguity resolution method based on eliminating the coordinate parameters and an ant colony algorithm is proposed in this paper. The method eliminates the coordinate parameters from the observation equations using a QR decomposition transformation and estimates only the ambiguity parameters with a Kalman filter. After the Kalman filter obtains the float ambiguity solution, decorrelation is performed by successive Cholesky decomposition, and the optimal integer ambiguity solution is searched for using the ant colony algorithm. Two sets of static and dynamic GPS experimental data are used to verify the method, which is compared with the conventional least-squares and LAMBDA methods. The results show that the new method has a good decorrelation effect and can correctly and effectively resolve the integer ambiguities. Full article
(This article belongs to the Special Issue Advances in GNSS Signal Processing and Navigation)
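The coordinate-elimination step described in the abstract can be illustrated with a standard orthogonal-projection trick: given double-difference observations y = A·b + B·a + e (b the coordinate parameters, a the ambiguities), a full QR decomposition of A yields a basis Q2 orthogonal to range(A), and Q2ᵀ removes b from the equation. A minimal sketch under those assumptions (the helper name is hypothetical; this is not the authors' implementation):

```python
import numpy as np

def eliminate_coords(y, A, B):
    """Project y = A*b + B*a + e onto the orthogonal complement of
    range(A), leaving a reduced equation in the ambiguities a only."""
    Q, _R = np.linalg.qr(A, mode="complete")  # full QR of the coord design
    n_b = A.shape[1]
    Q2 = Q[:, n_b:]                # columns orthogonal to range(A)
    return Q2.T @ y, Q2.T @ B      # reduced observations and design matrix

# Sanity check: observations that depend only on coordinates vanish
rng = np.random.default_rng(42)
A = rng.standard_normal((8, 3))    # coordinate design matrix
B = rng.standard_normal((8, 4))    # ambiguity design matrix
y_coord_only = A @ np.ones(3)
print(np.allclose(eliminate_coords(y_coord_only, A, B)[0], 0))  # True
```

The reduced system can then feed a Kalman filter that estimates only the ambiguity states, as the abstract describes.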