Search Results (1,369)

Search Parameters:
Keywords = vector space models

22 pages, 9179 KB  
Article
GA-HRNet: High-Precision Building Extraction for Individualization of Oblique Photogrammetry 3D Models
by Jiacui Zou, Yongchuan Zhang, Feng Li, Ruibing Wang, Jiajun Wu and Yang Qiao
Appl. Sci. 2026, 16(3), 1486; https://doi.org/10.3390/app16031486 - 2 Feb 2026
Abstract
Building individualization is a critical preprocessing step for refined applications of oblique photogrammetry 3D models, yet existing semantic segmentation methods encounter accuracy bottlenecks when applied to ultra-high-resolution orthophotos. To overcome this challenge, this study constructs an automated technical framework following a workflow from orthophoto generation to high-precision semantic segmentation, and finally to dynamic 3D rendering. The framework comprises three stages: (1) converting the 3D model into a 2D orthophoto to ensure that the extracted building contours can be precisely registered with the original 3D model in space; (2) utilizing the proposed Gated-ASPP High-Resolution Network (GA-HRNet) to extract building contours, enhancing segmentation accuracy by synergizing HRNet’s spatial detail preservation capability with ASPP’s multi-scale context awareness; (3) mapping the extracted 2D vector contours back to the 3D model and achieving interactive building individualization via dynamic rendering technology. Evaluated on a custom-built Hong Kong urban building dataset, GA-HRNet achieved an Intersection over Union (IoU) of 91.25%, an F1-Score of 95.41%, a Precision of 93.31%, and a Recall of 97.70%. Its performance surpassed that of various comparative models, including FCN, U-Net, MBR-HRNet, and others, with an IoU lead of 1.46 to 5.62 percentage points. This method enables precise building extraction and dynamic highlighting within 3D scenes, providing an efficient and reliable technical path for the refined application of large-scale urban oblique photogrammetry models. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
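The headline figures above (IoU, F1-Score, Precision, Recall) all derive from a pixel-wise confusion matrix over binary building masks. A minimal NumPy sketch, with invented masks standing in for real predictions and ground truth:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """IoU, Precision, Recall, and F1 for binary masks (True = building)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"IoU": iou, "Precision": precision, "Recall": recall, "F1": f1}

rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5   # stand-in network output
gt = rng.random((256, 256)) > 0.5     # stand-in ground truth
print(segmentation_metrics(pred, gt))
```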

27 pages, 9162 KB  
Article
Multi-Domain Incremental Learning for Semantic Segmentation via Visual Domain Prompt in Remote Sensing Data
by Junxi Li, Zhiyuan Yan, Wenhui Diao, Yidan Zhang, Zicong Zhu, Yichen Tian and Xian Sun
Remote Sens. 2026, 18(3), 464; https://doi.org/10.3390/rs18030464 - 1 Feb 2026
Abstract
Domain incremental learning for semantic segmentation has attracted considerable attention due to its importance in fields such as urban planning and autonomous driving. The catastrophic forgetting caused by domain shift has been alleviated by model structure expansion or data rehearsal. However, these methods ignore the contextual knowledge shared between the new and old data domains and assume that new and old knowledge are completely mutually exclusive, which causes the model to be trained in a suboptimal direction. Motivated by prompt learning, we propose a new domain incremental learning framework named RS-VDP. The key innovation of RS-VDP is to use a visual domain prompt to steer the optimization direction from both the input data space and the feature space. First, we design a domain prompt with a dynamic location module, which places the visual domain prompt according to a local entropy map to update the distribution of the input images. Second, to retain only high-confidence feature vectors, we propose a representation feature alignment module based on the entropy map. This module ensures the accuracy and stability of the feature vectors involved in the regularization loss, alleviating semantic drift. Finally, we introduce a new evaluation metric that measures the overall performance of incremental learning models, addressing the traditional metric's sensitivity to single-task accuracy. Comprehensive experiments demonstrate the effectiveness of the proposed method, which significantly reduces catastrophic forgetting. Full article
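The paper's dynamic location module is not reproduced here; as a rough illustration of one ingredient, a local entropy map over image patches, here is a minimal NumPy sketch (the patch size, bin count, and the "pick the highest-entropy patch" placement rule are all assumptions):

```python
import numpy as np

def local_entropy_map(gray, patch=16, bins=32):
    """Shannon entropy of intensity histograms over non-overlapping patches."""
    h, w = gray.shape
    out = np.zeros((h // patch, w // patch))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            block = gray[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            p, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
            p = p / p.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

img = np.random.default_rng(1).random((128, 128))   # stand-in image tile
ent = local_entropy_map(img)
i, j = np.unravel_index(np.argmax(ent), ent.shape)  # a candidate prompt location
```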

23 pages, 5359 KB  
Article
Surrogate-Based Reconstruction of Structural Damage in Train Collisions: A Systematic Optimization Framework
by Hui Zhao, Dehong Zhang and Ping Xu
Systems 2026, 14(2), 156; https://doi.org/10.3390/systems14020156 - 31 Jan 2026
Abstract
Accurate reconstruction of train collision accidents is essential for understanding impact conditions, assessing crashworthiness, and supporting safety improvements. This study proposes a surrogate-based optimization framework for reconstructing structural damage in train collisions from post-accident observations. Recovery of the pre-impact kinematic state, expressed as a six-dimensional vector of relative offsets, rotations, and impact velocity, is formulated as an inverse problem in which a Sum of Squared Relative Deviations (SSRD) between measured and simulated residual deformations serves as the objective function. A reduced two-vehicle finite element (FE) model is developed to capture the dominant impact dynamics, an Optimal Latin Hypercube Design is used to sample the parameter space, and a Kriging surrogate model is constructed to approximate the response. A simulated annealing algorithm is applied to search for the global minimum. The framework is demonstrated on a real high-speed rear-end collision of electric multiple units. The Kriging model achieves a coefficient of determination of about 0.85, and the optimized kinematic state yields FE-predicted residual deformations that agree with field measurements at key locations to within about 5%. The results show that the method can efficiently reconstruct physically plausible collision scenarios and provide insight into parameter sensitivity and identifiability for railway safety analysis. Full article
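The overall loop (Latin hypercube sampling, a Kriging surrogate, simulated annealing over an SSRD objective) can be sketched with stock SciPy/scikit-learn pieces. This is not the authors' FE-based pipeline: fe_simulation, the parameter bounds, and the measured deformations below are invented placeholders:

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import dual_annealing
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

measured = np.array([0.42, 0.31, 0.18])   # residual deformations at key locations (invented)

def fe_simulation(x):
    """Placeholder for the reduced two-vehicle FE model."""
    return measured * (1.0 + 0.1 * np.sin(x[:3])) + 0.01 * x[3:].sum()

def ssrd(x):
    """Sum of Squared Relative Deviations between simulated and measured values."""
    sim = fe_simulation(x)
    return float(np.sum(((sim - measured) / measured) ** 2))

# Six-dimensional state: offsets/rotations plus impact velocity (assumed ranges).
bounds = [(-0.5, 0.5)] * 5 + [(5.0, 25.0)]
lo, hi = [b[0] for b in bounds], [b[1] for b in bounds]
X = qmc.scale(qmc.LatinHypercube(d=6, seed=0).random(200), lo, hi)
y = np.array([ssrd(x) for x in X])

# Kriging surrogate, then simulated annealing on the cheap surrogate mean.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
res = dual_annealing(lambda x: float(gp.predict(x.reshape(1, -1))[0]), bounds, seed=0)
print("reconstructed state:", res.x, "surrogate SSRD:", res.fun)
```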
17 pages, 797 KB  
Article
Continued Electromagnetic Signal Classification Based on Vector Space Separation
by Lu Jia, Yan Zhao, Shichuan Chen and Zhijin Zhao
Electronics 2026, 15(3), 613; https://doi.org/10.3390/electronics15030613 - 30 Jan 2026
Abstract
Incremental electromagnetic signal classification is crucial in realistic wireless environments where new signal types continuously emerge and historical training data are often unavailable. This paper proposes a model-based incremental learning method driven by vector space separation to mitigate catastrophic forgetting without accessing old-task samples or requiring semantic information. We show that forgetting is largely caused by insufficient separation between old and new classes in the classifier weight space. To address this issue, we jointly introduce weight normalization, a cosine-similarity separation loss, and regularization, together with cross-entropy supervision for new classes. Together, these designs enable the model to continually recognize modulation signals without revisiting raw data from previous tasks during incremental updates. Experiments on two simulated modulation datasets under multiple task sequences demonstrate that the proposed method consistently alleviates catastrophic forgetting and achieves stable incremental performance, outperforming baselines while avoiding data rehearsal. Full article
(This article belongs to the Section Circuit and Signal Processing)
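The central idea, penalizing cosine similarity between normalized old- and new-class classifier weights, admits a compact sketch. The squared-similarity form below is one plausible choice, not necessarily the paper's exact loss:

```python
import numpy as np

def cosine_separation_loss(W_old, W_new):
    """Mean squared cosine similarity between old- and new-class weight
    vectors; minimizing it drives the classes toward orthogonality."""
    W_old = W_old / np.linalg.norm(W_old, axis=1, keepdims=True)  # weight normalization
    W_new = W_new / np.linalg.norm(W_new, axis=1, keepdims=True)
    return float(np.mean((W_old @ W_new.T) ** 2))

rng = np.random.default_rng(0)
W_old = rng.normal(size=(8, 64))   # 8 previously learned signal classes, 64-dim features
W_new = rng.normal(size=(3, 64))   # 3 newly arriving classes
print(cosine_separation_loss(W_old, W_new))
```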

24 pages, 861 KB  
Article
Distinguishability-Driven Voice Generation for Speaker Anonymization via Random Projection and GMM
by Chunxia Wang, Qiuyu Zhang, Yingjie Hu and Huiyi Wei
Big Data Cogn. Comput. 2026, 10(2), 43; https://doi.org/10.3390/bdcc10020043 - 29 Jan 2026
Abstract
Speaker anonymization effectively conceals speaker identity in speech signals to protect privacy. To address issues in existing anonymization systems, including reduced voice distinguishability, limited anonymized voices, reliance on an external speaker pool, and vulnerability to privacy leakage against strong attackers, a novel distinguishability-driven voice generation method for speaker anonymization via random projection and the Gaussian Mixture Model (GMM) is proposed. The method first applies random projection to reduce the dimensionality of the X-vectors from an external speaker pool, and then constructs a GMM in the reduced-dimensional space as a generative model. By sampling from this generative model, anonymous speaker identity representations are generated, ultimately synthesizing anonymized speech that maintains both intelligibility and distinguishability. To ensure the anonymized speech remains sufficiently distinguishable from the original and to prevent excessive similarity, a cosine similarity check is applied between the original X-vector and the pseudo-X-vector. Experimental results on the VoicePrivacy Challenge datasets demonstrate that the proposed method not only effectively protects speaker privacy across different attack scenarios but also preserves speech content integrity while significantly enhancing speaker distinguishability between original speakers and their corresponding pseudo-speakers, as well as among different pseudo-speakers. Full article
(This article belongs to the Topic Generative AI and Interdisciplinary Applications)
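The generation pipeline (random projection, GMM fitting, sampling, cosine-similarity check) maps naturally onto scikit-learn. A hedged sketch with invented dimensions and thresholds; the real system operates on X-vectors from a speaker-verification model, not random data:

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
pool = rng.normal(size=(500, 512))   # stand-in external pool of 512-dim X-vectors

proj = GaussianRandomProjection(n_components=64, random_state=0)
low = proj.fit_transform(pool)       # random projection to a lower dimension

gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(low)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pseudo_xvector(original_low, max_sim=0.3):
    """Sample candidates until one is dissimilar enough from the original."""
    while True:
        cand = gmm.sample(1)[0][0]
        if cosine(cand, original_low) < max_sim:
            return cand

anon = pseudo_xvector(low[0])        # anonymous identity representation
```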

27 pages, 1408 KB  
Article
A Fuzzy Granular K-Means Clustering Method Driven by Gaussian Membership Functions
by Junjie Huang, Biyun Lan, Haibo Huang, Tiancai Huang and Yumin Chen
Mathematics 2026, 14(3), 462; https://doi.org/10.3390/math14030462 - 28 Jan 2026
Abstract
The K-means clustering algorithm is widely applied in various clustering tasks due to its high computational efficiency and simple implementation. However, its performance deteriorates significantly when dealing with non-convex structures, fuzzy boundaries, or noisy data, as it relies on the assumption that clusters are spherical or linearly separable. To address these limitations, this paper proposes a Gaussian membership-driven fuzzy granular K-means clustering method. In this approach, multiple Gaussian membership functions are used for fuzzy granulation at the single-feature level to generate fuzzy granules, while fuzzy granule vectors are constructed in the multi-feature space. A novel distance metric for fuzzy granules is defined along with operational rules, for which axiomatic proofs are provided. This Gaussian-based granulation enables effective modeling of nonlinear separability in complex data structures, leading to the development of a new fuzzy granular K-means clustering framework. Experimental results on multiple public UCI datasets demonstrate that the proposed method significantly outperforms traditional K-means and other baseline methods in clustering tasks involving complex geometric data (e.g., circular and spiral structures), showing improved robustness and adaptability. This offers an effective solution for clustering data with intricate distributions. Full article
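A minimal sketch of the granulation step, Gaussian memberships per feature concatenated into granule vectors, under assumed choices (quantile-placed centers, one shared sigma per feature) that the paper may make differently:

```python
import numpy as np

def gaussian_membership(x, centers, sigma):
    """Per-feature fuzzy granulation: membership of each value in each granule."""
    # x: (n_samples,), centers: (n_granules,) -> (n_samples, n_granules)
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))

def granulate(X, n_granules=3):
    """Build fuzzy granule vectors feature by feature and concatenate them."""
    parts = []
    for j in range(X.shape[1]):
        col = X[:, j]
        centers = np.quantile(col, np.linspace(0.1, 0.9, n_granules))
        sigma = max(col.std() / n_granules, 1e-9)
        parts.append(gaussian_membership(col, centers, sigma))
    return np.hstack(parts)   # (n_samples, n_features * n_granules)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
G = granulate(X)   # any distance on G can then drive a standard K-means loop
```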
22 pages, 1274 KB  
Article
A Predictive Approach for the Early Reliability Assessment in Embedded Systems Using Code and Trace Embeddings via Machine Learning
by Felipe Restrepo-Calle, Enrique Abma Romero and Sergio Cuenca-Asensi
Electronics 2026, 15(3), 543; https://doi.org/10.3390/electronics15030543 - 27 Jan 2026
Abstract
Radiation-induced transient faults pose a growing challenge for safety-critical embedded systems, yet traditional radiation testing and large-scale statistical fault injection (SFI) remain costly and impractical during early design stages. This paper presents a predictive approach for early reliability assessment that replaces handcrafted feature engineering with automatically learned vector representations of source code and execution traces. We derive multiple embeddings for traces and source code, and use them as inputs to a family of regression models, including ensemble methods and linear baselines, to build predictive models for reliability. Experimental evaluation shows that embedding-based models outperform prior approaches, reducing the mean absolute percentage error (MAPE) from 6.24% to 2.14% for correct executions (unACE), from 20.95% to 10.40% for Hangs, and from 49.09% to 37.69% for silent data corruptions (SDC) after excluding benchmarks with SDC below 1%. These results show that source code and trace embeddings can serve as effective estimators for expensive fault injection campaigns, enabling early-stage reliability assessment in radiation-exposed embedded systems without requiring any manual feature engineering. This capability provides a practical foundation for supporting design-space exploration during early development phases. Full article
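The reported MAPE figures are easy to pin down. Below is a hedged sketch of the evaluation, with synthetic stand-ins for the learned code/trace embeddings and one off-the-shelf ensemble regressor (the paper evaluates a whole family of models, not this exact one):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def mape(y_true, y_pred):
    """Mean absolute percentage error, the metric reported above."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 128))   # stand-in concatenated code + trace embeddings
y = 60 + 20 * np.tanh(X[:, :4].sum(axis=1)) + rng.normal(0, 1, 300)  # e.g. unACE rate in %

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"MAPE: {mape(y_te, model.predict(X_te)):.2f}%")
```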

10 pages, 258 KB  
Article
Quantum-like Cognition and Decision-Making: Interpretation of Phases in Quantum-like Superposition
by Andrei Khrennikov
Entropy 2026, 28(2), 134; https://doi.org/10.3390/e28020134 - 23 Jan 2026
Abstract
This paper addresses a central conceptual challenge in Quantum-like Cognition and Decision-Making (QCDM) and the broader research program of Quantum-like Modeling (QLM): the interpretation of phases in quantum-like state superpositions. In QLM, system states are represented by normalized vectors in a complex Hilbert space, $|\psi\rangle = \sum_k X_k |k\rangle$, where the squared amplitudes $P_k = |X_k|^2$ are outcome probabilities. However, the meaning of the phase factors $e^{i\phi_k}$ in the coefficients $X_k = \sqrt{P_k}\,e^{i\phi_k}$ has remained elusive, and they are often treated as purely phenomenological parameters. This practice, while successful in describing cognitive interference effects (the “interference of the mind”), has drawn criticism for expanding the model’s parameter space without a clear physical or cognitive underpinning. Building on a recent framework that connects QCDM to neuronal network activity, we propose a concrete interpretation. We argue that the phases in quantum-like superpositions correspond directly to the phases of random oscillations generated by neuronal circuits in the brain. This interpretation not only provides a natural, non-phenomenological basis for phase parameters within QCDM but also helps to bridge the gap between quantum-like models and classical neurocognitive frameworks, offering a consistent physical analogy for the descriptive power of QLM. Full article
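For context, the reason the phases $\phi_k$ carry observable content in QLM is the standard interference-of-probabilities formula, in which the relative phase corrects the classical law of total probability:

```latex
% Interference of probabilities: the quantum-like correction to the
% classical law of total probability, with theta = phi_1 - phi_2 the
% relative phase between the two superposed states.
P(A) = P_1\,P(A\mid 1) + P_2\,P(A\mid 2)
     + 2\sqrt{P_1\,P(A\mid 1)\,P_2\,P(A\mid 2)}\,\cos\theta
```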
20 pages, 13461 KB  
Article
Multi-View 3D Reconstruction of Ship Hull via Multi-Scale Weighted Neural Radiation Field
by Han Chen, Xuanhe Chu, Ming Li, Yancheng Liu, Jingchun Zhou, Xianping Fu, Siyuan Liu and Fei Yu
J. Mar. Sci. Eng. 2026, 14(2), 229; https://doi.org/10.3390/jmse14020229 - 21 Jan 2026
Abstract
The 3D reconstruction of vessel hulls is crucial for enhancing safety, efficiency, and knowledge in the maritime industry. Neural Radiance Fields (NeRFs) offer an alternative approach to 3D reconstruction and rendering from multi-view images; in particular, tensor-based methods have proven effective in improving efficiency. However, existing tensor-based methods typically suffer from a lack of spatial coherence, resulting in gaps in the reconstruction of fine-grained geometric structures. This paper proposes a spatial multi-scale weighted NeRF (MDW-NeRF) for accurate and efficient surface reconstruction of vessel hulls. The proposed method develops a novel multi-scale feature decomposition mechanism that models 3D space by leveraging multi-resolution features, facilitating the integration of high-resolution details with low-resolution regional information. We design separate weighting schemes for density and color, a coarse-to-fine strategy for the former and a weighting matrix for the latter, to decouple feature vectors from appearance attributes. To boost the efficiency of 3D reconstruction and rendering, we implement a hybrid sampling-point strategy for volume rendering, selecting sample points based on volumetric density. Extensive experiments on the SVH dataset confirm MDW-NeRF’s superiority: quantitatively, it outperforms TensoRF by 1.5 dB in PSNR and 6.1% in CD, and shrinks the model size by 9%, with comparable training times; qualitatively, it resolves tensor-based methods’ inherent spatial incoherence and fine-grained gaps, enabling accurate restoration of hull cavities and realistic surface texture rendering. These results validate our method’s effectiveness in achieving excellent rendering quality, high reconstruction accuracy, and timeliness. Full article
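The density-guided sampling idea can be illustrated with the standard NeRF compositing weights; the resampling rule below is one plausible reading of the hybrid strategy, not the paper's exact scheme:

```python
import numpy as np

def compositing_weights(sigma, delta):
    """Standard NeRF volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i))."""
    alpha = 1.0 - np.exp(-sigma * delta)
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    return T * alpha

rng = np.random.default_rng(0)
sigma = rng.gamma(2.0, 1.0, 64)   # densities at 64 coarse samples along a ray
delta = np.full(64, 1.0 / 64)     # spacing between samples
w = compositing_weights(sigma, delta)

# Density-guided resampling: place extra fine samples where weights are high.
pdf = w / w.sum()
fine_idx = rng.choice(64, size=128, p=pdf)
```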

27 pages, 1302 KB  
Review
The RTF-Compass: Navigating the Trade-Off Between Thermogenic Potential and Ferroptotic Stress in Adipocytes
by Minghao Fu, Manish Kumar Singh, Jyotsna Suresh Ranbhise, Kyung-Sik Yoon, Sung Soo Kim, Joohun Ha, Insug Kang, Suk Chon and Wonchae Choe
Cells 2026, 15(2), 170; https://doi.org/10.3390/cells15020170 - 16 Jan 2026
Abstract
Adipose tissue thermogenesis is a promising strategy to counter obesity and metabolic disease, but sustained activation of thermogenic adipocytes elevates oxidative and lipid-peroxidation stress, increasing susceptibility to ferroptotic cell death. Existing models often treat redox buffering, hypoxia signaling and ferroptosis as separate processes, which cannot explain why similar interventions—such as antioxidants, β-adrenergic agonists or iron modulators—alternately enhance thermogenesis or precipitate tissue failure. Here, we propose the Redox–Thermogenesis–Ferroptosis Compass (RTF-Compass) as a framework that maps adipose depots within a space defined by ferroptosis resistance capacity (FRC), ferroptosis signaling intensity (FSI) and HIF-1α-dependent hypoxic tone. Within this space, thermogenic output follows a hormetic, inverted-U trajectory, with a Thermogenic Ferroptosis Window (TFW) bounded by two failure states: a Reductive-Blunted state with excessive antioxidant buffering and weak signaling, and a Cytotoxic state with high ferroptotic pressure and inadequate defense. We use this model to reinterpret genetic, nutritional and pharmacological studies as state-dependent vectors that move depots through FRC–FSI–HIF space and to outline principles for precision redox medicine. Although the TFW is represented as coordinates in FRC–FSI–HIF space, we use ‘Compass’ to denote a coordinate framework in which perturbations act as vectors that orient depots toward thermogenic or cytotoxic outcomes. Finally, we highlight priorities for testing the model in vivo, including defining lipid species that encode ferroptotic tone, resolving spatial heterogeneity within depots and determining how metabolic memory constrains reversibility of pathological states. Full article

18 pages, 1144 KB  
Article
Hypersector-Based Method for Real-Time Classification of Wind Turbine Blade Defects
by Lesia Dubchak, Bohdan Rusyn, Carsten Wolff, Tomasz Ciszewski, Anatoliy Sachenko and Yevgeniy Bodyanskiy
Energies 2026, 19(2), 442; https://doi.org/10.3390/en19020442 - 16 Jan 2026
Abstract
This paper presents a novel hypersector-based method with Fuzzy Learning Vector Quantization (FLVQ) for the real-time classification of wind turbine blade defects using data acquired by unmanned aerial vehicles (UAVs). Unlike conventional prototype-based FLVQ approaches that rely on Euclidean distance in the feature space, the proposed method models each defect class as a hypersector on an n-dimensional hypersphere, where class boundaries are defined by angular similarity and fuzzy membership transitions. This geometric reinterpretation of FLVQ constitutes the core innovation of the study, enabling improved class separability, robustness to noise, and enhanced interpretability under uncertain operating conditions. Feature vectors extracted via the pre-trained SqueezeNet convolutional network are normalized onto the hypersphere, forming compact directional clusters that serve as the geometric foundation of the FLVQ classifier. A fuzzy softmax membership function and an adaptive prototype-updating mechanism are introduced to handle class overlap and improve learning stability. Experimental validation on a custom dataset of 900 UAV-acquired images achieved 95% classification accuracy on test data and 98.3% on an independent dataset, with an average F1-score of 0.91. Comparative analysis with the classical FLVQ prototype demonstrated superior performance and noise robustness. Owing to its low computational complexity and transparent geometric decision structure, the developed model is well-suited for real-time deployment on UAV embedded systems. Furthermore, the proposed hypersector FLVQ framework is generic and can be extended to other renewable-energy diagnostic tasks, including solar and hydropower asset monitoring, contributing to enhanced energy security and sustainability. Full article
(This article belongs to the Special Issue Modeling, Control and Optimization of Wind Power Systems)
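The geometric core, unit-normalized features classified by angular similarity to class prototypes with a fuzzy softmax membership, fits in a few lines. A sketch with invented dimensions and an assumed temperature tau:

```python
import numpy as np

def fuzzy_angular_membership(x, prototypes, tau=10.0):
    """Fuzzy softmax membership from cosine (angular) similarity to class
    prototypes on the unit hypersphere."""
    x = x / np.linalg.norm(x)
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = P @ x                  # cosine similarity to each hypersector axis
    e = np.exp(tau * sim)
    return e / e.sum()           # fuzzy membership per defect class

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(4, 1000))  # 4 defect classes in a CNN feature space
x = rng.normal(size=1000)                # feature vector of one blade image
m = fuzzy_angular_membership(x, prototypes)
print("predicted class:", int(np.argmax(m)), "memberships:", np.round(m, 3))
```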

32 pages, 999 KB  
Article
A Robust Hybrid Metaheuristic Framework for Training Support Vector Machines
by Khalid Nejjar, Khalid Jebari and Siham Rekiek
Algorithms 2026, 19(1), 70; https://doi.org/10.3390/a19010070 - 13 Jan 2026
Abstract
Support Vector Machines (SVMs) are widely used in critical decision-making applications, such as precision agriculture, due to their strong theoretical foundations and their ability to construct an optimal separating hyperplane in high-dimensional spaces. However, the effectiveness of SVMs is highly dependent on the efficiency of the optimization algorithm used to solve their underlying dual problem, which is often complex and constrained. Classical solvers, such as Sequential Minimal Optimization (SMO) and Stochastic Gradient Descent (SGD), present inherent limitations: SMO ensures numerical stability but lacks scalability and is sensitive to heuristics, while SGD scales well but suffers from unstable convergence and limited suitability for nonlinear kernels. To address these challenges, this study proposes a novel hybrid optimization framework based on Open Competency Optimization and Particle Swarm Optimization (OCO–PSO) to enhance the training of SVMs. The proposed approach combines the global exploration capability of PSO with the adaptive competency-based learning mechanism of OCO, enabling efficient exploration of the solution space, avoidance of local minima, and strict enforcement of dual constraints on the Lagrange multipliers. Across multiple datasets spanning medical (diabetes), agricultural yield, signal processing (sonar and ionosphere), and imbalanced synthetic data, the proposed OCO–PSO–SVM consistently outperforms classical SVM solvers (SMO and SGD) as well as widely used classifiers, including decision trees and random forests, in terms of accuracy, macro-F1-score, Matthews correlation coefficient (MCC), and ROC-AUC. On the Ionosphere dataset, OCO–PSO achieves an accuracy of 95.71%, an F1-score of 0.954, and an MCC of 0.908, matching the accuracy of random forest while offering superior interpretability through its kernel-based structure. In addition, the proposed method yields a sparser model with only 66 support vectors compared to 71 for standard SVC (a reduction of approximately 7%), while strictly satisfying the dual constraints with a near-zero violation of 1.3×10⁻³. Notably, the optimal hyperparameters identified by OCO–PSO (C = 2, γ ≈ 0.062) differ substantially from those obtained via Bayesian optimization for SVC (C = 10, γ ≈ 0.012), indicating that the proposed approach explores alternative yet equally effective regions of the hypothesis space. The statistical significance and robustness of these improvements are confirmed through extensive validation using 1000 bootstrap replications, paired Student’s t-tests, Wilcoxon signed-rank tests, and Holm–Bonferroni correction. These results demonstrate that the proposed metaheuristic hybrid optimization framework constitutes a reliable, interpretable, and scalable alternative for training SVMs in complex and high-dimensional classification tasks. Full article
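The dual constraints the metaheuristic must respect, and the "near-zero violation" quoted above, are concrete quantities. A sketch of evaluating one candidate multiplier vector (the data, RBF kernel, and candidate are invented; a metaheuristic would evolve such candidates):

```python
import numpy as np

def dual_objective(alpha, y, K):
    """SVM dual: maximize sum(alpha) - 0.5 * sum_ij alpha_i alpha_j y_i y_j K_ij."""
    return alpha.sum() - 0.5 * (alpha * y) @ K @ (alpha * y)

def constraint_violation(alpha, y, C):
    """Distance from feasibility: box 0 <= alpha <= C plus |sum(alpha * y)|,
    the latter being the near-zero figure reported above."""
    box = np.maximum(0.0, -alpha).sum() + np.maximum(0.0, alpha - C).sum()
    return box + abs(float(alpha @ y))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(rng.normal(size=40))
gamma = 0.062
K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF kernel
alpha = rng.uniform(0, 2.0, 40)   # one candidate solution
print(dual_objective(alpha, y, K), constraint_violation(alpha, y, C=2.0))
```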

20 pages, 597 KB  
Article
Fast 3D-HEVC Depth Map Coding Method Based on Spatio-Temporal Correlation and a Two-Stage Mode Decision Framework
by Erlin Tian, Jiabao Zhang and Qiuwen Zhang
Sensors 2026, 26(2), 529; https://doi.org/10.3390/s26020529 - 13 Jan 2026
Abstract
Efficient intra-mode decision for depth maps plays a pivotal role in the overall performance of 3D-HEVC. Existing research predominantly relies on fast mode-screening strategies grounded in texture characteristics or machine learning techniques. These strategies mitigate the complexity of the mode search to a certain extent. Nevertheless, they often fall short of fully leveraging the intrinsic spatio-temporal correlations within depth maps. Moreover, strategies relying on deterministic classifiers exhibit insufficient discrimination reliability in regions featuring edge mutations or intricate structures. To tackle these challenges, this paper presents a two-stage fast intra-mode decision algorithm for depth maps, integrating naive Bayes probability estimation and a fuzzy support vector machine (FSVM). Initially, it confines the candidate mode space through spatio-temporal prior modeling. Subsequently, the FSVM is employed to enhance decision accuracy in regions with low confidence. This methodology constructs a joint mode decision framework spanning from probability screening to refined classification, significantly reducing the computational burden while preserving rate-distortion performance, thereby attaining an effective equilibrium between encoding complexity and performance. Experimental findings demonstrate that the proposed algorithm reduces the average encoding time by 52.30% with merely a 0.68% increment in BDBR. Additionally, it shows stable generality across test sequences of diverse resolutions and scenes. Full article
(This article belongs to the Section Intelligent Sensors)
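The two-stage pattern, a cheap probabilistic screen that defers low-confidence cases to a stronger classifier, can be sketched generically; GaussianNB and an ordinary SVC stand in for the paper's spatio-temporal naive Bayes model and FSVM, and the features, labels, and threshold are invented:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                   # stand-in spatio-temporal features of a CU
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in intra-mode label

nb = GaussianNB().fit(X, y)
svm = SVC(probability=True, random_state=0).fit(X, y)

def two_stage_decision(x, confidence=0.9):
    """Stage 1: fast probabilistic screening; stage 2: refine only
    low-confidence regions with the stronger classifier."""
    p = nb.predict_proba(x.reshape(1, -1))[0]
    if p.max() >= confidence:
        return int(np.argmax(p))   # confident: accept the fast decision
    return int(svm.predict(x.reshape(1, -1))[0])

print(two_stage_decision(X[0]))
```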

16 pages, 7621 KB  
Article
Weighted Sampling Enclosing Subgraphs-Based Link Prediction in Attributed Graphs
by Ganglin Hu
Information 2026, 17(1), 66; https://doi.org/10.3390/info17010066 - 11 Jan 2026
Abstract
Link prediction is a fundamental problem on graphs that can reveal potential relationships between users. Graph embedding can readily encode graph structural relations and heterogeneous attribute features in a continuous vector space, which is effective for link prediction. However, graph embedding methods for large-scale graphs suffer from high computation and space costs, and sampling enclosing subgraphs is a practical and efficient way to obtain the most features at the least cost. Nevertheless, existing sampling techniques may lose essential features when the number of randomly sampled nodes is small, because node features are assumed to follow a uniform distribution. In this paper, we propose a novel large-scale graph sampling strategy for link prediction, named Weighted Sampling Enclosing subgraphs-based Link prediction (WSEL), to resolve this issue; it maximally preserves the structural and attribute features of enclosing subgraphs with less sampling. More specifically, we first extract the feature importance of each node in an enclosing subgraph and take this importance as the node's weight. Random walk node sequences are then obtained by multiple weighted random walks from a target pair of nodes, producing a weighted sample of the enclosing subgraph. By leveraging weighted sampling of enclosing subgraphs, WSEL can scale to larger graphs with much less overhead while retaining the essential information of the original graph. Experiments on real-world datasets demonstrate that our model can scale to larger graphs while maintaining competitive link prediction performance at substantially reduced computational cost. Full article
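The weighted-walk sampler at the heart of WSEL can be sketched in plain Python; the adjacency list, importance weights, and walk parameters below are all invented:

```python
import numpy as np

def weighted_walks(adj, weight, start, n_walks=10, length=8, seed=0):
    """Random walks from `start` where the next hop is drawn in proportion
    to neighbor importance weights rather than uniformly."""
    rng = np.random.default_rng(seed)
    walks = []
    for _ in range(n_walks):
        walk, node = [start], start
        for _ in range(length - 1):
            nbrs = adj[node]
            if not nbrs:
                break
            p = np.array([weight[n] for n in nbrs], dtype=float)
            node = nbrs[int(rng.choice(len(nbrs), p=p / p.sum()))]
            walk.append(node)
        walks.append(walk)
    return walks   # visited nodes form the weighted enclosing-subgraph sample

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
weight = {0: 1.0, 1: 3.0, 2: 0.5, 3: 2.0}   # e.g. per-node feature-importance scores
print(weighted_walks(adj, weight, start=0))
```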

20 pages, 36648 KB  
Article
Global Lunar FeO Mapping via Wavelet–Autoencoder Feature Learning from M3 Hyperspectral Data
by Julia Fernández–Díaz, Fernando Sánchez Lasheras, Javier Gracia Rodríguez, Santiago Iglesias Álvarez, Antonio Luis Marqués Sierra and Francisco Javier de Cos Juez
Mathematics 2026, 14(2), 254; https://doi.org/10.3390/math14020254 - 9 Jan 2026
Abstract
Accurate global mapping of lunar iron oxide (FeO) abundance is essential for understanding the Moon’s geological evolution and for supporting future in situ resource utilization (ISRU). While hyperspectral data from the Moon Mineralogy Mapper (M3) provide a unique combination of high spectral dimensionality, hectometre-scale spatial resolution, and near-global coverage, existing FeO retrieval approaches struggle to fully exploit the high dimensionality, nonlinear spectral variability, and planetary-scale volume of the Global Mode dataset. To address these limitations, we present an integrated machine learning pipeline for estimating lunar FeO abundance from M3 hyperspectral observations. Unlike traditional methods based on raw reflectance or empirical spectral indices, the proposed framework combines Discrete Wavelet Transform (DWT), deep autoencoder-based feature compression, and ensemble regression to achieve robust and scalable FeO prediction. M3 spectra (83 bands, 475–3000 nm) are transformed using a Daubechies-4 (db4) DWT to extract 42 representative coefficients per pixel, capturing the dominant spectral information while filtering high-frequency noise. These features are further compressed into a six-dimensional latent space via a deep autoencoder and used as input to a Random Forest regressor, which outperforms kernel-based and linear Support Vector Regression (SVR) as well as Lasso regression in predictive accuracy and stability. The proposed model achieves an average prediction error of 1.204 wt.% FeO and demonstrates consistent performance across diverse lunar geological units. Applied to 806 orbital tracks (approximately 3.5×10⁹ pixels), covering more than 95% of the lunar surface, the pipeline produces a global FeO abundance map at 150 m per pixel resolution. These results demonstrate the potential of integrating multiscale wavelet representations with nonlinear feature learning to enable large-scale, geochemically constrained planetary mineral mapping. Full article
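A hedged sketch of the wavelet-plus-forest backbone (db4 decomposition, then Random Forest regression), omitting the paper's autoencoder compression stage and using synthetic spectra and FeO values in place of M3 data:

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
spectra = rng.random((1000, 83))                            # stand-in 83-band reflectance
feo = 5 + 15 * spectra[:, 40] + rng.normal(0, 0.5, 1000)    # synthetic FeO wt.%

def dwt_features(spectrum, wavelet="db4", level=2):
    """Concatenate db4 wavelet coefficients of one spectrum into a feature vector."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return np.concatenate(coeffs)

X = np.array([dwt_features(s) for s in spectra])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:800], feo[:800])
err = np.abs(rf.predict(X[800:]) - feo[800:]).mean()
print(f"mean absolute error: {err:.3f} wt.% FeO")
```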