Search Results (13)

Search Parameters:
Keywords = non-Euclidean norms

22 pages, 670 KiB  
Article
LDC-GAT: A Lyapunov-Stable Graph Attention Network with Dynamic Filtering and Constraint-Aware Optimization
by Liping Chen, Hongji Zhu and Shuguang Han
Axioms 2025, 14(7), 504; https://doi.org/10.3390/axioms14070504 - 27 Jun 2025
Viewed by 229
Abstract
Graph attention networks are pivotal for modeling non-Euclidean data, yet they face dual challenges: training oscillations induced by projection-based high-dimensional constraints and gradient anomalies due to poor adaptation to heterophilic structures. To address these issues, we propose LDC-GAT (Lyapunov-Stable Graph Attention Network with Dynamic Filtering and Constraint-Aware Optimization), which jointly optimizes the forward and backward propagation processes. In the forward path, we introduce Dynamic Residual Graph Filtering, which integrates a tunable self-loop coefficient to balance neighborhood aggregation and self-feature retention. This filtering mechanism, constrained by a lower bound on Dirichlet energy, improves multi-head attention via multi-scale fusion and mitigates overfitting. In the backward path, we design Fro-FWNAdam, a gradient descent algorithm guided by a learning-rate-aware perceptron. An explicit Frobenius-norm bound on the weights, derived from Lyapunov theory, forms the basis of the perceptron. This stability-aware optimizer is embedded within a Frank–Wolfe framework with Nesterov acceleration, yielding a projection-free constrained optimization strategy that stabilizes training dynamics. Experiments on six benchmark datasets show that LDC-GAT outperforms GAT by 10.54% in classification accuracy, demonstrating strong robustness on heterophilic graphs.
(This article belongs to the Section Mathematical Analysis)
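The projection-free constrained optimization described above can be illustrated with a bare Frank–Wolfe step over a Frobenius-norm ball — a minimal sketch of the general technique, not the authors' Fro-FWNAdam (the toy objective and radius are assumptions for illustration):

```python
import numpy as np

def frank_wolfe_step(W, grad, radius, t):
    """One projection-free Frank-Wolfe step over {W : ||W||_F <= radius}.

    Over a Frobenius-norm ball, the linear minimization oracle is simply
    the negative gradient scaled to the ball's boundary.
    """
    g_norm = np.linalg.norm(grad)       # Frobenius norm of the gradient
    if g_norm == 0:
        return W
    S = -radius * grad / g_norm         # vertex minimizing <grad, S>
    gamma = 2.0 / (t + 2.0)             # standard step-size schedule
    return (1 - gamma) * W + gamma * S  # convex combination stays in the ball

# Toy problem: minimize ||W - T||_F^2 subject to ||W||_F <= 1,
# where T lies outside the ball; optimum is T scaled to the boundary.
T = np.array([[2.0, 0.0], [0.0, 1.0]])
W = np.zeros_like(T)
for t in range(200):
    W = frank_wolfe_step(W, 2 * (W - T), radius=1.0, t=t)

# Iterates never leave the constraint set -- no projection needed.
assert np.linalg.norm(W) <= 1.0 + 1e-9
```

Because every iterate is a convex combination of feasible points, the constraint is maintained by construction, which is the sense in which Frank–Wolfe avoids the projections that can cause training oscillations.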
21 pages, 1681 KiB  
Article
Scalable Clustering of Complex ECG Health Data: Big Data Clustering Analysis with UMAP and HDBSCAN
by Vladislav Kaverinskiy, Illya Chaikovsky, Anton Mnevets, Tatiana Ryzhenko, Mykhailo Bocharov and Kyrylo Malakhov
Computation 2025, 13(6), 144; https://doi.org/10.3390/computation13060144 - 10 Jun 2025
Cited by 1 | Viewed by 772
Abstract
This study explores the potential of unsupervised machine learning algorithms to identify latent cardiac risk profiles by analyzing ECG-derived parameters from two groups: clinically healthy individuals (Norm dataset, n = 14,863) and patients hospitalized with heart failure (patients’ dataset, n = 8220). Each dataset includes 153 ECG and heart rate variability (HRV) features, comprising both conventional and novel diagnostic parameters obtained using a Universal Scoring System. The study aims to apply unsupervised clustering algorithms to ECG data to detect latent risk profiles related to heart failure, based on distinctive ECG features. The focus is on identifying patterns that correlate with cardiac health risks, potentially aiding in early detection and personalized care. We applied a combination of Uniform Manifold Approximation and Projection (UMAP) for dimensionality reduction and Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) for unsupervised clustering. Models trained on one dataset were applied to the other to explore structural differences and detect latent predispositions to cardiac disorders. Both Euclidean and Manhattan distance metrics were evaluated. Features such as the QRS angle in the frontal plane, Detrended Fluctuation Analysis (DFA), High-Frequency power (HF), and others were analyzed for their ability to distinguish patient clusters. In the Norm dataset, Euclidean-distance clustering identified two main clusters, with Cluster 0 indicating a lower risk of heart failure; key discriminative features included the “ALPHA QRS ANGLE IN THE FRONTAL PLANE” and DFA. In the patients’ dataset, three clusters emerged, with Cluster 1 identified as potentially high-risk; this subgroup displayed markedly elevated risk indicators such as high HF power and altered QRS angle values. Manhattan-distance clustering provided additional insights, highlighting features like “ST DISLOCATION” and “T AMP NORMALIZED” as significant for distinguishing between clusters. Cross-dataset clustering confirmed consistent feature shifts between groups. These findings demonstrate the feasibility of ECG-based unsupervised clustering for early risk stratification and offer a non-invasive tool for personalized cardiac monitoring that merits further clinical validation. Future research should validate these results in other populations and integrate these methods into clinical decision-making frameworks.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
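Since the clustering was evaluated under both Euclidean and Manhattan metrics, a minimal numpy sketch of the two pairwise-distance computations (on a synthetic feature matrix, not the ECG data) clarifies the difference:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))  # 6 synthetic samples, 4 features

# Pairwise Euclidean (L2) distances: sqrt of summed squared differences
diff = X[:, None, :] - X[None, :, :]
d_euclid = np.sqrt((diff ** 2).sum(axis=-1))

# Pairwise Manhattan (L1) distances: summed absolute differences
d_manhattan = np.abs(diff).sum(axis=-1)

# Both are valid metrics: symmetric with a zero diagonal, and the
# Euclidean distance never exceeds the Manhattan one (||v||_2 <= ||v||_1).
assert np.allclose(d_euclid, d_euclid.T)
assert np.all(d_euclid <= d_manhattan + 1e-12)
```

Manhattan distance weights each feature's deviation linearly rather than quadratically, which is why the two metrics can surface different discriminative features in high-dimensional clustering.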
10 pages, 259 KiB  
Perspective
Revisiting the Definition of Vectors—From ‘Magnitude and Direction’ to Abstract Tuples
by Reinout Heijungs
Foundations 2025, 5(2), 12; https://doi.org/10.3390/foundations5020012 - 15 Apr 2025
Viewed by 681
Abstract
Vectors are almost always introduced as objects having magnitude and direction. Following that idea, textbooks and courses introduce the concept of a vector norm and the angle between two vectors. While this is correct and useful for vectors in two- or three-dimensional Euclidean space, these concepts make no sense for more general vectors that are defined in abstract, non-metric vector spaces. This is the case even when an inner product exists. Here, we analyze how several textbooks are imprecise in presenting the restricted validity of the expressions for the norm and the angle. We also study one concrete example, the so-called ‘vector-based sustainability analytics’, in which scientists have gone astray by mistaking an abstract vector for a Euclidean vector. We recommend that future textbook authors introduce the distinction between vectors that do and do not have magnitude and direction, even in cases where an inner product exists.
(This article belongs to the Section Mathematical Sciences)
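The distinction the article draws can be made concrete in a few lines: norm and angle are meaningful for Euclidean vectors, while the same arithmetic applied to an abstract tuple with mixed units yields a number with no physical meaning (the (kg, EUR) tuple below is a hypothetical illustration, not taken from the article):

```python
import numpy as np

# A genuine Euclidean vector: norm and angle are well defined.
u = np.array([3.0, 4.0])
v = np.array([4.0, 3.0])

norm_u = np.linalg.norm(u)  # magnitude of u
cos_uv = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
angle = np.degrees(np.arccos(cos_uv))  # angle between u and v, in degrees

assert norm_u == 5.0

# For an abstract tuple mixing units -- say (mass in kg, cost in EUR) --
# the same arithmetic runs, but the "norm" adds kg^2 to EUR^2:
# the result is dimensionally incoherent, so the geometry does not transfer.
mixed = np.array([2.0, 150.0])       # hypothetical (kg, EUR) tuple
meaningless = np.linalg.norm(mixed)  # computes a number, but it has no unit
```

The point is not that the computation fails — it runs happily — but that its output has no interpretation, which is exactly the trap the article identifies in vector-based sustainability analytics.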
22 pages, 7027 KiB  
Article
Color Remote Sensing Image Restoration through Singular-Spectra-Derived Self-Similarity Metrics
by Xudong Xu, Zhihua Zhang and M. James C. Crabbe
Electronics 2023, 12(22), 4685; https://doi.org/10.3390/electronics12224685 - 17 Nov 2023
Viewed by 1331
Abstract
Color remote sensing images exhibit pronounced internal similarity, characterized by numerous repetitive local patterns, so the capacity to effectively harness these self-similarity features plays a key role in the enhancement of color images. The main novelty of this study lies in using an unusual technique (singular spectrum analysis) to derive brand-new similarity metrics inside the quaternion representation of color images and then incorporating these metrics into denoising algorithms. Color image denoising experiments demonstrated that, compared with seven mainstream image restoration algorithms (homomorphic filtering (HPF), wavelet transforms (WT), non-local means (NLM), non-local total variation (NLTV), the color adaptation of non-local means (NLMC), quaternion Euclidean metric (QNLM), and quaternion Euclidean metric total variation (QNLTV)), our algorithms with two novel self-similarity metrics achieved the highest peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), average gradient (AG), and information entropy (IE) values, with average increases of 1.98 dB/2.12 dB, 0.1168/0.1244, 1.824/1.897, and 0.158/0.135, respectively. Moreover, in complex mixed-noise scenarios, the two versions of our algorithms also achieved average increases of 0.382 dB/0.394 dB and 0.0207/0.0210 under Motion + Gaussian mixed noise, and of 0.129 dB/0.154 dB and 0.0154/0.0158 under Average + Gaussian mixed noise, compared with three quaternion-based restoration algorithms (QNLM, QNLTV, and quantization weighted nuclear norm minimization (QWNNM)).
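PSNR, the headline metric in these comparisons, has a one-line definition; a minimal sketch for images on a 0–255 scale (synthetic arrays, not the paper's data):

```python
import numpy as np

def psnr(clean, noisy, peak=255.0):
    """Peak signal-to-noise ratio in dB for images on a [0, peak] scale."""
    mse = np.mean((clean.astype(np.float64) - noisy.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.full((8, 8), 128.0)
noisy = clean + 10.0  # uniform error of 10 grey levels -> MSE = 100
print(round(psnr(clean, noisy), 2))  # 10*log10(255^2/100) ≈ 28.13 dB
```

Because PSNR is logarithmic, the 1.98–2.12 dB average gains reported above correspond to a roughly 37% reduction in mean squared error.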
12 pages, 1069 KiB  
Article
Development and Evaluation of Sedentary Time Cut-Points for the activPAL in Adults Using the GGIR R-Package
by Duncan S. Buchan and Julien S. Baker
Int. J. Environ. Res. Public Health 2023, 20(3), 2293; https://doi.org/10.3390/ijerph20032293 - 27 Jan 2023
Cited by 3 | Viewed by 2453
Abstract
The purpose of this study was to develop sedentary cut-points for the activPAL and evaluate their performance against a criterion measure (i.e., activPAL data processed by PALbatch). Part 1: Thirty-five adults (23.4 ± 3.6 years) completed 12 laboratory activities (6 sedentary and 6 non-sedentary). Receiver operating characteristic (ROC) curves proposed optimal Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) cut-points of 26.4 mg and 30.1 mg, respectively. Part 2: Thirty-eight adults (22.6 ± 4.1 years) wore an activPAL during free-living. Estimates from PALbatch and MAD revealed a mean percent error (MPE) of 2.2%, a mean absolute percent error (MAPE) of 6.5%, limits of agreement (LoA) of 19%, and absolute and relative equivalence zones of 5% and 0.3 SD. Estimates from PALbatch and ENMO revealed an MPE of −10.6%, a MAPE of 14.4%, LoA of 31%, and equivalence zones of 16% and 1 SD. After standing was isolated from sedentary behaviours, ROC analysis proposed an optimal cut-point of 21.9 mg (herein ENMOs). Estimates from PALbatch and ENMOs revealed an MPE of 3.1%, a MAPE of 7.5%, LoA of 25%, and equivalence zones of 9% and 0.5 SD. The MAD and ENMOs cut-points performed best in discriminating between sedentary and non-sedentary activity during free-living.
(This article belongs to the Special Issue Physical Activity and Health Behaviors)
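The ENMO and MAD metrics named above have standard definitions (as used in GGIR): ENMO is the Euclidean norm of the triaxial acceleration minus 1 g with negatives clipped to zero, and MAD is the mean absolute deviation of the vector magnitude within an epoch. A minimal sketch on synthetic accelerometer data (the cut-point value is taken from the abstract):

```python
import numpy as np

def enmo(acc_g):
    """Euclidean Norm Minus One: ||(x,y,z)||_2 - 1 g, negatives clipped to 0.

    acc_g: (n, 3) accelerometer samples in g; returns per-sample ENMO in g.
    """
    return np.maximum(np.linalg.norm(acc_g, axis=1) - 1.0, 0.0)

def mad(acc_g):
    """Mean Amplitude Deviation of the vector magnitude over one epoch."""
    r = np.linalg.norm(acc_g, axis=1)
    return np.mean(np.abs(r - r.mean()))

# A still sensor reads ~1 g of gravity, so ENMO ~ 0 -- classified as
# sedentary under the paper's 26.4 mg ENMO cut-point.
still = np.tile([0.0, 0.0, 1.0], (100, 1))
assert np.allclose(enmo(still), 0.0)
assert enmo(still).mean() * 1000 < 26.4  # in mg
```

Subtracting 1 g removes the gravitational component, which is why ENMO approximates movement-related acceleration only.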
18 pages, 26010 KiB  
Article
Non-Intrusive Load Monitoring Based on Swin-Transformer with Adaptive Scaling Recurrence Plot
by Yongtao Shi, Xiaodong Zhao, Fan Zhang and Yaguang Kong
Energies 2022, 15(20), 7800; https://doi.org/10.3390/en15207800 - 21 Oct 2022
Cited by 11 | Viewed by 2644
Abstract
Non-Intrusive Load Monitoring (NILM) is an effective energy consumption analysis technology that requires only voltage and current signals on the user bus. This non-invasive monitoring approach can clarify the working state of multiple loads in a building with fewer sensing devices, thus reducing the cost of energy consumption monitoring. In this paper, an NILM method combining adaptive Recurrence Plot (RP) feature extraction and deep-learning-based image recognition is proposed. First, the current time series is transformed into a threshold-free RP in phase space to obtain image features. The Euclidean norm in the threshold-free RP is scaled exponentially according to the voltage-current correlation to reflect the working characteristics of different loads adaptively. The resulting adaptive RP features are then mapped into images using the corresponding pixel values. In the load identification stage, an advanced computer vision deep network, the Hierarchical Vision Transformer using Shifted Windows (Swin-Transformer), is applied to identify the adaptive RP images. The proposed solution is extensively verified on four real measured load-signal datasets, covering industrial and household power situations and both single-phase and three-phase electrical signals. The numerical results demonstrate that the proposed adaptive-RP-based NILM method can effectively improve the accuracy of load detection.
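The threshold-free recurrence plot underlying this pipeline is just the matrix of pairwise Euclidean distances between delay-embedded points; a minimal sketch (the exponential voltage-current scaling is the authors' addition and is omitted here):

```python
import numpy as np

def unthresholded_rp(signal, dim=2, tau=1):
    """Threshold-free recurrence plot: pairwise Euclidean distances
    between delay-embedded phase-space points of a 1-D signal."""
    n = len(signal) - (dim - 1) * tau
    # Delay embedding: row i = (s[i], s[i+tau], ..., s[i+(dim-1)*tau])
    emb = np.stack([signal[i:i + n] for i in range(0, dim * tau, tau)],
                   axis=1)
    diff = emb[:, None, :] - emb[None, :, :]
    return np.linalg.norm(diff, axis=-1)

t = np.linspace(0, 4 * np.pi, 200)
rp = unthresholded_rp(np.sin(t))  # periodic signal -> diagonal structure

assert rp.shape == (199, 199)
assert np.allclose(np.diag(rp), 0.0)  # every point recurs with itself
```

A periodic current waveform produces parallel diagonal bands in the RP, which is the visual signature the image classifier then learns to distinguish between loads.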
10 pages, 357 KiB  
Article
Sleep, Sedentary Time and Physical Activity Levels in Children with Cystic Fibrosis
by Mayara S. Bianchim, Melitta A. McNarry, Alan R. Barker, Craig A. Williams, Sarah Denford, Anne E. Holland, Narelle S. Cox, Julianna Dreger, Rachel Evans, Lena Thia and Kelly A. Mackintosh
Int. J. Environ. Res. Public Health 2022, 19(12), 7133; https://doi.org/10.3390/ijerph19127133 - 10 Jun 2022
Cited by 6 | Viewed by 2911
Abstract
The aim of this study was to compare the use of generic and cystic fibrosis (CF)-specific cut-points to assess movement behaviours in children and adolescents with CF. Physical activity (PA) was assessed for seven consecutive days using a non-dominant wrist-worn ActiGraph GT9X in 71 children and adolescents (36 girls; 13.5 ± 2.9 years) with mild CF. CF-specific and generic Euclidean norm minus one (ENMO) cut-points were used to determine sedentary time (SED), sleep, light physical activity (LPA), moderate physical activity and vigorous physical activity. The effect of using a CF-specific or generic cut-point on the relationship between PA intensities and lung function was determined. Movement behaviours differed significantly according to the cut-point used, with the CF-specific cut-points resulting in less time asleep (−31.4 min; p < 0.01) and in LPA (−195.1 min; p < 0.001), and more SED and moderate-to-vigorous PA (159.3 and 67.1 min, respectively; both p < 0.0001) than the generic thresholds. Lung function was significantly associated with LPA according to the CF-specific cut-points (r = 0.52; p = 0.04). Thresholds developed for healthy populations misclassified PA levels, sleep and SED in children and adolescents with CF. This discrepancy affected the relationship between lung function and PA, which was only apparent when using the CF-specific cut-points. Promoting LPA seems a promising strategy to enhance lung function in children and adolescents with CF.
21 pages, 14021 KiB  
Article
Robust Spectral Clustering Incorporating Statistical Sub-Graph Affinity Model
by Zhenxian Lin, Jiagang Wang and Chengmao Wu
Axioms 2022, 11(6), 269; https://doi.org/10.3390/axioms11060269 - 5 Jun 2022
Cited by 2 | Viewed by 2587
Abstract
Hyperspectral image (HSI) clustering is a challenging task due to its high complexity. Subspace clustering has been proven to successfully excavate the intrinsic relationships between data points, while traditional subspace clustering methods ignore the inherent structural information between data points. This study uses graph convolutional subspace clustering (GCSC) for robust HSI clustering. The model remaps the self-expression of the data to non-Euclidean domains, which can generate a robust graph embedding dictionary. Its kernel variant, EKGCSC, can achieve a globally optimal closed-form solution by using a subspace clustering model with the Frobenius norm and a Gaussian kernel function, making it easier to implement, train, and apply. However, the presence of noise can have a noteworthy negative impact on segmentation performance. To diminish the impact of image noise, the concept of sub-graph affinity is introduced, where each node in the primary graph is modeled as a sub-graph describing the neighborhood around that node. A statistical sub-graph affinity matrix is then constructed from the statistical relationships between sub-graphs of connected nodes in the primary graph, thus counteracting image-noise uncertainty by using more information. The resulting model is named statistical sub-graph affinity kernel graph convolutional subspace clustering (SSAKGCSC). Experimental results on the Salinas, Indian Pines, Pavia Center, and Pavia University datasets showed that the SSAKGCSC model achieves improved segmentation performance and better noise resistance.
(This article belongs to the Special Issue Machine Learning: Theory, Algorithms and Applications)
25 pages, 5055 KiB  
Article
An Improved Image Filtering Algorithm for Mixed Noise
by Chun He, Ke Guo and Huayue Chen
Appl. Sci. 2021, 11(21), 10358; https://doi.org/10.3390/app112110358 - 4 Nov 2021
Cited by 12 | Viewed by 3393
Abstract
In recent years, image filtering has been a hot research direction in the field of image processing. Experts and scholars have proposed many methods for noise removal in images, and these methods have achieved quite good denoising results. However, most methods target a single noise type, such as Gaussian noise, salt-and-pepper noise, or multiplicative noise. For mixed noise removal, such as salt-and-pepper + Gaussian noise, although some methods are currently available, the denoising effect is not ideal, and there is still much room for improvement. To solve this problem, this paper proposes a filtering algorithm for mixed salt-and-pepper + Gaussian noise that combines an improved median filtering algorithm, an improved wavelet threshold denoising algorithm, and an improved Non-Local Means (NLM) algorithm. The algorithm makes full use of the advantages of the median filter in removing salt-and-pepper noise, and of the wavelet threshold denoising and NLM algorithms in filtering Gaussian noise. We first improved the three algorithms individually and then combined them into a new method for removing mixed noise. Specifically, we adjusted the window size of the median filtering algorithm and improved its noise-point detection; we improved the threshold function of the wavelet threshold algorithm, analyzed its mathematical characteristics, and derived an adaptive threshold; and for the NLM algorithm, we improved its Euclidean distance function and the corresponding distance weight function. To test the denoising effect, salt-and-pepper + Gaussian noise at different levels was added to the test images, and several state-of-the-art denoising algorithms were selected for comparison, including K-Singular Value Decomposition (KSVD), Non-locally Centralized Sparse Representation (NCSR), Structured Overcomplete Sparsifying Transform Model with Block Cosparsity (OCTOBOS), Trilateral Weighted Sparse Coding (TWSC), Block Matching and 3D Filtering (BM3D), and Weighted Nuclear Norm Minimization (WNNM). Experimental results show that our proposed algorithm achieves a Peak Signal-to-Noise Ratio (PSNR) about 2–7 dB higher than the above algorithms, and also performs better on Root Mean Square Error (RMSE), Structural Similarity (SSIM), and Feature Similarity (FSIM). In general, our algorithm offers better denoising performance, better restoration of image details and edge information, and stronger robustness than the above-mentioned algorithms.
(This article belongs to the Special Issue Soft Computing Application to Engineering Design)
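The distance weight function the paper modifies starts from the classic NLM form w(i, j) = exp(−‖P_i − P_j‖₂² / h²); a minimal sketch of that baseline (not the paper's improved version):

```python
import numpy as np

def nlm_weight(patch_i, patch_j, h):
    """Classic NLM similarity weight: exp(-||P_i - P_j||_2^2 / h^2).

    h controls how quickly the weight decays with patch dissimilarity;
    the paper's contribution modifies both this distance and this weight.
    """
    d2 = np.sum((patch_i - patch_j) ** 2)  # squared Euclidean patch distance
    return np.exp(-d2 / h ** 2)

a = np.ones((3, 3))
assert nlm_weight(a, a, h=10.0) == 1.0       # identical patches: full weight
assert nlm_weight(a, a + 5.0, h=10.0) < 1.0  # dissimilar: down-weighted
```

Each denoised pixel is then a weighted average of pixels whose surrounding patches look similar, which is what makes NLM effective against Gaussian noise but helpless against salt-and-pepper impulses — hence the median-filter stage that precedes it.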
16 pages, 3185 KiB  
Article
General Fitting Methods Based on Lq Norms and their Optimization
by George Livadiotis
Stats 2020, 3(1), 16-31; https://doi.org/10.3390/stats3010002 - 6 Jan 2020
Cited by 6 | Viewed by 3325
Abstract
The widely used fitting method of least squares is neither unique nor does it provide the most accurate results. Other fitting methods exist, differing in the metric norm used to express the total deviations between the given data and the fitted statistical model. The least-squares method is based on the Euclidean norm L2, while the alternative least absolute deviations method is based on the Taxicab norm, L1. In general, there are infinitely many fitting methods, based on the metric spaces induced by the Lq norms. The most accurate, and thus optimal, method is the one with (i) the highest sensitivity, given by the curvature at the minimum of the total deviations; (ii) the smallest errors of the fitting parameters; and (iii) the best goodness of fit. The first two criteria concern fitting methods where the given curve functions or datasets have no errors, while the third concerns fitting methods where the given data carry assigned errors.
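The family of Lq-based fitting methods can be sketched by minimizing the total deviations Σ|y − f(x)|^q; the toy below contrasts q = 2 with q = 1 on data with an outlier, using a deliberately simple brute-force search (an illustration of the idea, not the paper's optimization procedure):

```python
import numpy as np

def lq_total_deviation(params, x, y, q):
    """Total deviations sum |y - f(x)|^q for a straight-line model."""
    a, b = params
    return np.sum(np.abs(y - (a * x + b)) ** q)

# Data on the line y = 2x + 1, with a single gross outlier.
x = np.arange(10.0)
y = 2.0 * x + 1.0
y[9] += 50.0

# Brute-force slope search with the intercept held at its true value.
grid = np.linspace(0, 4, 401)
dev_q2 = [lq_total_deviation((a, 1.0), x, y, 2) for a in grid]
dev_q1 = [lq_total_deviation((a, 1.0), x, y, 1) for a in grid]
slope_q2 = grid[np.argmin(dev_q2)]  # least squares: pulled by the outlier
slope_q1 = grid[np.argmin(dev_q1)]  # least absolute deviations: robust

# The L1 fit recovers the true slope far more closely than the L2 fit.
assert abs(slope_q1 - 2.0) < abs(slope_q2 - 2.0)
```

The choice of q thus trades off sensitivity against robustness, which is exactly the dimension along which the paper compares the Lq fitting methods.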
13 pages, 605 KiB  
Article
Geometric Interpretation of Errors in Multi-Parametrical Fitting Methods Based on Non-Euclidean Norms
by George Livadiotis
Stats 2019, 2(4), 426-438; https://doi.org/10.3390/stats2040029 - 29 Oct 2019
Cited by 3 | Viewed by 2336
Abstract
The paper completes the multi-parametrical fitting methods based on metrics induced by the non-Euclidean Lq-norms by deriving the errors of the optimal parameter values. This was achieved using the geometric representation of the sum of residuals expanded near its minimum, together with the geometric interpretation of the errors. Typical fitting methods are mostly developed from Euclidean norms, leading to the traditional least-squares method. The theory of general fitting methods based on non-Euclidean norms, by contrast, is still under development; the normal equations provide the optimal values of the fitting parameters implicitly, and this paper completes the picture by clarifying the derivation and geometric meaning of the errors of those optimal values.
12 pages, 1541 KiB  
Article
A Robust Manifold Graph Regularized Nonnegative Matrix Factorization Algorithm for Cancer Gene Clustering
by Rong Zhu, Jin-Xing Liu, Yuan-Ke Zhang and Ying Guo
Molecules 2017, 22(12), 2131; https://doi.org/10.3390/molecules22122131 - 2 Dec 2017
Cited by 17 | Viewed by 5197
Abstract
Detecting genes with similar expression patterns using clustering techniques plays an important role in gene expression data analysis. Non-negative matrix factorization (NMF) is an effective method for clustering analysis of gene expression data. However, NMF-based methods operate in Euclidean space, which is usually inappropriate for revealing the intrinsic geometric structure of the data space. To overcome this shortcoming, Cai et al. proposed graph regularized non-negative matrix factorization (GNMF). Motivated by the topological structure of the GNMF-based method, we propose an improved GNMF that better captures the geometric structure of the data space. The resulting robust manifold non-negative matrix factorization (RM-GNMF) is designed for cancer gene clustering and enhances the robustness of the GNMF-based algorithm. We combine the l2,1-norm NMF with spectral clustering and conduct wide-ranging experiments on three well-known datasets. The clustering results indicate that the proposed method outperforms previous methods, demonstrating the application of the RM-GNMF-based method to cancer gene clustering.
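The l2,1-norm used as the robust penalty here is simply the sum of row-wise Euclidean norms; a minimal sketch:

```python
import numpy as np

def l21_norm(W):
    """l2,1 norm: sum over rows of the row-wise Euclidean (l2) norms.

    Penalizing it encourages entire rows to shrink to zero, which is why
    it appears in robust NMF variants as a structured sparsity term.
    """
    return np.sum(np.linalg.norm(W, axis=1))

W = np.array([[3.0, 4.0],   # row norm 5
              [0.0, 0.0],   # zero row contributes nothing
              [1.0, 0.0]])  # row norm 1
assert l21_norm(W) == 6.0
```

Unlike the Frobenius norm, which squares every entry, the l2,1 norm grows only linearly in each row's magnitude, so a few grossly corrupted samples (rows) cannot dominate the objective — the source of the method's robustness.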
22 pages, 480 KiB  
Article
Expectation Values and Variance Based on Lp-Norms
by George Livadiotis
Entropy 2012, 14(12), 2375-2396; https://doi.org/10.3390/e14122375 - 26 Nov 2012
Cited by 19 | Viewed by 6657
Abstract
This analysis introduces a generalization of the basic statistical concepts of expectation values and variance for non-Euclidean metrics induced by Lp-norms. The non-Euclidean Lp means are defined by exploiting the fundamental property of minimizing the Lp deviations that compose the Lp variance. These Lp expectation values embody a generic formal scheme of means characterization. With the p-norm as a free parameter, both the Lp-normed expectation values and their variance are flexible enough to analyze new phenomena that cannot be described by classical statistics based on Euclidean norms. The new statistical approach provides insights into regression theory and statistical physics. Several illuminating examples are examined.
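The Lp expectation value defined in the abstract is the value μ minimizing the summed Lp deviations Σ|x_i − μ|^p; the brute-force sketch below recovers the familiar special cases — the arithmetic mean at p = 2 and the median at p = 1:

```python
import numpy as np

def lp_mean(x, p, grid=None):
    """Lp expectation value: the mu minimizing sum |x_i - mu|^p,
    found here by brute force over a dense grid (illustrative only)."""
    if grid is None:
        grid = np.linspace(x.min(), x.max(), 100001)
    dev = np.abs(x[:, None] - grid[None, :]) ** p
    return grid[np.argmin(dev.sum(axis=0))]

x = np.array([1.0, 2.0, 3.0, 10.0])
m2 = lp_mean(x, 2)  # ~ arithmetic mean (4.0), pulled toward the outlier
m1 = lp_mean(x, 1)  # ~ median, anywhere in [2, 3]; outlier-resistant

assert abs(m2 - x.mean()) < 1e-3
assert 2.0 <= m1 <= 3.0
```

Intermediate values of p interpolate between these two behaviors, which is what makes the p-norm a useful free parameter for phenomena that classical Euclidean statistics cannot capture.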