Search Results (23)

Search Parameters:
Keywords = BINAR(1) model

17 pages, 12183 KiB  
Article
Triplanar Point Cloud Reconstruction of Head Skin Surface from Computed Tomography Images in Markerless Image-Guided Surgery
by Jurica Cvetić, Bojan Šekoranja, Marko Švaco and Filip Šuligoj
Bioengineering 2025, 12(5), 498; https://doi.org/10.3390/bioengineering12050498 - 8 May 2025
Viewed by 567
Abstract
Accurate preoperative image processing in markerless image-guided surgeries is an important task. However, preoperative planning highly depends on the quality of medical imaging data. In this study, a novel algorithm for outer skin layer extraction from head computed tomography (CT) scans is presented and evaluated. Axial, sagittal, and coronal slices are processed separately to generate spatial data. Each slice is binarized using manually defined Hounsfield unit (HU) range thresholding to create binary images from which valid contours are extracted. The individual points of each contour are then projected into three-dimensional (3D) space using slice spacing and origin information, resulting in uniplanar point clouds. These point clouds are then fused through geometric addition into a single enriched triplanar point cloud. A two-step downsampling process is applied, first at the uniplanar level and then after merging, using a voxel size of 1 mm. Across two independent datasets with a total of 83 individuals, the merged cloud approach yielded an average of 11.61% more unique points compared to the axial cloud. The validity of the triplanar point cloud reconstruction was confirmed by a root mean square (RMS) registration error of 0.848 ± 0.035 mm relative to the ground truth models. These results establish the proposed algorithm as robust and accurate across different CT scanners and acquisition parameters, supporting its potential integration into patient registration for markerless image-guided surgeries. Full article
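As a rough illustration of the per-slice processing described above, the sketch below binarizes one axial slice with a fixed HU window and lifts the resulting contour pixels into 3D patient coordinates. The HU window, array layout, and function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def slice_to_points(slice_hu, origin, spacing, hu_min=-200.0, hu_max=-30.0):
    """Binarize one CT slice with a fixed HU range, then lift the
    boundary pixels of the mask into 3D patient coordinates.

    `origin` is the (x, y, z) position of pixel (0, 0); `spacing` is the
    (row, column) pixel spacing in mm.  Both names are illustrative."""
    mask = (slice_hu >= hu_min) & (slice_hu <= hu_max)
    # A pixel is on the contour if it is set but has an unset 4-neighbour.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    contour = mask & ~interior
    rows, cols = np.nonzero(contour)
    # Project (row, col) into 3D: an axial slice lies in a constant-z plane.
    pts = np.stack([origin[0] + cols * spacing[1],
                    origin[1] + rows * spacing[0],
                    np.full(rows.shape, origin[2])], axis=1)
    return pts
```

Running the same routine over sagittal and coronal stacks (with the constant coordinate changed accordingly) and concatenating the results would give the fused triplanar cloud before downsampling.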
(This article belongs to the Special Issue Advancements in Medical Imaging Technology)

21 pages, 83210 KiB  
Article
Digital Empowerment: The Sustainable Development of Chengdu Lacquerware’s Colors and Decorations
by Jianhua Lyu, Qin Xu, Chuxiao Hu and Ming Chen
Appl. Sci. 2025, 15(9), 5065; https://doi.org/10.3390/app15095065 - 2 May 2025
Viewed by 554
Abstract
The preservation and innovation of traditional craftsmanship under industrialization pressures constitute critical challenges for cultural sustainability. Focusing on Chengdu lacquerware—a Chinese intangible cultural heritage facing multifaceted preservation dilemmas—this study develops a digital methodology for its systematic documentation and contemporary adaptation. Through computational analysis of 307 historical artifacts spanning four craftsmanship categories (carved silver mercer, carved lacquer hidden flower, carved filling, and broach needle carving), we established a three-phase digital preservation framework: (1) image preprocessing of 280 qualified samples using adaptive binarization and Canny edge detection for ornament extraction, (2) chromatic analysis via two-stage K-means clustering to decode traditional color schemes, and (3) creation of a digital repository encompassing color profiles and ornamental elements. The resource library facilitated three practical applications: modular recombination of high-frequency motifs, cross-media design adaptations, and interactive visualization of craftsmanship processes. Technical analysis confirmed that adaptive binarization effectively mitigated image noise compared to conventional methods, while secondary clustering enhanced color scheme representativeness. These advancements demonstrate that structured digital archiving coupled with computational analysis can reconcile traditional aesthetics with modern design requirements without compromising cultural authenticity. The workflow provides a transferable model for intangible heritage preservation, emphasizing rigorous documentation alongside adaptive reuse mechanisms. Full article
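The chromatic-analysis step rests on K-means clustering of pixel colours. A minimal single-stage sketch follows; the paper's two-stage scheme is not reproduced, and the function name and initialisation are assumptions:

```python
import numpy as np

def kmeans_palette(pixels, k=5, iters=20):
    """Tiny Lloyd's k-means over an (n, 3) array of RGB pixels; the k
    centres are the extracted colour scheme."""
    # Deterministic spread-out initialisation (adequate for a sketch).
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centres = pixels[idx].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then move centres.
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    # Final assignment against the converged centres.
    labels = np.linalg.norm(pixels[:, None, :] - centres[None, :, :],
                            axis=2).argmin(axis=1)
    return centres, labels
```

A second clustering pass over the first-stage centres, as the abstract describes, would then condense them into a representative colour scheme.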

12 pages, 1488 KiB  
Article
Batchnorm-Free Binarized Deep Spiking Neural Network for a Lightweight Machine Learning Model
by Hasna Nur Karimah, Chankyu Lee and Yeongkyo Seo
Electronics 2025, 14(8), 1602; https://doi.org/10.3390/electronics14081602 - 16 Apr 2025
Viewed by 487
Abstract
The development of deep neural networks, although demonstrating astounding capabilities, leads to more complex models, high energy consumption, and expensive hardware costs. While network quantization is a widely used method to address this problem, typical binary neural networks often require a batch normalization (batchnorm) layer to preserve their classification performance. The batchnorm layer involves full-precision multiplication and addition operations that require extra hardware and memory access. To address this issue, we present a batch normalization-free binarized deep spiking neural network (B-SNN). We combine spike-based backpropagation in a spiking neural network with weight binarization to further reduce the memory and computation overhead while maintaining comparable accuracy. Weight binarization reduces the large memory footprint of the network parameters by replacing full-precision weights (32-bit) with binary weights (1-bit). Moreover, the proposed B-SNN employs a stochastic input encoding scheme together with a spiking neuron model, thereby enabling the network to perform efficient bitwise computations without needing a batchnorm layer. As a result, our experiments demonstrate that the proposed binarization scheme for deep SNNs outperforms the conventional binarized convolutional neural network. Full article
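The 32-bit-to-1-bit weight replacement can be illustrated with the common sign-plus-scale scheme; this is a generic sketch, not the B-SNN's exact training rule, and the names below are assumptions:

```python
import numpy as np

def binarize_weights(w):
    """XNOR-Net-style weight binarization: replace a full-precision
    tensor by sign(w) plus a single per-tensor scaling factor
    alpha = mean(|w|), so storage drops from 32 bits to 1 bit per weight."""
    alpha = np.abs(w).mean()
    wb = np.where(w >= 0, 1.0, -1.0)
    return wb, alpha

# In the forward pass, w @ x is approximated by alpha * (wb @ x); the
# product wb @ x needs only additions and subtractions (XNOR-popcount
# in binary hardware), which is what removes the multiplier cost.
```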

25 pages, 400 KiB  
Article
A Flexible Bivariate Integer-Valued Autoregressive of Order (1) Model for Over- and Under-Dispersed Time Series Applications
by Naushad Mamode Khan and Yuvraj Sunecher
Stats 2025, 8(1), 22; https://doi.org/10.3390/stats8010022 - 12 Mar 2025
Viewed by 608
Abstract
In real-life inter-related time series, the counting responses of different entities are commonly influenced by some time-dependent covariates, while the individual counting series may exhibit different levels of mutual over- or under-dispersion or mixed levels of over- and under-dispersion. In the current literature, there is still no flexible bivariate time series process that can model series of data of such types. This paper introduces a bivariate integer-valued autoregressive of order 1 (BINAR(1)) model with COM-Poisson innovations under time-dependent moments that can accommodate different levels of over- and under-dispersion. Another particularity of the proposed model is that the cross-correlation between the series is induced locally by relating the current observation of one series with the previous-lagged observation of the other series. The estimation of the model parameters is conducted via a Generalized Quasi-Likelihood (GQL) approach. The proposed model is applied to different real-life series problems in Mauritius, including transport, finance, and socio-economic sectors. Full article
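A toy simulation of a BINAR(1) recursion with binomial thinning and cross-lagged innovation means conveys the model structure. Poisson innovations stand in for the COM-Poisson distribution here, and all parameter values and names are illustrative:

```python
import numpy as np

def simulate_binar1(n, a1=0.4, a2=0.3, lam=(2.0, 3.0), seed=0):
    """Simulate a toy BINAR(1): X_{i,t} = a_i o X_{i,t-1} + e_{i,t},
    where 'o' is binomial thinning.  Cross-correlation is induced
    locally by letting each innovation mean load on the *other*
    series' previous observation, as the abstract describes."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n, 2), dtype=int)
    for t in range(1, n):
        # Binomial thinning: each of the X_{t-1} counts survives w.p. a_i.
        s1 = rng.binomial(x[t - 1, 0], a1)
        s2 = rng.binomial(x[t - 1, 1], a2)
        # Innovations whose means borrow from the other lagged series.
        e1 = rng.poisson(lam[0] + 0.1 * x[t - 1, 1])
        e2 = rng.poisson(lam[1] + 0.1 * x[t - 1, 0])
        x[t] = (s1 + e1, s2 + e2)
    return x
```

Replacing the Poisson draws with a COM-Poisson sampler would allow the innovation dispersion levels to differ between the two series, which is the flexibility the paper targets.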

24 pages, 2716 KiB  
Article
A Multiscale CNN-Based Intrinsic Permeability Prediction in Deformable Porous Media
by Yousef Heider, Fadi Aldakheel and Wolfgang Ehlers
Appl. Sci. 2025, 15(5), 2589; https://doi.org/10.3390/app15052589 - 27 Feb 2025
Cited by 2 | Viewed by 942
Abstract
This work introduces a novel application for predicting the macroscopic intrinsic permeability tensor in deformable porous media, using a limited set of μ-CT images of real microgeometries. The primary goal is to develop an efficient, machine learning (ML)-based method that overcomes the limitations of traditional permeability estimation techniques, which often rely on time-consuming experiments or computationally expensive fluid dynamics simulations. The novelty of this work lies in leveraging convolutional neural networks (CNNs) to predict pore-fluid flow behavior under deformation and anisotropic flow conditions. The approach utilizes binarized CT images of porous microstructures to predict the permeability tensor, a crucial parameter in continuum porous media flow modeling. The methodology involves four steps: (1) constructing a dataset of CT images from Bentheim sandstone at varying volumetric strain levels; (2) conducting pore-scale flow simulations using the lattice Boltzmann method (LBM) to obtain permeability data; (3) training the CNN model with processed CT images as inputs and permeability tensors as outputs; and (4) employing techniques like data augmentation to enhance model generalization. Examples demonstrate the CNN’s ability to accurately predict the permeability tensor in connection with the deformation state through the porosity parameter. A source code has been made available as open access. Full article
(This article belongs to the Special Issue Machine Learning in Multi-scale Modeling)

21 pages, 1906 KiB  
Article
BinVPR: Binary Neural Networks towards Real-Valued for Visual Place Recognition
by Junshuai Wang, Junyu Han, Ruifang Dong and Jiangming Kan
Sensors 2024, 24(13), 4130; https://doi.org/10.3390/s24134130 - 25 Jun 2024
Viewed by 1664
Abstract
Visual Place Recognition (VPR) aims to determine whether a robot or visual navigation system is located in a previously visited place using visual information. It is an essential technology and a challenging problem in the computer vision and robotics communities. Recently, numerous works have demonstrated that the performance of Convolutional Neural Network (CNN)-based VPR is superior to that of traditional methods. However, with their huge number of parameters, these CNN models require large memory storage, which is a great challenge for mobile robot platforms equipped with limited resources. Fortunately, Binary Neural Networks (BNNs) can reduce memory consumption by converting weights and activation values from 32-bit to 1-bit. But current BNNs often suffer from vanishing gradients and a marked drop in accuracy. Therefore, this work proposes a BinVPR model to handle these issues. The solution is twofold. Firstly, a feature restoration strategy was explored that adds features into the later convolutional layers to mitigate the gradient-vanishing problem during training; from this, we identified two principles: restore basic features, and restore them from higher to lower layers. Secondly, considering that the marked drop in accuracy results from gradient mismatch during backpropagation, this work optimized the combination of binarized activation and binarized weight functions in the Larq framework, and the best combination was obtained. The performance of BinVPR was validated on public datasets. The experimental results show that it outperforms state-of-the-art BNN-based approaches and the full-precision AlexNet and ResNet networks in terms of both recognition accuracy and model size. It is worth mentioning that BinVPR achieves the same accuracy with only 1% and 4.6% of the model sizes of AlexNet and ResNet, respectively. Full article
(This article belongs to the Section Navigation and Positioning)

16 pages, 4474 KiB  
Article
Viscoelastic Analysis of Asphalt Concrete with a Digitally Reconstructed Microstructure
by Marek Klimczak
Materials 2024, 17(10), 2443; https://doi.org/10.3390/ma17102443 - 18 May 2024
Cited by 2 | Viewed by 1278
Abstract
In the finite element analysis of asphalt concrete (AC), it is nowadays common to incorporate information from the underlying scales to study the overall response of this material. Heterogeneity observed at the asphalt mixture scale is analyzed in this paper. Reliable finite element analysis (FEA) of asphalt concrete comprises a set of complex issues. The two main aspects of asphalt concrete FEA discussed in this study are: (1) digital reconstruction of the asphalt pavement microstructure by processing high-quality images; and (2) FEA of idealized asphalt concrete samples accounting for a viscoelastic material model. Reconstruction of the asphalt concrete microstructure is performed using a sequence of image processing operations (binarization, hole removal, filtering, segmentation, and boundary detection). The geometry of the inclusions (aggregate) is additionally simplified in a controlled manner to reduce the numerical cost of the analysis. As is demonstrated in the study, the introduced geometry simplifications are justified: the computational cost reduction exceeds, by several orders of magnitude, the additional modeling error introduced by the applied simplification technique. Viscoelastic finite element analysis of the identified AC microstructure is performed using the Burgers material model. The analysis algorithm is briefly described with a particular focus on computational efficiency. In order to illustrate the proposed approach, a set of 2D problems is solved. Numerical results confirm both the effectiveness of the self-developed code and the applicability of the Burgers model to the analyzed class of AC problems. Further research directions are also outlined to highlight the potential benefits of the developed approach to numerical modeling of asphalt concrete. Full article
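For reference, the Burgers model's creep compliance, which governs the viscoelastic response under constant stress, can be evaluated directly. The parameter names and the values in the test are illustrative, not calibrated to asphalt concrete:

```python
import numpy as np

def burgers_creep_compliance(t, E1, eta1, E2, eta2):
    """Creep compliance J(t) of the Burgers model: a Maxwell element
    (spring E1, dashpot eta1) in series with a Kelvin-Voigt element
    (E2, eta2).  Under constant stress sigma0, strain(t) = sigma0 * J(t):
        J(t) = 1/E1 + t/eta1 + (1 - exp(-E2 * t / eta2)) / E2
    """
    t = np.asarray(t, dtype=float)
    return 1.0 / E1 + t / eta1 + (1.0 - np.exp(-E2 * t / eta2)) / E2
```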
(This article belongs to the Special Issue Experimental Tests and Numerical Analysis of Construction Materials)

13 pages, 896 KiB  
Article
On Comparing and Assessing Robustness of Some Popular Non-Stationary BINAR(1) Models
by Yuvraj Sunecher and Naushad Mamode Khan
J. Risk Financial Manag. 2024, 17(3), 100; https://doi.org/10.3390/jrfm17030100 - 28 Feb 2024
Cited by 1 | Viewed by 1568
Abstract
Intra-day transactions of stocks from competing firms in the financial markets are known to exhibit significant volatility and over-dispersion. This paper proposes some bivariate integer-valued auto-regressive models of order 1 (BINAR(1)) that are useful for analyzing such financial series. These models were constructed under both time-variant and time-invariant conditions to capture features such as over-dispersion and non-stationarity in time series of counts. However, the quest for the most robust BINAR(1) models is still ongoing. This paper considers specifically the family of BINAR(1)s with a non-diagonal cross-correlation structure and with unpaired innovation series. These assumptions reduce the number of parameters to be estimated. Simulation experiments are performed to assess both the consistency of the estimators and the robustness of the BINAR(1)s under mis-specified innovation distributions. The proposed BINAR(1)s are applied to analyze the intra-day transaction series of AstraZeneca and Ericsson. Diagnostic measures such as root mean square errors (RMSEs) and Akaike information criteria (AICs) are also considered. The paper concludes that the BINAR(1)s with negative binomial and COM–Poisson innovations are among the most suitable models for analyzing over-dispersed intra-day transaction series of stocks. Full article
(This article belongs to the Special Issue Financial Valuation and Econometrics)

27 pages, 14614 KiB  
Article
Crack Segmentation Extraction and Parameter Calculation of Asphalt Pavement Based on Image Processing
by Zhongbo Li, Chao Yin and Xixuan Zhang
Sensors 2023, 23(22), 9161; https://doi.org/10.3390/s23229161 - 14 Nov 2023
Cited by 12 | Viewed by 2639
Abstract
Crack disease is one of the most serious and common forms of pavement distress. Traditional manual crack measurement methods can no longer meet the needs of road crack detection. In previous work, the authors proposed a crack detection method for asphalt pavements based on an improved YOLOv5s model, which performs well in detecting various types of cracks in asphalt pavements. However, most current research on automatic pavement crack detection still focuses on the crack identification and localization stages, which contributes little to practical engineering applications. To remedy these shortcomings and improve its contribution to practical engineering applications, this paper proposes a method for segmenting and analyzing asphalt pavement cracks and computing their parameters based on image processing. The first step is to extract the crack profile through image grayscaling, histogram equalization, segmented linear transformation, median filtering, Sauvola binarization, and the connected domain threshold method; the magnification between the pixel area and the actual area of a calibration object is then calculated. The second step is to extract the skeleton from the crack profile images using the Zhang–Suen thinning algorithm, followed by removing the burrs of the crack skeleton image using the connected domain threshold method. The final step is to calculate physical parameters, such as the actual area, width, number of segments, and length of the crack, from the crack profile and skeleton images. The results show that (1) the combination of local thresholding and connected domain thresholding can completely filter noise regions while retaining detailed crack region information; (2) the Zhang–Suen iterative refinement algorithm extracts the crack skeleton of asphalt pavement quickly and retains the foreground features of the image well, while the connected-domain thresholding method eliminates residual isolated noise; and (3) compared to the manual calibration method, the proposed crack parameter calculation method can compute crack length, width, and area within an allowable margin of error. On the basis of this research, a windowing system for asphalt pavement crack detection, WSPCD1.0, was developed. It integrates the research results from this paper, facilitating automated detection and parameter output for asphalt pavement cracks. Full article
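The final parameter-calculation step can be sketched from a binary crack mask, its skeleton, and the calibration factor. This is a simplified version under the stated approximations, not the authors' code:

```python
import numpy as np

def crack_parameters(mask, skeleton, mm_per_px):
    """Recover physical crack parameters from a binary crack mask and
    its one-pixel-wide skeleton, given the pixel-to-mm calibration.
    Length here counts skeleton pixels, which slightly underestimates
    diagonal runs; mean width follows from area / length."""
    area_mm2 = mask.sum() * mm_per_px ** 2      # crack area
    length_mm = skeleton.sum() * mm_per_px      # centreline length
    mean_width_mm = area_mm2 / length_mm if length_mm else 0.0
    return area_mm2, length_mm, mean_width_mm
```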
(This article belongs to the Section Industrial Sensors)

8 pages, 283 KiB  
Proceeding Paper
A Novel Unconstrained Geometric BINAR(1) Model
by Sunecher Yuvraj and Mamode Khan Naushad
Eng. Proc. 2023, 39(1), 52; https://doi.org/10.3390/engproc2023039052 - 5 Jul 2023
Viewed by 868
Abstract
Modelling the non-stationary unconstrained bivariate integer-valued autoregressive of order 1 (NSUBINAR(1)) model is challenging due to the complex cross-correlation relationship between the counting series. Hence, this paper introduces a novel non-stationary unconstrained BINAR(1) with geometric marginals (NSUBINAR(1)GEOM), based on the assumption that both counting series are influenced by the same time-dependent explanatory variables. The generalized quasi-likelihood (GQL) estimation method is used to estimate the regression and dependence parameters. Monte Carlo simulations and an application to real-life accident series data are presented. Full article
(This article belongs to the Proceedings of The 9th International Conference on Time Series and Forecasting)
19 pages, 9285 KiB  
Article
CLIP-Based Adaptive Graph Attention Network for Large-Scale Unsupervised Multi-Modal Hashing Retrieval
by Yewen Li, Mingyuan Ge, Mingyong Li, Tiansong Li and Sen Xiang
Sensors 2023, 23(7), 3439; https://doi.org/10.3390/s23073439 - 24 Mar 2023
Cited by 16 | Viewed by 3820
Abstract
With the proliferation of multi-modal data generated by various sensors, unsupervised multi-modal hashing retrieval has been extensively studied due to its advantages in storage, retrieval efficiency, and label independence. However, there are still two obstacles to existing unsupervised methods: (1) As existing methods cannot fully capture the complementary and co-occurrence information of multi-modal data, existing methods suffer from inaccurate similarity measures. (2) Existing methods suffer from unbalanced multi-modal learning and data semantic structure being corrupted in the process of hash codes binarization. To address these obstacles, we devise an effective CLIP-based Adaptive Graph Attention Network (CAGAN) for large-scale unsupervised multi-modal hashing retrieval. Firstly, we use the multi-modal model CLIP to extract fine-grained semantic features, mine similar information from different perspectives of multi-modal data and perform similarity fusion and enhancement. In addition, this paper proposes an adaptive graph attention network to assist the learning of hash codes, which uses an attention mechanism to learn adaptive graph similarity across modalities. It further aggregates the intrinsic neighborhood information of neighboring data nodes through a graph convolutional network to generate more discriminative hash codes. Finally, this paper employs an iterative approximate optimization strategy to mitigate the information loss in the binarization process. Extensive experiments on three benchmark datasets demonstrate that the proposed method significantly outperforms several representative hashing methods in unsupervised multi-modal retrieval tasks. Full article
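The binarization and retrieval steps at the core of hashing can be sketched as sign-thresholding followed by Hamming-distance ranking. This is a generic illustration of the final stage, not the CAGAN pipeline, and the function names are assumptions:

```python
import numpy as np

def to_hash_codes(features):
    """Binarize continuous codes with a sign threshold -- the step where
    semantic structure can be corrupted, motivating the paper's
    iterative approximate optimization.  Returns codes in {0, 1}."""
    return (features > 0).astype(np.uint8)

def hamming_rank(query, database):
    """Rank database items by Hamming distance to the query code;
    returns the ranking and the per-item distances."""
    d = (query[None, :] != database).sum(axis=1)
    return np.argsort(d, kind="stable"), d
```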
(This article belongs to the Special Issue Multi-Modal Data Sensing and Processing)

18 pages, 10131 KiB  
Article
A Novel Modeling Approach for Soil and Rock Mixture and Applications in Tunnel Engineering
by Xiujie Zhang, Hongzhong Li, Kaiyan Xu, Wenwei Yang, Rongtao Yan, Zhanwu Ma, Yonghong Wang, Zhihua Su and Haizhi Wu
Sustainability 2023, 15(4), 3077; https://doi.org/10.3390/su15043077 - 8 Feb 2023
Cited by 1 | Viewed by 1789
Abstract
Soil and rock mixtures are complicated geomaterials that are characterized by both continuity and discontinuity. A homogeneous model cannot take into consideration the interactions between rocks and soil, which could lead to misjudgments of the mechanical properties. To simulate the mechanical responses of soil and rock mixtures accurately, a stochastic generation approach for soil and rock mixtures was developed systematically in this study. The proposed approach includes the following three major steps: (1) a combined image filtering technique and multi-threshold binarization method were developed to extract rock block files from raw images; (2) the shapes and sizes of block profiles were controlled and reconstructed randomly using Fourier analysis; and (3) a fast overlapping-detection strategy was proposed to allocate the rock blocks efficiently. Finally, models of soil and rock mixtures with a specific rock proportion can be generated. To validate the proposed approach, numerical models were established in tunnel engineering using the conventional homogeneous method and the proposed numerical method, respectively. In addition, a series of field tests on tunnel deformation and stress were conducted. The results showed that the proposed heterogeneous numerical model captures the mechanical response of soil and rock mixtures well and is much more effective and accurate than the conventional homogeneous approach. Using the proposed numerical approach, the failure mechanism of a tunnel in a soil and rock mixture is discussed, and a reinforcement strategy for the surrounding rocks is proposed. The field test results indicate that tunnel lining stress can be kept within the strength criterion by the proposed reinforcement strategy. Full article
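Step (2), Fourier-based control of block profiles, can be illustrated for star-shaped blocks by low-pass filtering the radius function r(θ). The harmonic count and the star-shape assumption are mine, not the paper's:

```python
import numpy as np

def fourier_profile(radii, n_harmonics=8):
    """Smooth or reconstruct a star-shaped block profile r(theta),
    sampled at equal angles, by keeping only the lowest Fourier
    harmonics.  Randomising the retained coefficients (not shown)
    would generate new, statistically similar block shapes."""
    spec = np.fft.rfft(radii)
    spec[n_harmonics:] = 0.0          # drop high-frequency roughness
    return np.fft.irfft(spec, n=len(radii))
```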
(This article belongs to the Special Issue The Development of Underground Projects in Urban Areas)

19 pages, 5386 KiB  
Article
A Cascaded Individual Cow Identification Method Based on DeepOtsu and EfficientNet
by Ruihong Zhang, Jiangtao Ji, Kaixuan Zhao, Jinjin Wang, Meng Zhang and Meijia Wang
Agriculture 2023, 13(2), 279; https://doi.org/10.3390/agriculture13020279 - 23 Jan 2023
Cited by 19 | Viewed by 3687
Abstract
Precision dairy farming technology is widely used to improve management efficiency and reduce costs in large-scale dairy farms. Machine vision systems are non-contact technologies for obtaining individual and behavioral information from animals. However, the accuracy of image-based individual identification of dairy cows is still inadequate, which limits the application of machine vision technologies in large-scale dairy farms. There are three key problems in image- and biometrics-based dairy cattle identification: (1) the biometrics of different dairy cattle may be similar; (2) the complex shooting environment leads to instability in image quality; and (3) for end-to-end identification methods, the identity of each cow corresponds to a pattern, so an increase in the number of cows leads to a rapid increase in the number of outputs and parameters of the identification model. To solve these problems, this paper proposes a cascaded individual cow identification method based on DeepOtsu and EfficientNet, which achieves a breakthrough in group identification accuracy and speed through binarization and cascaded classification of cow body pattern images. The specific implementation steps are as follows. First, the YOLOX model was used to locate the trunk of the cow in side-view walking images to obtain the body pattern image, and the DeepOtsu model was then used to binarize it. After that, primary classification was carried out according to the proportion of black pixels in the binary image; then, for each subcategory obtained by the primary classification, the EfficientNet-B1 model was used for secondary classification to achieve accurate and rapid identification. A total of 11,800 side-view walking images of 118 cows were used to construct the dataset; the training, validation, and test sets were constructed at a ratio of 5:3:2. The test results showed that the binarization segmentation accuracy on the body pattern images is 0.932, and the overall accuracy of the individual cow identification method is 0.985. The total processing time for a single image is 0.433 s. The proposed method outperforms end-to-end individual cow identification methods in terms of efficiency and training speed. This study provides a new method for the identification of individual dairy cattle in large-scale dairy farms. Full article

22 pages, 8285 KiB  
Article
Research on Artificial Intelligence in New Year Prints: The Application of the Generated Pop Art Style Images on Cultural and Creative Products
by Bolun Zhang and Nurul Hanim Romainoor
Appl. Sci. 2023, 13(2), 1082; https://doi.org/10.3390/app13021082 - 13 Jan 2023
Cited by 33 | Viewed by 8616
Abstract
Chinese New Year prints constitute a significant component of the country's cultural heritage and folk art. Yangliuqing New Year prints are the most important and widely circulated of all the different kinds of New Year prints. Due to a variety of factors, including societal change, industrial restructuring, and economic development, New Year prints, which were deeply rooted in agricultural society, have been adversely impacted and have even reached the brink of disappearance. With protection and effort from the government and researchers, New Year prints can finally be preserved. However, underlying problems remain, such as receiving little attention, a singular product form, and being unable to keep up with the times, especially among the younger generation. In this paper, the researchers first processed Yangliuqing New Year prints through a GANs model. Then, the image was binarized and segmented, colors were extracted from images in a Pop art dataset using the K-Means algorithm, and the binarized, segmented image was colorized accordingly. Finally, usable high-quality Pop art style Yangliuqing New Year prints were generated, and the generated images were used in the development of cultural and creative products. Questionnaires were then distributed based on an empirical research scale. The results of this study are as follows: 1. The method proposed in this study can generate high-quality Pop art style New Year prints. 2. Pop art style New Year print images used in the design of cultural and creative products are popular among the younger generation, who show a strong propensity to purchase them. This study addresses the problems encountered in the current cultural heritage of New Year prints, broadens artistic expression forms and product categories, and provides research ideas for cultural heritage of the same type facing similar problems. In the future, the researchers will continue to explore the incorporation of AI technology into New Year prints to stimulate the vitality of traditional cultural heritage. Full article
(This article belongs to the Special Issue Advanced Technologies in Digitizing Cultural Heritage)

24 pages, 51747 KiB  
Article
Interpretable Assessment of ST-Segment Deviation in ECG Time Series
by Israel Campero Jurado, Andrejs Fedjajevs, Joaquin Vanschoren and Aarnout Brombacher
Sensors 2022, 22(13), 4919; https://doi.org/10.3390/s22134919 - 29 Jun 2022
Cited by 7 | Viewed by 3663
Abstract
Nowadays, even with all the tremendous advances in medicine and health protocols, cardiovascular diseases (CVD) continue to be one of the major causes of death. In the present work, we focus on a specific abnormality: ST-segment deviation, which occurs regularly in high-performance athletes and elderly people, serving as a myocardial infarction (MI) indicator. It is usually diagnosed manually by experts, through visual interpretation of the printed electrocardiography (ECG) signal. We propose a methodology to detect ST-segment deviation and quantify its scale up to 1 mV by extracting statistical, point-to-point beat characteristics and signal quality indexes (SQIs) from single-lead ECG. We do so by applying automated machine learning methods to find the best hyperparameter configuration for classification and regression models. For validation of our method, we use the ST-T database from Physionet; the results show that our method obtains 98.30% accuracy in the case of a multiclass problem and 99.87% accuracy in the case of binarization. Full article
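A minimal point-to-point sketch of measuring ST deviation against the PR-interval baseline, assuming fixed windows around the R peak. The window offsets are illustrative defaults, not the paper's learned features:

```python
import numpy as np

def st_deviation_mv(beat, fs, r_idx, pr_win=(-0.20, -0.12), st_win=(0.06, 0.10)):
    """Estimate ST-segment deviation for one beat as the mean amplitude
    in a post-R window minus the PR-interval baseline.  `beat` is the
    single-lead signal in mV, `fs` the sampling rate in Hz, `r_idx` the
    R-peak sample index; window bounds are in seconds relative to R."""
    def mean_in(win):
        lo = r_idx + int(round(win[0] * fs))
        hi = r_idx + int(round(win[1] * fs))
        return beat[lo:hi].mean()
    return mean_in(st_win) - mean_in(pr_win)
```

Features like this, pooled over beats and combined with signal quality indexes, are the kind of inputs the abstract's automated model search would consume.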
(This article belongs to the Special Issue Biosignal Sensing and Processing for Clinical Diagnosis)
