Search Results (60)

Search Parameters:
Keywords = BaDS filter

21 pages, 720 KB  
Article
A Bilevel Optimization Framework for Adversarial Control of Gas Pipeline Operations
by Tejaswini Sanjay Katale, Lu Gao, Yunpeng Zhang and Alaa Senouci
Actuators 2025, 14(10), 480; https://doi.org/10.3390/act14100480 - 1 Oct 2025
Viewed by 394
Abstract
Cyberattacks on pipeline operational technology systems pose growing risks to energy infrastructure. This study develops a physics-informed simulation and optimization framework for analyzing cyber–physical threats in petroleum pipeline networks. The model integrates networked hydraulic dynamics, SCADA-based state estimation, model predictive control (MPC), and a bilevel formulation for stealthy false-data injection (FDI) attacks. Pipeline flow and pressure dynamics are modeled on a directed graph using nodal pressure evolution and edge-based Weymouth-type relations, including control-aware equipment such as valves and compressors. An extended Kalman filter estimates the full network state from partial SCADA telemetry. The controller computes pressure-safe control inputs via MPC under actuator constraints and forecasted demands. Adversarial manipulation is formalized as a bilevel optimization problem where an attacker perturbs sensor data to degrade throughput while remaining undetected by bad-data detectors. This attack–control interaction is solved via Karush–Kuhn–Tucker (KKT) reformulation, which results in a tractable mixed-integer quadratic program. Test gas pipeline case studies demonstrate the covert reduction in service delivery under attack. Results show that undetectable attacks can cause sustained throughput loss with minimal instantaneous deviation. This reveals the need for integrated detection and control strategies in cyber–physical infrastructure. Full article
(This article belongs to the Section Control Systems)
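The bilevel attack model hinges on evading a residual-based bad-data detector. As a rough illustration, here is a generic chi-squared residual test on a toy linear measurement model (all matrices and values are hypothetical, not the paper's pipeline hydraulics):

```python
import numpy as np

def bad_data_test(z, H, x_hat, R, threshold):
    """Chi-squared residual test: flag measurements whose weighted
    residual norm exceeds a detection threshold."""
    r = z - H @ x_hat                  # measurement residual
    J = r @ np.linalg.inv(R) @ r       # noise-weighted sum of squares
    return J > threshold               # True => bad data suspected

# Toy 2-state, 3-measurement system (illustrative values).
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = 0.01 * np.eye(3)
x_true = np.array([2.0, 3.0])
z_clean = H @ x_true
z_attacked = z_clean + np.array([0.0, 0.0, 0.5])   # crude injection

print(bad_data_test(z_clean, H, x_true, R, 7.81))     # False
print(bad_data_test(z_attacked, H, x_true, R, 7.81))  # True
```

A crude injection like the one above trips the detector; a stealthy injection of the form a = H·c shifts the estimate while leaving the residual unchanged, which is exactly the blind spot the paper's bilevel formulation exploits.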

14 pages, 2822 KB  
Article
Accuracy and Reliability of Smartphone Versus Mirrorless Camera Images-Assisted Digital Shade Guides: An In Vitro Study
by Soo Teng Chew, Suet Yeo Soo, Mohd Zulkifli Kassim, Khai Yin Lim and In Meei Tew
Appl. Sci. 2025, 15(14), 8070; https://doi.org/10.3390/app15148070 - 20 Jul 2025
Viewed by 1230
Abstract
Image-assisted digital shade guides are increasingly popular for shade matching; however, research on their accuracy remains limited. This study aimed to compare the accuracy and reliability of color coordination in image-assisted digital shade guides constructed using calibrated images of their shade tabs captured by a mirrorless camera (Canon, Tokyo, Japan) (MC-DSG) and a smartphone camera (Samsung, Seoul, Korea) (SC-DSG), using a spectrophotometer as the reference standard. Twenty-nine VITA Linearguide 3D-Master shade tabs were photographed under controlled settings with both cameras equipped with cross-polarizing filters. Images were calibrated using Adobe Photoshop (Adobe Inc., San Jose, CA, USA). The L* (lightness), a* (red-green chromaticity), and b* (yellow-blue chromaticity) values, which represent the color attributes in the CIELAB color space, were computed at the middle third of each shade tab using Adobe Photoshop. Specifically, L* indicates the brightness of a color (ranging from black [0] to white [100]), a* denotes the position between red (+a*) and green (–a*), and b* represents the position between yellow (+b*) and blue (–b*). These values were used to quantify tooth shade and compare it to reference measurements obtained from a spectrophotometer (VITA Easyshade V, VITA Zahnfabrik, Bad Säckingen, Germany). Mean color differences (∆E00) between MC-DSG and SC-DSG, relative to the spectrophotometer, were compared using an independent t-test. The ∆E00 values were also evaluated against perceptibility (PT = 0.8) and acceptability (AT = 1.8) thresholds. Reliability was evaluated using intraclass correlation coefficients (ICC), and group differences were analyzed via one-way ANOVA and Bonferroni post hoc tests (α = 0.05). SC-DSG showed significantly lower ΔE00 deviations than MC-DSG (p < 0.001), falling within the clinical acceptability threshold (AT). The L* values from MC-DSG were significantly higher than those from SC-DSG (p = 0.024). All methods showed excellent reliability (ICC > 0.9). The findings support the potential of smartphone image-assisted digital shade guides for accurate and reliable tooth shade assessment. Full article
(This article belongs to the Special Issue Advances in Dental Materials, Instruments, and Their New Applications)
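The study's accuracy metric is the CIEDE2000 color difference (ΔE00), whose full formula is lengthy; the sketch below uses the simpler CIE76 Euclidean distance in CIELAB to illustrate how a measured shade is compared against the perceptibility/acceptability thresholds (all coordinate values are hypothetical):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.
    (The study uses the more elaborate CIEDE2000 formula, Delta-E00.)"""
    dL, da, db = (a - b for a, b in zip(lab1, lab2))
    return math.sqrt(dL**2 + da**2 + db**2)

PT, AT = 0.8, 1.8   # perceptibility / acceptability thresholds from the study

shade_tab = (72.4, 1.3, 18.2)   # hypothetical L*, a*, b* from a camera image
reference = (71.9, 1.1, 17.6)   # hypothetical spectrophotometer reading
dE = delta_e_76(shade_tab, reference)
print(round(dE, 2), dE <= AT)   # 0.81 True
```

A ΔE at or below AT is considered clinically acceptable; below PT it is imperceptible to most observers.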

13 pages, 1695 KB  
Article
Deepfake Voice Detection: An Approach Using End-to-End Transformer with Acoustic Feature Fusion by Cross-Attention
by Liang Yu Gong and Xue Jun Li
Electronics 2025, 14(10), 2040; https://doi.org/10.3390/electronics14102040 - 16 May 2025
Viewed by 2098
Abstract
Deepfake technology uses artificial intelligence to create highly realistic but fake audio, video, or images, often making them difficult to distinguish from real content. Due to its potential use for misinformation, fraud, and identity theft, deepfake technology has gained a bad reputation in the digital world. Recently, many works have reported on the detection of deepfake videos/images. However, few studies have concentrated on developing robust deepfake voice detection systems. In most existing studies in this field, a deepfake voice detection system requires a large amount of training data and a robust backbone to distinguish real audio from logical access (LA) attack audio. For acoustic feature extraction, Mel-frequency Filter Bank (MFB)-based approaches are more suitable for extracting speech signals than applying the raw spectrum as input. Recurrent Neural Networks (RNNs) have been successfully applied to Natural Language Processing (NLP), but these backbones suffer from gradient vanishing or explosion while processing long-term sequences. In addition, most deepfake voice recognition systems perform poorly in cross-dataset evaluation, indicating a robustness issue. To address these issues, we propose an acoustic feature-fusion method that combines Mel-spectrum and pitch representations based on cross-attention mechanisms. Then, we combine a Transformer encoder with a convolutional neural network block to extract global and local features as a front end. Finally, we connect the back end with one linear layer for classification. We summarize several deepfake voice detectors' performances on the silence-segment-processed ASVspoof 2019 dataset. Our proposed method achieves an Equal Error Rate (EER) of 26.41%, while most of the existing methods result in an EER higher than 30%. 
We also tested our proposed method on the ASVspoof 2021 dataset, and found that it can achieve an EER as low as 28.52%, while the EER values for existing methods are all higher than 28.9%. Full article
(This article belongs to the Section Artificial Intelligence)
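The cross-attention fusion of the Mel-spectrum and pitch streams can be sketched with plain scaled dot-product attention, where one stream provides the queries and the other the keys and values. All dimensions and projection matrices below are illustrative, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, kv_feats, Wq, Wk, Wv):
    """Scaled dot-product cross-attention: one stream (e.g. Mel-spectrum
    frames) attends over another (e.g. a pitch representation)."""
    Q, K, V = query_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V    # fused features, one row per query

rng = np.random.default_rng(0)
mel = rng.normal(size=(50, 80))     # 50 frames x 80 Mel bins (toy sizes)
pitch = rng.normal(size=(50, 4))    # 50 frames x 4 pitch features
d = 32
fused = cross_attention(mel, pitch,
                        rng.normal(size=(80, d)),
                        rng.normal(size=(4, d)),
                        rng.normal(size=(4, d)))
print(fused.shape)   # (50, 32)
```

In the paper's setting, the fused representation would then feed the Transformer-encoder/CNN front end.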

17 pages, 6692 KB  
Article
A Lightweight Network Based on YOLOv8 for Improving Detection Performance and the Speed of Thermal Image Processing
by Huyen Trang Dinh and Eung-Tae Kim
Electronics 2025, 14(4), 783; https://doi.org/10.3390/electronics14040783 - 17 Feb 2025
Cited by 2 | Viewed by 3895
Abstract
Deep learning and image processing technology continue to evolve, with YOLO models widely used for real-time object recognition. These YOLO models offer both fast processing and high precision, making them popular in fields like self-driving cars, security cameras, and medical support. Most YOLO models are optimized for RGB images, which creates some limitations. While RGB images are highly sensitive to lighting conditions, infrared (IR) images using thermal data can detect objects consistently, even in low-light settings. However, infrared images present unique challenges like low resolution, small object sizes, and high amounts of noise, which make direct application of current YOLO models difficult. This situation requires the development of object detection models designed specifically for thermal images, especially for real-time recognition. Given the GPU and memory constraints in edge device environments, designing a lightweight model that maintains a high speed is crucial. Our research focused on training a YOLOv8 model using infrared image data to recognize humans. We proposed a YOLOv8s model with unnecessary layers removed, which was better suited to infrared images and significantly reduced the weight of the model. We also integrated an improved Global Attention Mechanism (GAM) module to boost IR image precision and applied depth-wise convolution filtering to maintain the processing speed. The proposed model achieved a 2% precision improvement, 75% parameter reduction, and 12.8% processing speed increase compared to the original YOLOv8s model. This method can be effectively used in thermal imaging applications like night surveillance cameras, cameras used in bad weather, and smart ventilation systems, particularly in environments requiring real-time processing with limited computational resources. Full article
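The scale of the reported parameter reduction is consistent with what depth-wise convolutions buy in general: a standard k×k convolution needs k²·C_in·C_out weights, while a depth-wise k×k convolution followed by 1×1 point-wise channel mixing needs only k²·C_in + C_in·C_out. A quick check with hypothetical layer sizes (not the paper's actual layers):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depth-wise k x k conv (one filter per channel) followed by a
    1 x 1 point-wise conv that mixes channels."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 256)                  # 589824
dws = depthwise_separable_params(3, 256, 256)   # 67840
print(std, dws, round(100 * (1 - dws / std), 1))  # 589824 67840 88.5
```

For a 3×3 layer with 256 channels in and out, the separable variant uses roughly 88.5% fewer weights, so combining such layers with pruning of unnecessary blocks can plausibly yield reductions of the magnitude reported.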

20 pages, 6545 KB  
Article
RFCS-YOLO: Target Detection Algorithm in Adverse Weather Conditions via Receptive Field Enhancement and Cross-Scale Fusion
by Gang Liu, Yingzheng Huang, Shuguang Yan and Enxiang Hou
Sensors 2025, 25(3), 912; https://doi.org/10.3390/s25030912 - 3 Feb 2025
Cited by 4 | Viewed by 1585
Abstract
The paper proposes a model based on receptive field enhancement and cross-scale fusion (RFCS-YOLO). It addresses challenges like complex backgrounds and the missed and false detection of traffic targets in bad weather. First, an efficient feature extraction module (EFEM) is created. It reconfigures the backbone network, enlarging the receptive field and improving the network's ability to extract features of targets at different scales. Next, a cross-scale fusion module (CSF) is introduced. It uses the receptive field coordinate attention mechanism (RFCA) to effectively fuse information from different scales and to filter out interfering noise and background information. Also, a new Focaler-Minimum Point Distance Intersection over Union (F-MPDIoU) loss function is proposed. It makes the model converge faster and mitigates missed and false detections. Experiments were conducted on the expanded Vehicle Detection in Adverse Weather Nature dataset (DWAN). The results show significant improvements compared to the conventional You Only Look Once v7 (YOLOv7) model. The mean Average Precision (mAP@0.5), precision, and recall are enhanced by 4.2%, 8.3%, and 1.4%, respectively. The mean Average Precision is 86.5%. The frame rate is 68 frames per second (FPS), which meets the requirements for real-time detection. A generalization experiment was conducted using the autonomous driving dataset SODA10M. The mAP@0.5 achieved 56.7%, which is a 3.6% improvement over the original model. This result demonstrates the good generalization ability of the proposed method. Full article
(This article belongs to the Section Remote Sensors)
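For readers unfamiliar with the MPDIoU part of the loss, it augments plain IoU with normalized squared distances between the corresponding corners of the predicted and ground-truth boxes. A minimal sketch under that reading (box coordinates and image size are illustrative, and the Focaler reweighting is omitted):

```python
def iou(b1, b2):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(b1) + area(b2) - inter)

def mpdiou(b1, b2, img_w, img_h):
    """IoU penalized by squared distances between the top-left and
    bottom-right corner pairs, normalized by the image diagonal."""
    d2_tl = (b1[0] - b2[0]) ** 2 + (b1[1] - b2[1]) ** 2
    d2_br = (b1[2] - b2[2]) ** 2 + (b1[3] - b2[3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    return iou(b1, b2) - d2_tl / norm - d2_br / norm

pred, gt = (10, 10, 50, 50), (12, 12, 55, 52)
print(round(mpdiou(pred, gt, 640, 640), 3))
```

The corner-distance terms keep the loss informative even when boxes do not overlap, which is one reason MPDIoU-style losses converge faster than plain IoU losses.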

13 pages, 11812 KB  
Article
Performance Comparison of Selected Filters in Fast Denoising of Oil Palm Hyperspectral Data
by Imanurfatiehah Ibrahim, Mofleh Hannuf AlRowaily, Hamzah Arof and Mohamad Sofian Abu Talip
Appl. Sci. 2024, 14(19), 8895; https://doi.org/10.3390/app14198895 - 2 Oct 2024
Cited by 2 | Viewed by 1145
Abstract
Usually, hyperspectral data captured from an airborne UAV or satellite contain some noise that can be severe in some channels. Often, channels that are badly affected by the noise are discarded. This is because the corrupted channels cannot be reclaimed by common filtering techniques, making important information in the affected channels differ from that of field spectroscopy at similar wavelengths. In this study, a fast denoising method is implemented on some channels of oil palm hyperspectral data that are badly affected by noise. The amount of noise is unknown, and it varies across the noisy channels from bad to severe. This is different from the data normally used by many studies, which are essentially clean data spiked with mild noise of known variance. The process starts by identifying which noisy channels to filter based on the level of the estimated noise in them. Then, filtering is conducted within each channel and across channels. Once the noise is removed, the improvement in signal-to-noise ratio (SNR) is calculated for each channel. The performance of Kalman, Wiener, Savitzky–Golay, wavelet, and cosine filters is tested in the same framework, and the results are compared in terms of execution time, signal-to-noise ratio, and visual quality. The results show that the Kalman filter slightly outperformed the other filters. The proposed scheme was implemented using MATLAB R2023b running on an Intel i7 processor, and the average execution time was less than one second per channel. To the best of our knowledge, this is the first attempt to filter real oil palm hyperspectral data containing speckle noise using a Kalman filter. This technique can be a useful tool for those working in the oil palm industry. Full article
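A scalar Kalman filter with a random-walk signal model illustrates the kind of within-channel filtering being compared, together with the SNR-improvement measure. The signal, noise level, and filter tuning below are synthetic and illustrative, not the oil palm data:

```python
import numpy as np

def kalman_1d(z, q=1e-4, r=1e-2):
    """Scalar Kalman filter with a random-walk signal model:
    x_k = x_{k-1} + w_k,  z_k = x_k + v_k."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                      # predict: process noise grows uncertainty
        g = p / (p + r)             # Kalman gain
        x += g * (zk - x)           # update with measurement zk
        p *= (1 - g)
        out[k] = x
    return out

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * t)                       # slowly varying signal
noisy = clean + rng.normal(scale=0.3, size=t.size)  # additive channel noise
smoothed = kalman_1d(noisy)

snr = lambda sig: 10 * np.log10(np.mean(clean**2) / np.mean((sig - clean)**2))
print(round(snr(noisy), 1), round(snr(smoothed), 1))  # smoothed SNR is higher
```

The study's framework would run a filter like this along each badly affected channel (and across channels) and report the SNR gain per channel.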

17 pages, 1604 KB  
Article
Social Network Community Detection to Deal with Gray-Sheep and Cold-Start Problems in Music Recommender Systems
by Diego Sánchez-Moreno, Vivian F. López Batista, María Dolores Muñoz Vicente, Ángel Luis Sánchez Lázaro and María N. Moreno-García
Information 2024, 15(3), 138; https://doi.org/10.3390/info15030138 - 29 Feb 2024
Cited by 6 | Viewed by 2422
Abstract
Information from social networks is currently being widely used in many application domains, although in the music recommendation area, its use is less common because of the limited availability of social data. However, most streaming platforms allow for establishing relationships between users that can be leveraged to address some drawbacks of recommender systems. In this work, we take advantage of the social network structure to improve recommendations for users with unusual preferences and new users, thus dealing with the gray-sheep and cold-start problems, respectively. Since collaborative filtering methods base the recommendations for a given user on the preferences of his/her most similar users, the scarcity of users with similar tastes to the gray-sheep users and the unawareness of the preferences of the new users usually lead to bad recommendations. These general problems of recommender systems are worsened in the music domain, where the popularity bias drawback is also present. In order to address these problems, we propose a user similarity metric based on the network structure as well as on user ratings. This metric significantly improves the recommendation reliability in those scenarios by capturing both homophily effects in implicit communities of users in the network and user similarity in terms of preferences. Full article
(This article belongs to the Special Issue 2nd Edition of Information Retrieval and Social Media Mining)
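A similarity metric of this kind can be sketched as a weighted blend of rating-based cosine similarity and overlap in the users' network neighborhoods. The blend weight, data structures, and toy values below are illustrative, not the paper's exact metric:

```python
def jaccard(a, b):
    """Overlap of two users' connection sets in the social network."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(r1, r2):
    """Cosine similarity over ratings (dicts: item -> rating)."""
    common = r1.keys() & r2.keys()
    if not common:
        return 0.0
    num = sum(r1[i] * r2[i] for i in common)
    den = (sum(v * v for v in r1.values()) ** 0.5 *
           sum(v * v for v in r2.values()) ** 0.5)
    return num / den

def hybrid_similarity(u, v, beta=0.5):
    """Blend rating similarity with network-structure similarity, so
    gray-sheep and new users still get meaningful neighbors."""
    return (beta * cosine(u["ratings"], v["ratings"])
            + (1 - beta) * jaccard(u["friends"], v["friends"]))

u = {"ratings": {"songA": 5, "songB": 1}, "friends": ["u2", "u3"]}
v = {"ratings": {"songA": 4, "songC": 2}, "friends": ["u3", "u4"]}
print(round(hybrid_similarity(u, v), 3))   # 0.605
```

For a cold-start user with no ratings, the cosine term is zero and the network term alone still yields usable neighbors, which is the core of the approach described.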

20 pages, 724 KB  
Article
Detection of False Data Injection Attacks in a Smart Grid Based on WLS and an Adaptive Interpolation Extended Kalman Filter
by Guoqing Zhang, Wengen Gao, Yunfei Li, Xinxin Guo, Pengfei Hu and Jiaming Zhu
Energies 2023, 16(20), 7203; https://doi.org/10.3390/en16207203 - 23 Oct 2023
Cited by 10 | Viewed by 3388
Abstract
An accurate power state is the basis of the normal functioning of the smart grid. However, false data injection attacks (FDIAs) take advantage of a vulnerability in the bad data detection mechanism of the power system to manipulate the process of state estimation. By attacking the measurements and thereby affecting the estimated state, FDIAs have become a serious hidden danger that affects the security and stable operation of the power system. To address the bad data detection vulnerability, in this paper, a false data attack detection method based on weighted least squares (WLS) and an adaptive interpolation extended Kalman filter (AIEKF) is proposed. After applying WLS and AIEKF, the Euclidean distance between the two state estimates is used to determine whether the power system is subjected to a false data injection attack at the current moment. Extensive experiments were conducted to simulate an IEEE-14-bus power system, showing that the adaptive interpolation extended Kalman filter can compensate for the deficiency in the bad data detection mechanism and successfully detect FDIAs. Full article
(This article belongs to the Special Issue Advanced Electric Power System 2023)
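The detection idea, two independent estimators whose disagreement signals an attack, can be sketched as follows. The system, threshold, and measurements are toy values, and the dynamic (AIEKF) estimate is stubbed with the true state for brevity:

```python
import numpy as np

def wls_estimate(H, z, R):
    """Weighted least-squares state estimate:
    x = (H^T W H)^{-1} H^T W z, with W = R^{-1}."""
    W = np.linalg.inv(R)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

def fdia_detect(x_wls, x_dynamic, tau):
    """Flag an FDI attack when the two estimators disagree by more
    than a threshold in Euclidean distance."""
    return np.linalg.norm(x_wls - x_dynamic) > tau

# Toy 2-state, 3-measurement system (values illustrative).
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])
R = 0.01 * np.eye(3)
x_true = np.array([1.05, 0.98])
z = H @ x_true
x_dyn = x_true.copy()                       # stand-in for the AIEKF estimate
print(fdia_detect(wls_estimate(H, z, R), x_dyn, tau=0.05))   # False

z_attacked = z + np.array([0.1, 0.0, 0.0])  # injected bias on one meter
print(fdia_detect(wls_estimate(H, z_attacked, R), x_dyn, tau=0.05))  # True
```

The injection biases the static WLS estimate but not the dynamics-based estimate, so the Euclidean distance between the two reveals the attack even when a residual-only bad-data test would pass.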

18 pages, 5379 KB  
Article
Similarity Distribution Density: An Optimized Approach to Outlier Detection
by Li Quan, Tao Gong and Kaida Jiang
Electronics 2023, 12(20), 4227; https://doi.org/10.3390/electronics12204227 - 12 Oct 2023
Viewed by 2309
Abstract
When dealing with uncertain data, traditional model construction methods often ignore or filter out noise data to improve model performance. However, this simple approach can lead to insufficient data utilization, model bias, reduced detection ability, and decreased robustness of detection models. Outliers can be considered data that are inconsistent with other patterns at certain specific moments; they are not always negative data, so their emergence is not always bad. In the process of data analysis, outliers play a crucial role in sample vector recognition, missing value processing, and model stability verification. In addition, unsupervised models have very high computation costs when recognizing outliers, especially non-parametric unsupervised models. To solve the above problems, we adopted a semi-supervised learning process and used similarity as a negative selection criterion to propose a local density verification detection model (Vd-LOD). This model establishes similarity pseudo-labels for multi-label and multi-type samples, verifies the accuracy of outlier values based on local outlier factors, and increases the detector's sensitivity to outliers. The experimental results show that, under different parameter settings with varying outlier quantities, Vd-LOD outperforms other detection models: although verifying the pseudo-label relationships significantly increases average time consumption, the model achieves an approximately 6% improvement in average detection accuracy. Full article
(This article belongs to the Special Issue Intelligent Analysis and Security Calculation of Multisource Data)
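Local-outlier-factor (LOF) style detectors score points by their local density relative to that of their neighbors. A minimal k-nearest-neighbor distance score on synthetic data conveys the core idea (LOF additionally normalizes each point's density by the densities of its neighbors, which this sketch omits):

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    """Score each point by its mean distance to its k nearest neighbors;
    sparse regions (potential outliers) get high scores."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k]      # k smallest distances per point
    return knn.mean(axis=1)

rng = np.random.default_rng(0)
cluster = rng.normal(0, 0.1, size=(20, 2))   # dense inlier cluster
outlier = np.array([[3.0, 3.0]])             # one injected outlier
X = np.vstack([cluster, outlier])
scores = knn_outlier_scores(X)
print(int(np.argmax(scores)))   # 20 -> the injected outlier scores highest
```

The pairwise-distance matrix makes this O(n²), which illustrates the high computational cost of non-parametric unsupervised detectors that the semi-supervised Vd-LOD design aims to offset.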

32 pages, 458 KB  
Review
Gene Therapy in Hereditary Retinal Dystrophies: The Usefulness of Diagnostic Tools in Candidate Patient Selections
by Mariaelena Malvasi, Lorenzo Casillo, Filippo Avogaro, Alessandro Abbouda and Enzo Maria Vingolo
Int. J. Mol. Sci. 2023, 24(18), 13756; https://doi.org/10.3390/ijms241813756 - 6 Sep 2023
Cited by 14 | Viewed by 4106
Abstract
Purpose: Gene therapy currently seems to have promising results in the treatment of Leber Congenital Amaurosis and some other inherited retinal diseases (IRDs); the primary goal of this strategy is to replace defective genes with a wild-type, defect-free DNA sequence to achieve partial recovery of photoreceptor function and, consequently, partially restore lost retinal functions. This approach led to the introduction of a new drug (voretigene neparvovec-rzyl) for replacement of the RPE65 gene in patients affected by Leber Congenital Amaurosis (LCA); however, the treatment results are inconsistent, with variable long-lasting effects, owing to the difficulty of correctly evaluating the anatomical and functional conditions of residual photoreceptors. These variabilities may also be related to host immune reactions towards the adeno-associated virus (AAV) vector. A broad spectrum of retinal dystrophies frequently generates doubt as to whether the disease or the patient is a good candidate for a successful gene treatment, because, very often, different diseases share similar genetic characteristics, causing an inconsistent genotype/phenotype correlation between clinical characteristics, even within the same family. For example, mutations in the RPE65 gene cause Leber Congenital Amaurosis (LCA) but also some forms of Retinitis Pigmentosa (RP), Bardet-Biedl Syndrome (BBS), Congenital Stationary Night Blindness (CSNB) and Usher syndrome (USH), with a very wide spectrum of clinical manifestations. These confusing elements are due to the different pathways in which the product protein (retinoid isomerohydrolase) is involved and, consequently, the overlapping metabolism in retinal function. Considering this point and the cost of the drug (over USD one hundred thousand), it would be mandatory to follow guidelines or algorithms to assess the best-fitting disease and candidate patients to maximize the outcome. 
Unfortunately, at the moment, there are no guidelines on whom to treat with gene therapy. Moreover, gene therapy might be helpful in other forms of inherited retinal dystrophies with a more frequent incidence and better functional conditions (currently, gene therapy is proposed only for patients with poor vision, considering the possible side effects of the treatment procedures), in which this approach could lead to better function and, hopefully, visual restoration. In this view, which diseases and which patients are candidates for gene therapy, given the onset of clinical trials for several different forms of IRD? Furthermore, what is the gold standard among tests for correctly selecting patients? Our work aims to evaluate clinical considerations on instrumental morphofunctional tests used to assess candidate subjects for treatment and to correlate them with clinical and genetic defect analyses, which often do not correspond. We try to define which parameters are an essential and indispensable part of the clinical rationale for selecting patients with IRDs for gene therapy. This review describes a series of models used to characterize retinal morphology and function from tests such as optical coherence tomography (OCT) and electrophysiological evaluation (ERG), and their use as primary outcomes in clinical trials. A secondary aim is to propose an ancillary clinical classification of IRDs and their eligibility for gene therapy based on its current state of the art. 
Material and Methods: OCT, ERG, and visual field examinations were performed in different forms of IRDs, classified based on clinical and retinal conditions. In addition to the gene defect classification, we utilized a diagnostic algorithm for clinical classification based on morphofunctional information of the patients' retinas, which could significantly improve diagnostic accuracy and, consequently, help the ophthalmologist make a correct diagnosis and achieve optimal clinical results. These considerations are very helpful in selecting IRD patients who might respond to gene therapy with possible therapeutic success and in filtering out those in whom treatment has a lower chance or no chance of positive results due to poor retinal conditions, thus avoiding time-consuming patient management with unsatisfactory results. Full article
(This article belongs to the Special Issue Molecular Mechanisms of Retinal Degeneration and How to Avoid It)
20 pages, 6296 KB  
Article
Approaches to Improve the Quality of Person Re-Identification for Practical Use
by Timur Mamedov, Denis Kuplyakov and Anton Konushin
Sensors 2023, 23(17), 7382; https://doi.org/10.3390/s23177382 - 24 Aug 2023
Cited by 4 | Viewed by 3822
Abstract
The idea of the person re-identification (Re-ID) task is to find the person depicted in a query image among other images obtained from different cameras. Algorithms solving this task have important practical applications, such as illegal action prevention and searching for missing persons through a smart city's video surveillance. In most of the papers devoted to this problem, the authors propose complex algorithms to achieve a better quality of person Re-ID. Some of these methods cannot be used in practice due to technical limitations. In this paper, we propose several approaches that can be used in almost all popular modern re-identification algorithms to improve the quality of the solution with practically no increase in the computational complexity of the algorithms. In real-world data, bad images can be fed into the input of the Re-ID algorithm; therefore, a new Filter Module is proposed in this paper, designed to pre-filter input data before feeding the data to the main re-identification algorithm. The Filter Module improves the quality of the baseline by 2.6% according to the Rank1 metric and 3.4% according to the mAP metric on the Market-1501 dataset. Furthermore, a fully automated strategy for collecting data from surveillance cameras for self-supervised pre-training is proposed in order to increase the generalization of neural networks on real-world data. The use of self-supervised pre-training on the data collected using the proposed strategy improves the quality of cross-domain upper-body Re-ID on the DukeMTMC-reID dataset by 1.0% according to the Rank1 and mAP metrics. Full article
(This article belongs to the Special Issue Person Re-Identification Based on Computer Vision)
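The paper's Filter Module is a learned component; a classical stand-in conveys the idea of pre-filtering low-quality crops before they reach the Re-ID backbone, here using Laplacian-variance blur scoring (threshold and images are illustrative):

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian response; low values
    indicate a blurry, low-detail crop."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def prefilter(crops, threshold=1.0):
    """Keep only crops sharp enough to be worth re-identifying."""
    return [c for c in crops if laplacian_variance(c) >= threshold]

rng = np.random.default_rng(0)
sharp = rng.uniform(0, 255, size=(64, 32))   # high-detail texture
blurry = np.full((64, 32), 127.0)            # flat, detail-free crop
print(len(prefilter([sharp, blurry], threshold=1.0)))  # 1
```

Discarding detail-free crops up front spares the expensive embedding network and removes inputs that would only add noise to the gallery, which is the role the learned Filter Module plays in the paper.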

17 pages, 4510 KB  
Article
Trust Is for the Strong: How Health Status May Influence Generalized and Personalized Trust
by Quan-Hoang Vuong, Phuong-Loan Nguyen, Ruining Jin, Minh-Hoang Nguyen and Tam-Tri Le
Healthcare 2023, 11(17), 2373; https://doi.org/10.3390/healthcare11172373 - 23 Aug 2023
Cited by 1 | Viewed by 1988
Abstract
In the trust–health relationship, how trusting other people in society may promote good health is a topic often examined. However, the other direction of influence, how health may affect trust, has not been well explored. In order to investigate this possible effect, we employed the Bayesian Mindsponge Framework (BMF) analytics to go deeper into the information-processing mechanisms underlying expressions of trust. Conducting a Bayesian analysis on a dataset of 1237 residents of Cali, Colombia, we found that general health status is positively associated with generalized trust, but recent experiences of illness/injury have a negative moderating effect. Personalized trust is largely unchanged across different general health conditions, but the trust level becomes higher with recent experiences of illness/injury. A psychophysiological mechanism of increased information-filtering intensity toward unfamiliar sources during a vulnerable state of health is a plausible explanation of the observed patterns in generalized trust. Because established personal relationships are reinforced information channels, personalized trust is not affected as much. Rather, the results suggest that people may rely even more on loved ones when they are in bad health. This exploratory study shows that the trust–health relationship can be examined from a different angle that may provide new insights. Full article

13 pages, 23563 KB  
Article
Paleoclimatic Reconstruction Based on the Late Pleistocene San Josecito Cave Stratum 720 Fauna Using Fossil Mammals, Reptiles, and Birds
by J. Alberto Cruz, Julián A. Velasco, Joaquín Arroyo-Cabrales and Eileen Johnson
Diversity 2023, 15(7), 881; https://doi.org/10.3390/d15070881 - 24 Jul 2023
Cited by 5 | Viewed by 4063
Abstract
Advances in technology have equipped paleobiologists with new analytical tools to assess the fossil record. The functional traits of vertebrates have been used to infer paleoenvironmental conditions. In Quaternary deposits, birds are the second-most-studied group after mammals, yet they are considered a poor paleoenvironmental proxy because their high vagility and phenotypic plasticity allow them to respond more effectively to climate change. Investigating multiple groups is important, but it is not often attempted. Biogeographical and climatic niche information on small mammals, reptiles, and birds has been used to infer the paleoclimatic conditions present during the Late Pleistocene at San Josecito Cave (~28,000 14C years BP), Mexico. Warmer and drier conditions than at present are inferred. Using all of the groups of small vertebrates is recommended because they represent an assemblage of species that has passed through a series of environmental filters in the past. Individually, different vertebrate groups provide different paleoclimatic information. Birds are a good proxy for inferring paleoprecipitation but not paleotemperature. Together, reptiles and small mammals are a good proxy for inferring both paleoprecipitation and paleotemperature; reptiles alone are a poor proxy, while mammals alone are a good proxy for inferring paleotemperature and precipitation. The current paleoclimatic results, coupled with those of a previous vegetation structure analysis, indicate the presence of non-analog paleoenvironmental conditions during the Late Pleistocene in the San Josecito Cave area. This situation would explain the presence of a disharmonious fauna and the extinction of several taxa when these conditions later disappeared and did not reappear. Full article
(This article belongs to the Special Issue Biodiversity in Subterranean Habitats)
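Inferences of this kind are often built on the mutual climatic range idea: the paleoclimate at a site is constrained to the intersection of the present-day climatic tolerances of all taxa co-occurring in the fossil assemblage. A minimal sketch, with invented tolerance ranges (not values from the study):

```python
# Hypothetical sketch of the mutual climatic range method: intersect the
# (low, high) climatic tolerance intervals of co-occurring taxa. All ranges
# below are invented for illustration.

def mutual_climatic_range(tolerances):
    """Intersect (low, high) tolerance intervals; None if there is no overlap."""
    low = max(t[0] for t in tolerances)
    high = min(t[1] for t in tolerances)
    return (low, high) if low <= high else None

# (min, max) mean annual temperature tolerances in degrees C (invented)
temp_tolerances = [
    (8.0, 22.0),   # small mammal A
    (12.0, 26.0),  # reptile B
    (10.0, 20.0),  # bird C
]
estimate = mutual_climatic_range(temp_tolerances)
```

Adding more taxa can only narrow (never widen) the estimate, which is why the abstract recommends using all small-vertebrate groups together; a disharmonious (non-analog) fauna is one where the intersection is empty under modern climates.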

18 pages, 5177 KB  
Article
Weak Spatial Target Extraction Based on Small-Field Optical System
by Xuguang Zhang, Yunmeng Liu, Huixian Duan and E Zhang
Sensors 2023, 23(14), 6315; https://doi.org/10.3390/s23146315 - 11 Jul 2023
Cited by 7 | Viewed by 1899
Abstract
Compared to wide-field telescopes, small-field detection systems have higher spatial resolution, resulting in stronger detection capabilities and higher positioning accuracy. When detecting in a small field from synchronous orbit, both space debris and fixed stars are imaged as point targets, making them difficult to distinguish. In addition, as detection capabilities improve, the number of stars in the background rises rapidly, which places higher demands on recognition algorithms. Star detection is therefore indispensable for identifying and locating space debris against complex backgrounds. To address these difficulties, this paper proposes a real-time star extraction method based on adaptive filtering and multi-frame projection. We use bad-point repair and background suppression algorithms to preprocess star images. Afterwards, we analyze and enhance the target signal-to-noise ratio (SNR). We then use multi-frame projection to fuse information, and propose adaptive filtering, adaptive morphology, and adaptive median filtering algorithms to detect trajectories. Finally, the projection is released to locate the target. Our recognition algorithm has been verified on real star images captured with small-field telescopes. The experimental results demonstrate the effectiveness of the proposed algorithm: we successfully extracted the star HIP 27066, which has a magnitude of about 12 and an SNR of about 1.5. Compared with existing methods, our algorithm has advantages in both recognition rate and false-alarm rate, and can serve as a real-time target recognition algorithm for space-based synchronous-orbit detection payloads. Full article
(This article belongs to the Special Issue Optical Sensors for Space Situational Awareness)
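The bad-point repair step named in the abstract is commonly implemented by replacing a defective (dead or hot) pixel with the median of its neighborhood. A minimal sketch of that one step, assuming a plain 2D list as the image; the paper's actual pipeline (background suppression, SNR enhancement, multi-frame projection, adaptive filtering) is considerably more involved:

```python
from statistics import median

# Hypothetical sketch: repair a known bad pixel by replacing it with the
# median of its up-to-8 neighbors. Frame values below are invented.

def repair_bad_pixel(image, row, col):
    """Return a copy of `image` with (row, col) set to its neighborhood median."""
    neighbors = [
        image[r][c]
        for r in range(max(0, row - 1), min(len(image), row + 2))
        for c in range(max(0, col - 1), min(len(image[0]), col + 2))
        if (r, c) != (row, col)
    ]
    repaired = [list(r) for r in image]  # shallow per-row copy, original intact
    repaired[row][col] = median(neighbors)
    return repaired

frame = [
    [10, 11, 10],
    [12, 255, 11],  # hot pixel at (1, 1)
    [10, 12, 10],
]
fixed = repair_bad_pixel(frame, 1, 1)  # hot pixel replaced by neighborhood median
```

A median (rather than mean) is the usual choice here because it is robust to the very outliers being repaired.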

14 pages, 1828 KB  
Article
A Knowledge Graph Embedding Based Service Recommendation Method for Service-Based System Development
by Fang Xie, Yiming Zhang, Krzysztof Przystupa and Orest Kochan
Electronics 2023, 12(13), 2935; https://doi.org/10.3390/electronics12132935 - 4 Jul 2023
Cited by 7 | Viewed by 2221
Abstract
Web APIs are an efficient means of Service-Based Software (SBS) development, and mashup is a key technology that merges several web services to cope with the increasing complexity of software requirements and expedite service-based system development. An efficient service recommendation method is vital for software development. However, existing methods often suffer from data sparsity or cold-start issues, which degrade recommendation quality. Starting from SBS development, this paper proposes a service recommendation method based on knowledge graph embedding and collaborative filtering (CF). In our model, we first construct a refined knowledge graph from SBS–service co-invocation records and SBS- and service-related information to mine the latent semantic relationships between SBSs and services. We then learn representations of the SBS and service entities in the knowledge graph: these heterogeneous entities are embedded into a low-dimensional space through the representation learning algorithms Word2vec and TransR, and the distances between SBS and service vectors are calculated. The input of the recommendation model is an SBS requirement (the target SBS); a set of functionally similar SBSs is extracted from the knowledge graph, which relieves the cold-start problem. The recommendation model then uses CF to recommend services to the target SBS. Finally, this paper verifies the effectiveness of the method on a real-world dataset. Compared with several state-of-the-art methods, our method achieves the best service hit rate and ranking quality. Full article
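The retrieval step described above can be sketched as follows: once SBSs are embedded (via Word2vec/TransR in the paper), the nearest neighbors of a target requirement are found by vector similarity, and the services those neighbors invoke are recommended CF-style. The embeddings, invocation records, and cosine-similarity choice below are illustrative assumptions, not the paper's exact model:

```python
import math

# Hypothetical sketch: neighbor lookup over invented SBS embeddings, then
# recommending the services invoked by the k most similar SBSs.

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sbs_embeddings = {            # invented low-dimensional embeddings
    "sbs1": [0.9, 0.1, 0.0],
    "sbs2": [0.8, 0.2, 0.1],
    "sbs3": [0.0, 0.1, 0.9],
}
invocations = {               # invented SBS -> service co-invocation records
    "sbs1": {"mapAPI", "geoAPI"},
    "sbs2": {"mapAPI"},
    "sbs3": {"payAPI"},
}

def recommend(target_vec, k=1):
    """Union of services invoked by the k SBSs most similar to the target."""
    ranked = sorted(sbs_embeddings,
                    key=lambda s: cosine(target_vec, sbs_embeddings[s]),
                    reverse=True)
    services = set()
    for s in ranked[:k]:
        services |= invocations[s]
    return services

target = [0.85, 0.15, 0.05]   # embedding of the target SBS requirement
recs = recommend(target, k=2)
```

Because the neighbors come from embedding similarity rather than shared invocation history, a brand-new target SBS with no records can still receive recommendations, which is the cold-start relief the abstract describes.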
