Search Results (38)

Search Parameters:
Keywords = proxy label

28 pages, 3773 KiB  
Article
Generative Artificial Intelligence for Synthetic Spectral Data Augmentation in Sensor-Based Plastic Recycling
by Roman-David Kulko, Andreas Hanus and Benedikt Elser
Sensors 2025, 25(13), 4114; https://doi.org/10.3390/s25134114 - 1 Jul 2025
Viewed by 377
Abstract
The reliance on deep learning models for sensor-based material classification amplifies the demand for labeled training data. However, acquiring large-scale, annotated spectral data for applications such as near-infrared (NIR) reflectance spectroscopy in plastic sorting remains a significant challenge due to high acquisition costs and environmental variability. This paper investigates the potential of large language models (LLMs) in synthetic spectral data generation. Specifically, it examines whether LLMs have acquired sufficient implicit knowledge to assist in generating spectral data and introduce meaningful variations that enhance model performance when used for data augmentation. Classification accuracy is reported exclusively as a proxy for structural plausibility of the augmented spectra; maximizing augmentation performance itself is not the study’s goal. From as little as one empirical mean spectrum per class, LLM-guided simulation produced data that enabled up to 86% accuracy, evidence that the generated variation preserves class-distinguishing information. While the approach performs best for spectrally distinct polymers, overlapping classes remain challenging. Additionally, the transfer of optimized augmentation parameters to unseen classes indicates potential for generalization across material types. While plastic sorting serves as a case study, the methodology may be applicable to other domains such as agriculture or food quality assessment, where spectral data are limited. The study outlines a novel path toward scalable, AI-supported data augmentation in spectroscopy-based classification systems. Full article
(This article belongs to the Section Intelligent Sensors)
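The augmentation-from-one-mean-spectrum idea (minus the LLM guidance) can be sketched in a few lines; the wavelength grid, polymer classes, and jitter ranges below are invented for illustration, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
wav = np.linspace(1000, 2500, 128)  # hypothetical NIR wavelength grid (nm)

# One empirical mean spectrum per class (toy Gaussian stand-ins, not real polymer data)
means = {
    "PET": np.exp(-((wav - 1660) / 60) ** 2),
    "HDPE": np.exp(-((wav - 1210) / 60) ** 2),
}

def augment(mean_spec, n=50):
    """Jitter a class mean with gain, baseline, and noise variations."""
    out = []
    for _ in range(n):
        gain = rng.uniform(0.8, 1.2)          # detector gain variation
        baseline = rng.uniform(-0.05, 0.05)   # additive baseline drift
        noise = rng.normal(0, 0.02, mean_spec.shape)
        out.append(gain * mean_spec + baseline + noise)
    return np.array(out)

# Nearest-centroid classifier trained purely on the synthetic spectra
train = {c: augment(m) for c, m in means.items()}
centroids = {c: s.mean(axis=0) for c, s in train.items()}

def predict(spec):
    return min(centroids, key=lambda c: np.linalg.norm(spec - centroids[c]))
```

A classifier trained only on such synthetic spectra still recovers each class mean, which mirrors how the paper uses classification accuracy as a plausibility proxy for the generated variation.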

28 pages, 4325 KiB  
Article
Decoding Global Palates: Unveiling Cross-Cultural Flavor Preferences Through Online Recipes
by Qing Zhang, David Elsweiler and Christoph Trattner
Foods 2025, 14(8), 1411; https://doi.org/10.3390/foods14081411 - 18 Apr 2025
Viewed by 762
Abstract
Navigating cross-cultural food choices is complex, influenced by cultural nuances and various factors, with flavor playing a crucial role. Understanding cultural flavor preferences helps individuals make informed food choices in cross-cultural contexts. We examined flavor differences across China, the US, and Germany, as well as consistent flavor preference patterns using online recipes from prominent recipe portals. Distinct from applying traditional food pairing theory, we directly mapped ingredients to their individual flavor compounds using an authorized database. This allowed us to analyze cultural flavor preferences at the molecular level and conduct machine learning experiments on 25,000 recipes from each culture to reveal flavor-based distinctions. The classifier, trained on these flavor compounds, achieved 77% accuracy in discriminating recipes by country in a three-class classification task, where random choice would yield 33.3% accuracy. Additionally, using user interaction data on appreciation metrics from each recipe portal (e.g., recipe ratings), we selected the top 10% and bottom 10% of recipes as proxies for appreciated and less appreciated recipes, respectively. Models trained within each portal discriminated between the two groups, reaching a maximum accuracy of 66%, while random selection would result in a baseline accuracy of 50%. We also explored cross-cultural preferences by applying classifiers trained on one culture to recipes from other cultures. While the cross-cultural performance was modest (specifically, a max accuracy of 54% was obtained when predicting food preferences of the US users with models trained on the Chinese data), the results indicate potential shared flavor patterns, especially between Chinese and US recipes, which show similarities, while German preferences differ.
Exploratory analyses further validated these findings: we constructed ingredient networks based on co-occurrence relationships to label recipes as savory or sweet, and clustered the flavor profiles of compounds as sweet or non-sweet. These analyses showed opposing trends in sweet vs. non-sweet/savory appreciation between US and German users, supporting the machine learning results. Although our findings are likely to be influenced by biases in online data sources and the limitations of data-driven methods, they may still highlight meaningful cultural differences and shared flavor preferences. These insights offer potential for developing food recommender systems that cater to cross-cultural contexts. Full article
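The ingredient-to-compound featurization described above can be sketched as follows; the compound mapping and the tiny per-country corpora are invented stand-ins, not the authorized database or the 25,000-recipe sets used in the paper:

```python
import numpy as np

# Hypothetical ingredient -> flavor-compound mapping (illustrative only)
compound_db = {
    "soy_sauce": {"glutamate", "furanone"},
    "ginger":    {"gingerol", "citral"},
    "butter":    {"diacetyl", "lactone"},
    "vanilla":   {"vanillin"},
    "sausage":   {"glutamate", "lactone"},
}
compounds = sorted({c for s in compound_db.values() for c in s})

def featurize(ingredients):
    """Binary flavor-compound vector for one recipe."""
    present = set().union(*(compound_db[i] for i in ingredients))
    return np.array([c in present for c in compounds], dtype=float)

recipes = {  # toy labeled corpus, two recipes per country
    "CN": [["soy_sauce", "ginger"], ["soy_sauce"]],
    "US": [["butter", "vanilla"], ["vanilla"]],
    "DE": [["sausage", "butter"], ["sausage"]],
}
centroids = {k: np.mean([featurize(r) for r in v], axis=0) for k, v in recipes.items()}

def predict(ingredients):
    """Nearest-centroid stand-in for the paper's trained classifier."""
    x = featurize(ingredients)
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))
```

Any three-class classifier over such compound vectors is judged against the 33.3% random baseline the abstract cites.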

22 pages, 4990 KiB  
Article
Edge-Centric Embeddings of Digraphs: Properties and Stability Under Sparsification
by Ahmed Begga, Francisco Escolano Ruiz and Miguel Ángel Lozano
Entropy 2025, 27(3), 304; https://doi.org/10.3390/e27030304 - 14 Mar 2025
Viewed by 818
Abstract
In this paper, we define and characterize the embedding of edges and higher-order entities in directed graphs (digraphs) and relate these embeddings to those of nodes. Our edge-centric approach consists of the following: (a) Embedding line digraphs (or their iterated versions); (b) Exploiting the rank properties of these embeddings to show that edge/path similarity can be posed as a linear combination of node similarities; (c) Solving scalability issues through digraph sparsification; (d) Evaluating the performance of these embeddings for classification and clustering. We commence by identifying the motive behind the need for edge-centric approaches. Then we proceed to introduce all the elements of the approach, and finally, we validate it. Our edge-centric embedding entails a top-down mining of links, instead of inferring them from the similarities of node embeddings. This analysis is key to discovering inter-subgraph links that hold the whole graph connected, i.e., central edges. Using directed graphs (digraphs) allows us to cluster edge-like hubs and authorities. In addition, since directed edges inherit their labels from destination (origin) nodes, their embedding provides a proxy representation for node classification and clustering as well. This representation is obtained by embedding the line digraph of the original one. The line digraph provides nice formal properties with respect to the original graph; in particular, it produces more entropic latent spaces. With these properties at hand, we can relate edge embeddings to node embeddings. The main contribution of this paper is to set and prove the linearity theorem, which poses each element of the transition matrix for an edge embedding as a linear combination of the elements of the transition matrix for the node embedding. 
As a result, the rank preservation property explains why embedding the line digraph and using the labels of the destination nodes provides better classification and clustering performances than embedding the nodes of the original graph. In other words, we not only facilitate edge mining but also reinforce node classification and clustering. However, computing the line digraph is challenging, and a sparsification strategy is implemented for the sake of scalability. Our experimental results show that the line digraph representation of the sparsified input graph is quite stable as we increase the sparsification level, and also that it outperforms the original (node-centric) representation. For the sake of simplicity, our theorem relies on node2vec-like (factorization) embeddings. However, we also include several experiments showing how line digraphs may improve the performance of Graph Neural Networks (GNNs), also following the principle of maximum entropy. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
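The line digraph, the core construction here, is easy to build from an edge list: its nodes are the arcs of the original digraph, with an arc (u,v) → (v,w) whenever both arcs exist. A minimal sketch (quadratic scan, fine for illustration):

```python
def line_digraph(edges):
    """Nodes of L(G) are the arcs of G; arc (u,v) -> (v,w) whenever both exist."""
    nodes = list(edges)
    adj = {e: [] for e in nodes}
    for (u, v) in nodes:
        for (x, w) in nodes:
            if v == x:
                adj[(u, v)].append((x, w))
    return adj

# Example: a directed triangle a -> b -> c -> a
L = line_digraph([("a", "b"), ("b", "c"), ("c", "a")])
```

The line digraph of a directed 3-cycle is again a 3-cycle; and since each arc (u,v) inherits the label of its destination node v, node labels transfer to the edge-centric representation, as the abstract describes.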

19 pages, 3546 KiB  
Article
Proxy-Based Semi-Supervised Cross-Modal Hashing
by Hao Chen, Zhuoyang Zou and Xinghui Zhu
Appl. Sci. 2025, 15(5), 2390; https://doi.org/10.3390/app15052390 - 23 Feb 2025
Cited by 1 | Viewed by 508
Abstract
Due to the difficulty in obtaining label information in practical applications, semi-supervised cross-modal retrieval has emerged. However, the existing semi-supervised cross-modal hashing retrieval methods mainly focus on exploring the structural relationships between data and generating high-quality discrete pseudo-labels while neglecting the relationships between data and categories, as well as the structural relationships between data and categories inherent in continuous pseudo-labels. Based on this, Proxy-based Semi-Supervised Cross-Modal Hashing (PSSCH) is proposed. Specifically, we propose a category proxy network to generate category center points in both feature and hash spaces. Additionally, we design an Adaptive Dual-Label Loss function, which applies different learning strategies to discrete ground truth labels and continuous pseudo-labels and adaptively increases the training weights of unlabeled data with more epochs. Experiments on the MIRFLICKR-25K, NUS-WIDE, and MS COCO datasets show that PSSCH achieves the highest mAP improvements of 3%, 1%, and 4%, respectively, demonstrating better results than the latest baseline methods. Full article
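The proxy idea can be illustrated schematically: assignment probabilities come from a softmax over negative distances to category proxies, and a dual-label loss treats hard ground-truth labels and continuous pseudo-labels differently, up-weighting pseudo-labeled data as training progresses. The linear ramp-up and Euclidean distance below are assumptions for illustration, not the exact PSSCH formulation:

```python
import numpy as np

def proxy_probs(x, proxies):
    """Softmax over negative distances from a sample to each category proxy."""
    d = np.linalg.norm(proxies - x, axis=1)
    e = np.exp(-d)
    return e / e.sum()

def dual_label_loss(x, proxies, target, is_pseudo, epoch, total_epochs):
    """Cross-entropy against a hard or soft target; pseudo-labeled samples
    get a weight that grows with the epoch (linear ramp is our assumption)."""
    p = proxy_probs(x, proxies)
    ce = -np.sum(target * np.log(p + 1e-12))
    weight = (epoch / total_epochs) if is_pseudo else 1.0
    return weight * ce
```

A sample near its own proxy incurs low loss against the correct target and high loss against the wrong one, which is what drives proxies toward acting as category centers.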

22 pages, 3198 KiB  
Article
Metabolic Responses, Cell Recoverability, and Protein Signatures of Three Extremophiles: Sustained Life During Long-Term Subzero Incubations
by Marcela Ewert, Brook L. Nunn, Erin Firth and Karen Junge
Microorganisms 2025, 13(2), 251; https://doi.org/10.3390/microorganisms13020251 - 24 Jan 2025
Viewed by 1192
Abstract
Few halophilic strains have been examined in detail for their culturability and metabolic activity at subzero temperatures, within the ice matrix, over the longer term. Here, we examine three Arctic strains with varied salinity tolerances: Colwellia psychrerythraea str. 34H (Cp34H), Psychrobacter sp. str. 7E (P7E), and Halomonas sp. str. 3E (H3E). As a proxy for biosignatures, we examine observable cells, metabolic activity, and recoverability over 12-month incubations at −5, −10 and −36 °C. To further develop life-detection strategies, we also study the short-term tracking of new protein synthesis in Cp34H at −5 °C for the first time, using isotopically labeled 13C6-leucine and mass spectrometry-based proteomics. All three bacterial species remained metabolically active after 12 months at −5 °C, while recoverability varied greatly among strains. At −10 and −36 °C, metabolic activity was drastically reduced and recoverability patterns were strain-specific. Cells were observable at high numbers in all treatments, validating their potential as biosignatures. Newly synthesized proteins were detectable and identifiable after one hour of incubation. Proteins prioritized for synthesis with the provided substrate are involved in motility, protein synthesis, and in nitrogen and carbohydrate metabolism, with an emphasis on structural proteins, enzymatic activities in central metabolic pathways, and regulatory functions. Full article
(This article belongs to the Special Issue Advances in Halophilic Microorganisms)
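The 13C6-leucine strategy works because each incorporated labeled leucine shifts a peptide's mass by six 12C → 13C substitutions (each ≈ 1.00336 Da), so newly synthesized peptides separate from pre-existing ones in the mass spectra. A minimal sketch with a made-up peptide sequence:

```python
MASS_SHIFT_PER_LEU = 6 * 1.00336  # six 12C -> 13C substitutions, in daltons

def label_mass_shift(peptide):
    """Expected mass shift (singly charged) if every leucine in the peptide
    came from the supplied 13C6-leucine pool."""
    return peptide.count("L") * MASS_SHIFT_PER_LEU
```

Searching the spectra for both the unshifted and the shifted peptide masses is what lets proteomics distinguish pre-existing from newly synthesized proteins after the one-hour incubation.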

16 pages, 2495 KiB  
Article
Evaluating Medical Image Segmentation Models Using Augmentation
by Mattin Sayed, Sari Saba-Sadiya, Benedikt Wichtlhuber, Julia Dietz, Matthias Neitzel, Leopold Keller, Gemma Roig and Andreas M. Bucher
Tomography 2024, 10(12), 2128-2143; https://doi.org/10.3390/tomography10120150 - 23 Dec 2024
Viewed by 2028
Abstract
Background: Medical image segmentation is an essential step in both clinical and research applications, and automated segmentation models—such as TotalSegmentator—have become ubiquitous. However, robust methods for validating the accuracy of these models remain limited, and manual inspection is often necessary before the segmentation masks produced by these models can be used. Methods: To address this gap, we have developed a novel validation framework for segmentation models, leveraging data augmentation to assess model consistency. We produced segmentation masks for both the original and augmented scans, and we calculated the alignment metrics between these segmentation masks. Results: Our results demonstrate strong correlation between the segmentation quality of the original scan and the average alignment between the masks of the original and augmented CT scans. These results were further validated by supporting metrics, including the coefficient of variance and the average symmetric surface distance, indicating that agreement with augmented-scan segmentation masks is a valid proxy for segmentation quality. Conclusions: Overall, our framework offers a pipeline for evaluating segmentation performance without relying on manually labeled ground truth data, establishing a foundation for future advancements in automated medical image analysis. Full article
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
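The validation framework reduces to a simple loop: segment the original scan, segment augmented copies, map the augmented masks back into the original frame, and measure agreement. A minimal sketch with a thresholding stand-in for the segmentation model and axis flips as the (invertible) augmentations:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def segment(img):
    """Stand-in for a segmentation model (simple threshold here)."""
    return img > 0.5

def consistency_score(img, n_axes=2):
    """Agreement between the original mask and augmented-then-unaugmented masks."""
    base = segment(img)
    scores = []
    for axis in range(n_axes):
        aug_mask = segment(np.flip(img, axis=axis))              # segment augmented scan
        scores.append(dice(base, np.flip(aug_mask, axis=axis)))  # map back and compare
    return float(np.mean(scores))
```

A pointwise model like this threshold is perfectly consistent (score 1.0); real models drop below 1.0, and that drop is the ground-truth-free quality proxy the paper validates.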

14 pages, 11542 KiB  
Article
Open-Source High-Throughput Phenotyping for Blueberry Yield and Maturity Prediction Across Environments: Neural Network Model and Labeled Dataset for Breeders
by Jing Zhang, Jerome Maleski, Hudson Ashrafi, Jessica A. Spencer and Ye Chu
Horticulturae 2024, 10(12), 1332; https://doi.org/10.3390/horticulturae10121332 - 13 Dec 2024
Cited by 2 | Viewed by 1625
Abstract
Time to maturity and yield are important traits for highbush blueberry (Vaccinium corymbosum) breeding. Proper determination of the time to maturity of blueberry varieties and breeding lines informs the harvest window, ensuring that the fruits are harvested at optimum maturity and quality. On the other hand, high-yielding crops bring in high profits per acre of planting. Harvesting and quantifying the yield for each blueberry breeding accession are labor-intensive and impractical. Instead, visual ratings as an estimation of yield are often used as a faster way to quantify the yield, which is categorical and subjective. In this study, we developed and shared a high-throughput phenotyping method using neural networks to predict blueberry time to maturity and to provide a proxy for yield, overcoming the labor constraints of obtaining high-frequency data. We aim to facilitate further research in computer vision and precision agriculture by publishing the labeled image dataset and the trained model. In this research, true-color images of blueberry bushes were collected, annotated, and used to train a deep neural network object detection model [You Only Look Once (YOLOv11)] to detect mature and immature berries. Different versions of YOLOv11 were used, including nano, small, and medium, which had similar performance, while the medium version had slightly higher metrics. The YOLOv11m model shows strong performance for the mature berry class, with a precision of 0.90 and an F1 score of 0.90. The precision and recall for detecting immature berries were 0.81 and 0.79. The model was tested on 10 blueberry bushes by hand harvesting and weighing blueberries. The results showed that the model detects approximately 25% of the berries on the bushes, and the correlation coefficients between model-detected and hand-harvested traits were 0.66, 0.86, and 0.72 for mature fruit count, immature fruit count, and mature ratio, respectively. 
The model was applied to 91 advanced blueberry selections, categorizing them into groups with diverse levels of maturity and productivity using principal component analysis (PCA). These results inform the harvest window and yield of these breeding lines with precision and objectivity through berry classification and quantification. This model will be helpful for blueberry breeders, enabling more efficient selection, and for growers, helping them accurately estimate optimal harvest windows. This open-source tool can potentially enhance research capabilities and agricultural productivity. Full article
(This article belongs to the Special Issue AI-Powered Phenotyping of Horticultural Plants)
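The reported agreement between detections and hand-harvested ground truth boils down to a Pearson correlation over per-bush counts, plus a maturity ratio per bush; the counts below are invented for illustration, not the paper's data:

```python
import numpy as np

def maturity_ratio(mature, immature):
    """Share of detected berries that are mature."""
    return mature / (mature + immature)

# Hypothetical per-bush counts: the model sees only a fraction of the berries,
# but the counts can still track the harvested totals.
detected_mature = np.array([30, 45, 12, 50, 28])
harvested_mature = np.array([120, 170, 60, 210, 100])

r = np.corrcoef(detected_mature, harvested_mature)[0, 1]
```

This is why detecting only ~25% of the berries can still be useful: the correlation with the harvested totals, not the absolute count, is what ranks the breeding lines.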

20 pages, 13452 KiB  
Article
Cadastral-to-Agricultural: A Study on the Feasibility of Using Cadastral Parcels for Agricultural Land Parcel Delineation
by Han Sae Kim, Hunsoo Song and Jinha Jung
Remote Sens. 2024, 16(19), 3568; https://doi.org/10.3390/rs16193568 - 25 Sep 2024
Cited by 2 | Viewed by 1559
Abstract
Agricultural land parcels (ALPs) are essential for effective agricultural management, influencing activities ranging from crop yield estimation to policy development. However, traditional methods of ALP delineation are often labor-intensive and require frequent updates due to the dynamic nature of agricultural practices. Additionally, the significant variations across different regions and the seasonality of agriculture pose challenges to the automatic generation of accurate and timely ALP labels for extensive areas. This study introduces the cadastral-to-agricultural (Cad2Ag) framework, a novel approach that utilizes cadastral data as training labels to train deep learning models for the delineation of ALPs. Cadastral parcels, which are relatively widely available and stable elements in land management, serve as proxies for ALP delineation. Employing an adapted U-Net model, the framework automates the segmentation process using remote sensing images and geographic information system (GIS) data. This research evaluates the effectiveness of the proposed Cad2Ag framework in two U.S. regions—Indiana and California—characterized by diverse agricultural conditions. Through rigorous evaluation across multiple scenarios, the study explores diverse scenarios to enhance the accuracy and efficiency of ALP delineation. Notably, the framework demonstrates effective ALP delineation across different geographic contexts through transfer learning when supplemented with a small set of clean labels, achieving an F1-score of 0.80 and an Intersection over Union (IoU) of 0.67 using only 200 clean label samples. The Cad2Ag framework’s ability to leverage automatically generated, extensive, free training labels presents a promising solution for efficient ALP delineation, thereby facilitating effective management of agricultural land. Full article
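The two reported metrics, F1-score and IoU, can be computed pixelwise from binary masks, and they are linked by the identity F1 = 2·IoU/(1 + IoU):

```python
import numpy as np

def f1_and_iou(pred, truth):
    """Pixelwise F1 (Dice) and IoU for binary segmentation masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return f1, iou
```

By that identity, an F1 of 0.80 corresponds to an IoU of 0.80/1.20 ≈ 0.67, consistent with the pair reported in the abstract.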

22 pages, 4935 KiB  
Article
FireDA: A Domain Adaptation-Based Method for Forest Fire Recognition with Limited Labeled Scenarios
by Zhengjun Yan, Xing Zheng, Wei Li, Liming Wang, Peng Ding, Ling Zhang, Muyi Yin and Xiaowei Wang
Forests 2024, 15(10), 1684; https://doi.org/10.3390/f15101684 - 24 Sep 2024
Viewed by 1543
Abstract
Vision-based forest fire detection systems have significantly advanced through Deep Learning (DL) applications. However, DL-based models typically require large-scale labeled datasets for effective training, where the quality of data annotation is crucial to their performance. To address challenges related to the quality and quantity of labeling, a domain adaptation-based approach called FireDA is proposed for forest fire recognition in scenarios with limited labels. Domain adaptation, a subfield of transfer learning, facilitates the transfer of knowledge from a labeled source domain to an unlabeled target domain. A source domain, FBD, is first constructed from publicly available labeled data, covering three common fire scenarios: forest (F), brightness (B), and darkness (D). Subsequently, a novel algorithm called Neighborhood Aggregation-based 2-Stage Domain Adaptation (NA2SDA) is proposed. This method integrates feature distribution alignment with a target domain Proxy Classification Loss (PCL), leveraging a neighborhood aggregation mechanism and a memory bank designed for the unlabeled samples in the target domain. This mechanism calibrates the source classifier and generates more accurate pseudo-labels for the unlabeled samples. Consequently, based on these pseudo-labels, the Local Maximum Mean Discrepancy (LMMD) and the Proxy Classification Loss (PCL) are computed. To validate the efficacy of the proposed method, the publicly available forest fire dataset, FLAME, is employed as the target domain for constructing a transfer learning task. The results demonstrate that our method achieves performance comparable to the supervised Convolutional Neural Network (CNN)-based state-of-the-art (SOTA) method, without requiring access to labels from the FLAME training set. Therefore, our study presents a viable solution for forest fire recognition in scenarios with limited labeling and establishes a high-accuracy benchmark for future research.
Full article
(This article belongs to the Section Natural Hazards and Risk Management)
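LMMD builds on the plain maximum mean discrepancy between source and target feature distributions; a compact sketch of the squared MMD under an RBF kernel (the class-conditional weighting that turns MMD into LMMD is omitted here):

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y, RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Minimizing this quantity between source and target features is what "feature distribution alignment" amounts to; LMMD applies it per (pseudo-)class rather than globally.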

12 pages, 1226 KiB  
Article
Speech Emotion Recognition Incorporating Relative Difficulty and Labeling Reliability
by Youngdo Ahn, Sangwook Han, Seonggyu Lee and Jong Won Shin
Sensors 2024, 24(13), 4111; https://doi.org/10.3390/s24134111 - 25 Jun 2024
Viewed by 1341
Abstract
Emotions in speech are expressed in various ways, and the speech emotion recognition (SER) model may perform poorly on unseen corpora that contain different emotional factors from those expressed in training databases. To construct an SER model robust to unseen corpora, regularization approaches or metric losses have been studied. In this paper, we propose an SER method that incorporates relative difficulty and labeling reliability of each training sample. Inspired by the Proxy-Anchor loss, we propose a novel loss function which gives higher gradients to the samples for which the emotion labels are more difficult to estimate among those in the given minibatch. Since the annotators may label the emotion based on the emotional expression which resides in the conversational context or other modality but is not apparent in the given speech utterance, some of the emotional labels may not be reliable and these unreliable labels may affect the proposed loss function more severely. In this regard, we propose to apply label smoothing for the samples misclassified by a pre-trained SER model. Experimental results showed that the performance of the SER on unseen corpora was improved by adopting the proposed loss function with label smoothing on the misclassified data. Full article
(This article belongs to the Special Issue Sensors Applications on Emotion Recognition)
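Both ingredients of the proposed loss are simple in isolation: label smoothing softens the targets of samples whose annotations may be unreliable, and a softmax over per-sample losses up-weights the difficult samples in a minibatch. The exact weighting in the paper follows the Proxy-Anchor formulation; this is only a schematic version:

```python
import numpy as np

def smooth_labels(onehot, eps=0.1):
    """Soften one-hot targets: keep mass 1-eps on the label, spread eps uniformly."""
    k = onehot.shape[-1]
    return onehot * (1 - eps) + eps / k

def difficulty_weights(losses):
    """Relative minibatch weights: harder samples (higher loss) get more gradient."""
    e = np.exp(losses - losses.max())  # stabilized softmax
    return e / e.sum()
```

Applying the smoothing only to samples a pre-trained model misclassifies, as the paper proposes, limits the damping to the targets most likely to be mislabeled.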

18 pages, 495 KiB  
Article
A Cross-Sectional Study of Sports Food Consumption Patterns, Experiences, and Perceptions amongst Non-Athletes in Australia
by Celeste I. Chapple, Alissa J. Burnett, Julie L. Woods and Catherine G. Russell
Nutrients 2024, 16(8), 1101; https://doi.org/10.3390/nu16081101 - 9 Apr 2024
Cited by 3 | Viewed by 2260
Abstract
Sports foods are designed for athletes, yet their availability, type, and sales have increased over the past decade, likely driven by non-athlete use. This could lead to detrimental health outcomes via over/misuse or unwanted side effects. The aim of this study was to describe sports food consumption patterns and associated drivers, consumption reasons, perception of risks, and side effects experienced amongst non-athletes in Australia. In 2022, n = 307 non-athlete Australian adults (18–65 years) completed an online cross-sectional survey including closed-ended (consumption patterns, factors, and exercise participation) and open-ended questions (reasons for consumption, risk perception, and side effects experienced). Descriptive statistics (frequency and percent) described the sample. Ordinal logistic regression was used for univariate associations and a multivariate model was used to determine relationships between sports food consumption proxy and significant univariate associations. The themes were analysed via inductive thematic analysis using NVivo 14. Females consumed sports foods most frequently, 65% of participants consumed three or more sports foods, and participants with higher sports food consumption/frequency were less likely to perceive risks or experience side effects. The main reason for consumption was protein intake, digestion/stomach issues were the main perceived risks, and the main side effect was bloating. Despite understanding the risks and side effects, non-athlete consumers continue to use numerous sports foods, which appear to be influenced by sociodemographic factors and packaging labels. Tighter regulation of packaging-label information would ensure safer and more informed consumption. Full article
(This article belongs to the Topic Consumer Behaviour and Healthy Food Consumption)

19 pages, 5716 KiB  
Article
Efficient Information-Theoretic Large-Scale Semi-Supervised Metric Learning via Proxies
by Peng Chen and Huibing Wang
Appl. Sci. 2023, 13(15), 8993; https://doi.org/10.3390/app13158993 - 5 Aug 2023
Viewed by 1242
Abstract
Semi-supervised metric learning intends to learn a distance function from the limited labeled data as well as a large amount of unlabeled data to better gauge the similarities of any two instances than using a general distance function. However, most existing semi-supervised metric learning methods rely on the manifold assumptions to mine the rich discriminant information of the unlabeled data, which breaks the intrinsic connection between the manifold regularizer-building process and the subsequent metric learning. Moreover, these methods usually encounter high computational or memory overhead. To solve these issues, we develop a novel method entitled Information-Theoretic Large-Scale Semi-Supervised Metric Learning via Proxies (ISMLP). ISMLP aims to simultaneously learn multiple proxy vectors as well as a Mahalanobis matrix and forms the semi-supervised metric learning as the probability distribution optimization parameterized by the Mahalanobis distance between the instance and each proxy vector. ISMLP maximizes the entropy of the labeled data and minimizes that of the unlabeled data to follow the entropy regularization, in this way, the labeled part and unlabeled part can be integrated in a meaningful way. Furthermore, the time complexity of the proposed method has a linear dependency concerning the number of instances, thereby, can be extended to the large-scale dataset without incurring too much time. Experiments on multiple datasets demonstrate the superiority of the proposed method over the compared methods used in the experiments. Full article
(This article belongs to the Special Issue Algorithms and Applications of Multi-View Information Clustering)
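The entropy-regularization idea can be sketched directly from the abstract: assignment probabilities come from a softmax over negative Mahalanobis distances between a sample and each proxy, and the entropy of that distribution is what ISMLP maximizes for labeled data and minimizes for unlabeled data. A minimal sketch (the optimization itself is omitted):

```python
import numpy as np

def mahalanobis2(x, p, M):
    """Squared Mahalanobis distance under a (PSD) matrix M."""
    d = x - p
    return float(d @ M @ d)

def proxy_distribution(x, proxies, M):
    """Probability of assigning x to each proxy via Mahalanobis distances."""
    d = np.array([mahalanobis2(x, p, M) for p in proxies])
    e = np.exp(-d)
    return e / e.sum()

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())
```

A sample sitting on a proxy gives a near-one-hot distribution (low entropy); a sample equidistant between two proxies gives entropy near ln 2, the quantity the two regularization terms push in opposite directions.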

12 pages, 777 KiB  
Article
Can Respiration Complexity Help the Diagnosis of Disorders of Consciousness in Rehabilitation?
by Piergiuseppe Liuzzi, Antonello Grippo, Francesca Draghi, Bahia Hakiki, Claudio Macchi, Francesca Cecchi and Andrea Mannini
Diagnostics 2023, 13(3), 507; https://doi.org/10.3390/diagnostics13030507 - 30 Jan 2023
Cited by 1 | Viewed by 1857
Abstract
Background: Autonomic Nervous System (ANS) activity, such as cardiac, respiratory and electrodermal activity, has been shown to provide specific information on different consciousness states. Respiration rates (RRs) are considered indicators of ANS activity and breathing patterns are currently already included in the evaluation of patients in critical care. Objective: The aim of this work was to derive a proxy of autonomic functions via the RR variability and compare its diagnostic capability with known neurophysiological biomarkers of consciousness. Methods: In a cohort of sub-acute patients with brain injury during post-acute rehabilitation, polygraphy (ECG, EEG) recordings were collected. The EEG was labeled via descriptors based on American Clinical Neurophysiology Society terminology and the respiration variability was extracted by computing the Approximate Entropy (ApEn) of the ECG-derived respiration signal. Competing logistic regressions were applied to evaluate the improvement in model performances introduced by the RR ApEn. Results: Higher RR complexity was significantly associated with higher consciousness levels and improved diagnostic models’ performances in contrast to the ones built with only electroencephalographic descriptors. Conclusions: Adding a quantitative, instrumentally based complexity measure of RR variability to multimodal consciousness assessment protocols may improve diagnostic accuracy based only on electroencephalographic descriptors. Overall, this study promotes the integration of biomarkers derived from the central and the autonomic nervous system for the most comprehensive diagnosis of consciousness in a rehabilitation setting. Full article
(This article belongs to the Section Pathology and Molecular Diagnostics)
Show Figures

Figure 1

35 pages, 8231 KiB  
Article
A Digital Media Subscription Management System Combined with Blockchain and Proxy Re-Encryption Mechanisms
by Der-Chen Huang, Ling-Chun Liu, Yong-Yuan Deng and Chin-Ling Chen
Symmetry 2022, 14(10), 2167; https://doi.org/10.3390/sym14102167 - 16 Oct 2022
Cited by 6 | Viewed by 2432
Abstract
The subscription economy was born because the relationship between creators and customers is different than it used to be. The era of the creator economy seems to be filled with boundless promise, but at the end of the day, creators remain beholden to tech giants: neither the content they create nor the money it earns is under their complete control. Blockchain technology can resolve these injustices monopolized by enterprises. In the blockchain era, all kinds of creations, from music and video to text, can be turned into assets that can be purchased and traded through smart contracts. In the music industry, for example, creators do not need to share profits with streaming platforms and record labels, and can receive all the benefits directly. In addition, when created content is on the chain, every transaction is recorded on the blockchain and can be queried by anyone, avoiding opacity and future disputes. However, in a standard blockchain architecture, each role on the chain gains permanent data access rights once registration succeeds, so access control is inflexible. Therefore, this study proposes a digital media subscription mechanism based on the Hyperledger blockchain architecture combined with proxy re-encryption. We use symmetric and asymmetric cryptography, smart contracts, and algorithms to design our protocol. When a licensee violates the agreement with the creator, the creator can revoke the licensee's access rights to the digital media at any time, realizing more secure and convenient digital media transmission. The proposed scheme meets various security requirements of a blockchain architecture, and we applied the BAN logic proof model to evaluate its correctness. This study also proposes an arbitration mechanism for when disputes occur, and the scheme performs well in terms of communication and computational costs. Full article
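The proxy re-encryption idea at the heart of the abstract above can be illustrated with a toy BBS98-style (ElGamal-based) scheme: the proxy holds a re-encryption key that transforms the creator's ciphertext into one the licensee can decrypt, and revocation simply means the proxy discards that key. This sketch is not the paper's Hyperledger protocol; the tiny modulus and parameter names are illustrative assumptions, and a real deployment would use elliptic-curve groups.

```python
import secrets

# Toy BBS98-style proxy re-encryption. Parameters are deliberately tiny
# and insecure -- for illustration of the algebra only.
q = 1019            # prime order of the working subgroup
p = 2 * q + 1       # safe prime, p = 2039
g = 4               # generates the order-q subgroup of quadratic residues mod p

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)                     # (secret key, public key g^sk)

def encrypt(pk, m):
    """Encrypt integer m (1 <= m < p) under public key pk."""
    r = secrets.randbelow(q - 1) + 1
    return (m * pow(g, r, p)) % p, pow(pk, r, p)  # (m*g^r, g^(a*r))

def rekey(sk_from, sk_to):
    # Re-encryption key a -> b is b * a^(-1) mod q; held by the proxy.
    # Revocation = the proxy deleting this value.
    return (sk_to * pow(sk_from, -1, q)) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                    # g^(a*r) -> g^(b*r)

def decrypt(sk, ct):
    c1, c2 = ct
    s = pow(c2, pow(sk, -1, q), p)               # recover g^r
    return (c1 * pow(s, -1, p)) % p
```

A creator encrypts under their own key, hands `rekey(sk_creator, sk_licensee)` to the proxy, and the licensee decrypts the re-encrypted ciphertext; revoking access requires no change to the stored ciphertext, only removal of the proxy's key, which mirrors the revocation property the paper claims.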
(This article belongs to the Section Computer)
Show Figures

Figure 1

24 pages, 3158 KiB  
Article
Impaired Neurovascular Function Underlies Poor Neurocognitive Outcomes and Is Associated with Nitric Oxide Bioavailability in Congenital Heart Disease
by Vanessa J. Schmithorst, Phillip S. Adams, Daryaneh Badaly, Vincent K. Lee, Julia Wallace, Nancy Beluk, Jodie K. Votava-Smith, Jacqueline G. Weinberg, Sue R. Beers, Jon Detterich, John C. Wood, Cecilia W. Lo and Ashok Panigrahy
Metabolites 2022, 12(9), 882; https://doi.org/10.3390/metabo12090882 - 19 Sep 2022
Cited by 5 | Viewed by 2983
Abstract
We use a non-invasive MRI proxy of neurovascular function (Pnvf) to assess the ability of the vasculature to supply baseline metabolic demand, to compare pediatric and young adult congenital heart disease (CHD) patients to normal referents, and to relate the proxy to neurocognitive outcomes and nitric oxide bioavailability. In a prospective single-center study, resting-state blood-oxygen-level-dependent (BOLD) and arterial spin labeling (ASL) MRI scans were successfully obtained from 24 CHD patients (age = 15.4 ± 4.06 years) and 63 normal referents (age = 14.1 ± 3.49 years). Pnvf was computed on a voxelwise basis as the negative of the ratio of functional connectivity strength (FCS), estimated from the resting-state BOLD acquisition, to regional cerebral blood flow (rCBF), estimated from the ASL acquisition. Pnvf was used to predict end-tidal CO2 (PETCO2) levels and compared to those estimated from the BOLD data. Nitric oxide availability was obtained via nasal measurements (nNO). Pnvf was compared on a voxelwise basis between CHD patients and normal referents and correlated with nitric oxide availability and neurocognitive outcomes as assessed via the NIH Toolbox. Pnvf was shown to be highly predictive of PETCO2 using theoretical modeling. Pnvf was found to be significantly reduced in CHD patients in default mode network (DMN, comprising the ventromedial prefrontal cortex and posterior cingulate/precuneus), salience network (SN, comprising the insula and dorsal anterior cingulate), and central executive network (CEN, comprising posterior parietal and dorsolateral prefrontal cortex) regions, with similar findings noted in single cardiac ventricle patients. Positive correlations of Pnvf in these brain regions, as well as the hippocampus, were found with neurocognitive outcomes. Similarly, positive correlations between Pnvf and nitric oxide availability were found in frontal DMN and CEN regions, with particularly strong correlations in subcortical regions (putamen). Reduced Pnvf in CHD patients was found to be mediated by nNO. Mediation analyses further supported that reduced Pnvf in these regions underlies worse neurocognitive outcomes in CHD patients and is associated with nitric oxide bioavailability. Impaired neurovascular function, which may be non-invasively estimated via combined arterial spin labeling and BOLD MR imaging, is a nitric-oxide-bioavailability-dependent factor implicated in adverse neurocognitive outcomes in pediatric and young adult CHD. Full article
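The voxelwise Pnvf definition given in the abstract above (the negative of the FCS-to-rCBF ratio) can be sketched numerically. This is a hedged illustration only: the correlation-based FCS estimate, array layout, and function names are assumptions for demonstration, not the authors' actual processing pipeline.

```python
import numpy as np

def functional_connectivity_strength(bold):
    """FCS per voxel from resting-state BOLD data.

    bold : array of shape (n_voxels, n_timepoints).
    FCS is taken here as each voxel's mean Pearson correlation with
    all other voxels (one common FCS definition; an assumption).
    """
    z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
    corr = (z @ z.T) / bold.shape[1]       # voxel-by-voxel correlation matrix
    np.fill_diagonal(corr, 0.0)            # exclude self-correlation
    return corr.sum(axis=1) / (bold.shape[0] - 1)

def pnvf(bold, rcbf, eps=1e-6):
    """Pnvf = -(FCS / rCBF), computed per voxel.

    rcbf : array of shape (n_voxels,) from the ASL acquisition;
    eps guards against division by zero in masked-out voxels.
    """
    return -functional_connectivity_strength(bold) / (rcbf + eps)
```

Intuitively, a voxel with high connectivity demand (FCS) but low perfusion (rCBF) yields a strongly negative Pnvf, consistent with the abstract's reading of reduced Pnvf as impaired vascular supply relative to metabolic demand.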
(This article belongs to the Special Issue Exploring Oxidative Stress Biomarkers in Human Disease)
Show Figures

Figure 1
