
Search Results (17)

Search Parameters:
Keywords = NICE classification

19 pages, 1606 KB  
Article
Periodicity Shadows II: Computational Aspects
by Jerzy Białkowski and Adam Skowyrski
Mathematics 2025, 13(24), 3933; https://doi.org/10.3390/math13243933 - 9 Dec 2025
Viewed by 403
Abstract
This article is the second part of a research project initiated last year, in which we introduced and investigated so-called periodicity shadows, i.e., special skew-symmetric integer matrices related to symmetric algebras with periodic simple modules. These matrices provide a new tool for describing the structure of quivers arising in the problem of classifying all tame symmetric algebras of period four, an important class of algebras with links to various branches of algebra. In the first part of the project, we focused on the theoretical aspects of this notion, setting a general framework, and obtained several nice properties of these quivers. The part of the project described in this paper is devoted to the complementary computational issues. We present an algorithm for computing all tame periodicity shadows of a given size, briefly discuss the output for small sizes, i.e., up to 7×7, and provide some graphical visualizations. We also mention that by applying the computations for sizes up to 5×5, we obtained a classification of tame symmetric algebras of period four defined by quivers with at most 5 vertices. Full article
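The defining property named in the abstract is easy to state in code. As a minimal illustration (not the authors' algorithm), a candidate matrix can be screened for skew-symmetry before any deeper structure is examined:

```python
def is_skew_symmetric(M):
    """True if M is a square integer matrix with M^T = -M
    (which forces every diagonal entry to be 0)."""
    n = len(M)
    if any(len(row) != n for row in M):
        return False
    return all(M[i][j] == -M[j][i] for i in range(n) for j in range(n))
```

For example, `is_skew_symmetric([[0, 2, -1], [-2, 0, 3], [1, -3, 0]])` holds, while any matrix with a nonzero diagonal entry fails.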

23 pages, 5588 KB  
Article
The Divergent Geographies of Urban Amenities: A Data Comparison Between OpenStreetMap and Google Maps
by Federico Mara, Chiara Anselmi, Federica Deri and Valerio Cutini
Sustainability 2025, 17(20), 9016; https://doi.org/10.3390/su17209016 - 11 Oct 2025
Cited by 2 | Viewed by 1908
Abstract
Urban models support sustainable, resilient, and equitable planning, but their validity hinges on underlying spatial data. This study examines the epistemological and technical consequences of relying on two dominant yet divergent platforms—OpenStreetMap (OSM) and Google Maps—for extracting proximity-based amenities within the 15-min city framework. Across four European contexts—Versilia, Gothenburg, Nice, and Vienna—we compare (i) data completeness and spatial coverage; (ii) semantic categories; and (iii) the effects of data heterogeneity on accessibility modelling. Findings show that OSM, while semantically consistent and openly accessible, systematically underrepresents peripheral amenities, introducing bias towards urban cores in accessibility metrics. Conversely, Google Maps provides broader coverage but is constrained by dependencies on extraction methods, opaque data structures, and ambiguous classification schemes, which hinder reproducibility, reduce interpretability, and limit its analytical robustness. These divergences yield distinct accessibility landscapes and competing readings of functionality and spatial equity. We argue that data source choice and protocol design are epistemological decisions and advocate transparent, hybrid strategies with cross-platform semantic harmonisation to strengthen robustness, equity, and policy relevance. Full article
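The completeness comparison described above can be sketched once per-category amenity counts have been extracted from each platform; the category names and counts below are hypothetical, and the function is illustrative rather than the study's protocol:

```python
def relative_shortfall(osm_counts, gmaps_counts):
    """Per-category share of amenities missing from the sparser source,
    relative to the larger of the two counts (hypothetical extracts)."""
    gaps = {}
    for cat in set(osm_counts) | set(gmaps_counts):
        a, b = osm_counts.get(cat, 0), gmaps_counts.get(cat, 0)
        hi = max(a, b)
        gaps[cat] = round((hi - min(a, b)) / hi, 2) if hi else 0.0
    return gaps
```

A gap of 1.0 flags a category present in only one source, the kind of systematic under-representation the study reports for peripheral amenities.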

22 pages, 4990 KB  
Article
Edge-Centric Embeddings of Digraphs: Properties and Stability Under Sparsification
by Ahmed Begga, Francisco Escolano Ruiz and Miguel Ángel Lozano
Entropy 2025, 27(3), 304; https://doi.org/10.3390/e27030304 - 14 Mar 2025
Viewed by 1656
Abstract
In this paper, we define and characterize the embedding of edges and higher-order entities in directed graphs (digraphs) and relate these embeddings to those of nodes. Our edge-centric approach consists of the following: (a) Embedding line digraphs (or their iterated versions); (b) Exploiting the rank properties of these embeddings to show that edge/path similarity can be posed as a linear combination of node similarities; (c) Solving scalability issues through digraph sparsification; (d) Evaluating the performance of these embeddings for classification and clustering. We commence by identifying the motive behind the need for edge-centric approaches. Then we proceed to introduce all the elements of the approach, and finally, we validate it. Our edge-centric embedding entails a top-down mining of links, instead of inferring them from the similarities of node embeddings. This analysis is key to discovering inter-subgraph links that hold the whole graph connected, i.e., central edges. Using directed graphs (digraphs) allows us to cluster edge-like hubs and authorities. In addition, since directed edges inherit their labels from destination (origin) nodes, their embedding provides a proxy representation for node classification and clustering as well. This representation is obtained by embedding the line digraph of the original one. The line digraph provides nice formal properties with respect to the original graph; in particular, it produces more entropic latent spaces. With these properties at hand, we can relate edge embeddings to node embeddings. The main contribution of this paper is to set and prove the linearity theorem, which poses each element of the transition matrix for an edge embedding as a linear combination of the elements of the transition matrix for the node embedding. 
As a result, the rank preservation property explains why embedding the line digraph and using the labels of the destination nodes yields better classification and clustering performance than embedding the nodes of the original graph. In other words, we not only facilitate edge mining but also reinforce node classification and clustering. However, computing the line digraph is challenging, so a sparsification strategy is implemented for the sake of scalability. Our experimental results show that the line digraph representation of the sparsified input graph remains quite stable as the sparsification level increases, and that it outperforms the original (node-centric) representation. For the sake of simplicity, our theorem relies on node2vec-like (factorization) embeddings. However, we also include several experiments showing how line digraphs may improve the performance of Graph Neural Networks (GNNs), again following the principle of maximum entropy. Full article
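The line digraph construction at the heart of the approach is compact enough to sketch directly; the following is an illustrative pure-Python version, not the authors' pipeline:

```python
def line_digraph(edges):
    """Line digraph L(G): each directed edge of G becomes a node, and
    (u, v) -> (v, w) is an arc of L(G) whenever the head of the first
    edge equals the tail of the second."""
    nodes = list(edges)
    arcs = [((u, v), (x, w)) for (u, v) in edges
            for (x, w) in edges if v == x]
    return nodes, arcs
```

On a 3-cycle 1→2→3→1 with an extra edge 2→4, the four edges become four nodes, and paths of length two in the original digraph become the arcs.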
(This article belongs to the Section Information Theory, Probability and Statistics)

9 pages, 562 KB  
Article
Prescribing Antibiotics for Children with Acute Conditions in Public Primary Care Clinics in Singapore: A Retrospective Cohort Database Study
by Vivien Min Er Lee, Si Hui Low, Sky Wei Chee Koh, Anna Szuecs, Victor Weng Keong Loh, Meena Sundram, José M. Valderas and Li Yang Hsu
Antibiotics 2024, 13(8), 695; https://doi.org/10.3390/antibiotics13080695 - 25 Jul 2024
Cited by 3 | Viewed by 2356
Abstract
Data on primary care antibiotic prescription practices for children in Singapore, which are essential for health care policy, are lacking. We aimed to address this gap and to benchmark prescription practices against international standards. A retrospective cohort database study was conducted on antibiotic prescriptions for children (aged < 18 years) who visited six public primary care clinics in Singapore between 2018 and 2021. Data were categorised according to the World Health Organization’s Access, Watch, Reserve (WHO AWaRe) classification. Quality indicators from the European Surveillance of Antimicrobial Consumption Network (ESAC-Net) and the National Institute for Health and Care Excellence (NICE) guidelines were used as a measure of the appropriateness of antibiotic prescribing at the individual and overall patient level. Across 831,669 polyclinic visits by children between 2018 and 2021, there was a significant reduction in the mean number of antibiotics prescribed per month during the pandemic years (2020–2021) compared to the pre-pandemic years (2018–2019) (MD 458.3, 95% CI 365.9–550.7). Most prescriptions (95.8%) for acute conditions fell within the WHO AWaRe “Access” group. Antibiotic prescribing for otitis media (55.2%) significantly exceeded the relevant quality indicator range (0–20%). The proportion of children receiving appropriate antibiotics for acute respiratory infections (n = 4506, 51.3%) and otitis media (n = 174, 49.4%) was low compared to the quality indicator (80–100%). There is a need to develop local evidence-based primary care antibiotic guidelines, as well as to support the development of stewardship programmes. Full article
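The WHO AWaRe categorisation used in the study amounts to a lookup over prescribed agents. A minimal sketch (the three example assignments follow the WHO AWaRe list; the tallying code is illustrative, not the study's):

```python
from collections import Counter

# Partial, illustrative mapping -- consult the current WHO AWaRe list
# for real analyses.
AWARE = {"amoxicillin": "Access", "azithromycin": "Watch", "linezolid": "Reserve"}

def aware_shares(prescriptions):
    """Percentage of prescriptions falling in each WHO AWaRe group."""
    counts = Counter(AWARE.get(d.lower(), "Unclassified") for d in prescriptions)
    total = sum(counts.values())
    return {g: round(100 * n / total, 1) for g, n in counts.items()}
```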

14 pages, 1136 KB  
Article
NICE: A Web-Based Tool for the Characterization of Transient Noise in Gravitational Wave Detectors
by Nunziato Sorrentino, Massimiliano Razzano, Francesco Di Renzo, Francesco Fidecaro and Gary Hemming
Software 2024, 3(2), 169-182; https://doi.org/10.3390/software3020008 - 18 Apr 2024
Cited by 1 | Viewed by 1918
Abstract
NICE—Noise Interactive Catalogue Explorer—is a web service developed for rapid, qualitative glitch analysis in gravitational wave data. Glitches are transient noise events that can smother the gravitational wave signal in data recorded by gravitational wave interferometer detectors. NICE provides interactive graphical tools to support detector noise characterization activities, in particular the analysis of glitches from past and current observing runs, from population-level visualization down to the characterization of individual glitches. The NICE back-end API consists of a multi-database structure that brings order to glitch metadata generated by external detector characterization tools, so that such information can be easily requested by gravitational wave scientists. Another novelty introduced by NICE is the interactive front-end infrastructure focused on investigating the instrumental and environmental origin of glitches, using labels determined by their time–frequency morphology. NICE is intended for integration with the Advanced Virgo, Advanced LIGO, and KAGRA characterization pipelines, and it will interface with systematic classification activities related to the transient noise sources present in the Virgo detector. Full article
(This article belongs to the Topic Software Engineering and Applications)

11 pages, 1345 KB  
Article
The Utility of Narrow-Band Imaging International Colorectal Endoscopic Classification in Predicting the Histologies of Diminutive Colorectal Polyps Using I-Scan Optical Enhancement: A Prospective Study
by Yeo Wool Kang, Jong Hoon Lee and Jong Yoon Lee
Diagnostics 2023, 13(16), 2720; https://doi.org/10.3390/diagnostics13162720 - 21 Aug 2023
Cited by 5 | Viewed by 4040
Abstract
(1) Background: This study aimed to evaluate the accuracy of predicting the histology of diminutive colonic polyps (DCPs) (≤5 mm) using i-scan optical enhancement (OE) based on the narrow-band imaging international colorectal endoscopic (NICE) classification. The study compared the diagnostic accuracy between experts who were already familiar with the NICE classification and trainees who were not, both before and after receiving brief training on the NICE classification. (2) Method: This prospective, single-center clinical trial was conducted at the Dong-A University Hospital from March 2020 to August 2020 and involved two groups of participants. The first group comprised two experienced endoscopists who were proficient in using i-scan OE and had received formal training in optical diagnosis and dye-less chromoendoscopy (DLC) techniques. The second group consisted of three endoscopists in training in internal medicine at the Dong-A University Hospital. Each endoscopist examined the polyps and evaluated them using the NICE classification through i-scan OE. The results were not shared among the participants. Trainee endoscopists were divided into pre- and post-training groups. (3) Results: During the study, a total of 259 DCPs were assessed using i-scan OE by the two expert endoscopists. They made real-time histological predictions according to the NICE classification criteria. For the trainee group, before training, the areas under the receiver operating characteristic curves (AUROCs) for predicting histopathological results using i-scan OE were 0.791, 0.775, and 0.818. After receiving training, however, the AUROCs improved to 0.935, 0.949, and 0.963, which were not significantly different from the results achieved by the expert endoscopists. (4) Conclusions: This study highlights the potential of i-scan OE, along with the NICE classification, in predicting the histopathological results of DCPs during colonoscopy.
In addition, this study suggests that even an endoscopist without experience in DLC can effectively use i-scan OE to improve diagnostic performance with only brief training. Full article
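AUROC figures like those reported above can be reproduced for any set of predictions via the rank-statistic definition; a small self-contained sketch:

```python
def auroc(labels, scores):
    """AUROC as the Mann-Whitney U statistic: the probability that a random
    positive case receives a higher score than a random negative case
    (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 corresponds to chance-level discrimination, 1.0 to a perfect ranking of diseased over non-diseased cases.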
(This article belongs to the Special Issue Advancements in Colonoscopy 2nd Edition)

15 pages, 1375 KB  
Article
The Need to Set up a Biobank Dedicated to Lymphoid Malignancies: Experience of a Single Center (Laboratory of Clinical and Experimental Pathology, University Côte d’Azur, Nice, France)
by Christophe Bontoux, Aubiège Marcovich, Samantha Goffinet, Florian Pesce, Virginie Tanga, Doriane Bohly, Myriam Salah, Kevin Washetine, Zeineb Messaoudi, Jean-Marc Felix, Christelle Bonnetaud, Lihui Wang, Geetha Menon, Jean-Philippe Berthet, Charlotte Cohen, Jonathan Benzaquen, Charles-Hugo Marquette, Sandra Lassalle, Elodie Long-Mira, Veronique Hofman, Luc Xerri, Marius Ilié and Paul Hofman
J. Pers. Med. 2023, 13(7), 1076; https://doi.org/10.3390/jpm13071076 - 29 Jun 2023
Viewed by 2591
Abstract
Several therapies to improve the management of lymphoma are currently being investigated, necessitating the development of new biomarkers. However, this requires high-quality and clinically annotated biological material. Therefore, we established a lymphoma biobank including all available biological material (tissue specimens and matched biological resources) along with associated clinical data for lymphoma patients diagnosed, according to the WHO classification, between 2005 and 2022 in the Laboratory of Clinical and Experimental Pathology, Nice, France. We retrospectively included selected cases in a new collection at the Côte d’Azur Biobank, which contains 2150 samples from 363 cases (351 patients). The male/female ratio was 1.3, and the median age at diagnosis was 58 years. The most common lymphoma types were classical Hodgkin lymphoma, diffuse large B-cell lymphoma, and extra-nodal marginal zone lymphoma of MALT tissue. The main sites of lymphoma were the mediastinum, lymph node, Waldeyer’s ring, and lung. The Côte d’Azur Biobank is ISO 9001 and ISO 20387 certified and aims to provide high quality and diverse biological material to support translational research projects into lymphoma. The clinico-pathological data generated by this collection should aid the development of new biomarkers to enhance the survival of patients with lymphoid malignancies. Full article

16 pages, 5353 KB  
Article
Land Use and Land Cover Mapping with VHR and Multi-Temporal Sentinel-2 Imagery
by Suzanna Cuypers, Andrea Nascetti and Maarten Vergauwen
Remote Sens. 2023, 15(10), 2501; https://doi.org/10.3390/rs15102501 - 10 May 2023
Cited by 31 | Viewed by 9961
Abstract
Land Use/Land Cover (LULC) mapping is the first step in monitoring urban sprawl and its environmental, economic and societal impacts. While satellite imagery and vegetation indices are commonly used for LULC mapping, the limited resolution of these images can hamper object recognition for Geographic Object-Based Image Analysis (GEOBIA). In this study, we utilize very high-resolution (VHR) optical imagery with a resolution of 50 cm to improve object recognition for GEOBIA LULC classification. We focus on the city of Nice, France, and identify ten LULC classes using a Random Forest classifier in Google Earth Engine. We investigate the impact of adding Gray-Level Co-Occurrence Matrix (GLCM) texture information and spectral indices with their temporal components, such as maximum value, standard deviation, phase and amplitude, from the multi-spectral and multi-temporal Sentinel-2 imagery. This work focuses on identifying which input features result in the highest increase in accuracy. The results show that adding a single VHR image improves the classification accuracy from 62.62% to 67.05%, especially when the spectral indices and temporal analysis are not included. The impact of the GLCM is similar to but smaller than that of the VHR image. Overall, the inclusion of temporal analysis improves the classification accuracy to 74.30%. The blue band of the VHR image had the largest impact on the classification, followed by the amplitude of the green-red vegetation index and the phase of the normalized multi-band drought index. Full article
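A GLCM of the kind added as a texture feature simply counts co-occurring gray levels at a fixed pixel offset. An illustrative sketch for a single offset (not the Google Earth Engine implementation used in the study):

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Gray-Level Co-occurrence Matrix for a single pixel offset (dx, dy):
    M[i][j] counts how often gray level i is followed by level j
    at that offset. img is a 2-D list of quantized gray levels."""
    M = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                M[img[y][x]][img[y2][x2]] += 1
    return M
```

Texture statistics (contrast, homogeneity, entropy) are then derived from the normalized matrix; in practice several offsets and angles are aggregated.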
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)

14 pages, 2300 KB  
Article
FocusedDropout for Convolutional Neural Network
by Minghui Liu, Tianshu Xie, Xuan Cheng, Jiali Deng, Meiyi Yang, Xiaomin Wang and Ming Liu
Appl. Sci. 2022, 12(15), 7682; https://doi.org/10.3390/app12157682 - 30 Jul 2022
Cited by 9 | Viewed by 3625
Abstract
In a convolutional neural network (CNN), dropout cannot work well because dropped information is not entirely obscured in convolutional layers, where features are spatially correlated. Besides randomly discarding regions or channels, many approaches try to overcome this defect by dropping influential units. In this paper, we propose a non-random dropout method named FocusedDropout, which aims to make the network focus more on the target. In FocusedDropout, we use a simple but effective method to search for the target-related features, retain these features, and discard the others, which is contrary to the existing methods. We find that this novel method can improve network performance by making the network more target-focused. Additionally, increasing the weight decay while using FocusedDropout can avoid overfitting and increase accuracy. Experimental results show that, at the slight cost of applying FocusedDropout to only 10% of batches, it produces a clear performance boost over the baselines on multiple classification datasets, including CIFAR10, CIFAR100 and Tiny ImageNet, and generalizes well across different CNN models. Full article
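The core idea can be sketched in a few lines. This toy version (not the authors' implementation, which operates on CNN feature tensors during training) treats each channel as a flat list of activations:

```python
def focused_mask(feature_maps, keep_ratio=0.5):
    """Toy sketch of the FocusedDropout idea: the channel with the highest
    mean activation acts as a guide, and only the spatial positions where
    it responds most strongly are kept (1); the rest are dropped (0)."""
    means = [sum(ch) / len(ch) for ch in feature_maps]
    guide = feature_maps[means.index(max(means))]
    k = max(1, int(keep_ratio * len(guide)))          # positions to keep
    thresh = sorted(guide, reverse=True)[k - 1]       # k-th largest value
    return [1 if v >= thresh else 0 for v in guide]
```

The mask is then broadcast across all channels, so the retained units are exactly those the strongest channel deems target-related, the opposite of dropping influential units.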
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

16 pages, 363 KB  
Article
Extended Object Tracking with Embedded Classification
by Wen Cao and Qiwei Li
Sensors 2022, 22(6), 2134; https://doi.org/10.3390/s22062134 - 9 Mar 2022
Cited by 5 | Viewed by 2877
Abstract
This paper proposes a novel extended object tracking (EOT) approach with embedded classification. Traditionally, extended objects are only tracked, without considering classification. This has serious defects: on the one hand, some practical EOT problems require classification as an embedded subproblem; on the other hand, classification can assist and improve tracking performance. Therefore, we propose a systematic EOT method with embedded classification, which is designed to satisfy these practical demands and also enjoys superior tracking performance. Specifically, we first formulate the EOT problem with embedded classification using kinematic models and attribute models. Then, we explore a random-matrix-based, multiple-model EOT method with embedded classification. Two strategies are provided, in which soft classification and hard classification are embedded, respectively. For the EOT with hard classification in particular, a classification scheme based on the sequential probability ratio test is explored, owing to its nice properties and adaptability to our problem. Both methods use classification-assisted tracking. The simulation results demonstrate the superiority of the proposed EOT method with embedded classification, which can not only satisfy the practical requirements for classification but also improve tracking performance by utilizing the assistance of classification. Full article
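The sequential probability ratio test underlying the hard-classification scheme is standard; a minimal sketch of Wald's test, independent of the paper's kinematic and attribute models:

```python
from math import log

def sprt(llrs, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test: accumulate per-observation
    log-likelihood ratios log(p1/p0) until one of the two decision
    thresholds set by the error rates alpha and beta is crossed."""
    upper = log((1 - beta) / alpha)   # accept H1 at or above this
    lower = log(beta / (1 - alpha))   # accept H0 at or below this
    s = 0.0
    for n, llr in enumerate(llrs, start=1):
        s += llr
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "continue", len(llrs)
```

Its appeal for embedded classification is exactly the property the abstract cites: it decides as soon as the evidence suffices, rather than after a fixed sample size.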
(This article belongs to the Collection Multi-Sensor Information Fusion)

12 pages, 2628 KB  
Article
Compound Endoscopic Morphological Features for Identifying Non-Pedunculated Lesions ≥20 mm with Intramucosal Neoplasia
by João Pedro da Costa-Seixas, María López-Cerón, Anna Arnau, Òria Rosiñol, Miriam Cuatrecasas, Alberto Herreros-de-Tejada, Ángel Ferrández, Miquel Serra-Burriel, Óscar Nogales, Luisa de Castro, Jorge López-Vicente, Pablo Vega, Marco A. Álvarez-González, Jesús M. González-Santiago, Marta Hernández-Conde, Pilar Diez-Redondo, Liseth Rivero-Sánchez, Antonio Z. Gimeno-García, Aurora Burgos, Francisco Javier García-Alonso, Marco Bustamante-Balén, Eva Martínez-Bauer, Beatriz Peñas, Daniel Rodríguez-Alcalde, Maria Pellisé and Ignasi Puig
Cancers 2021, 13(21), 5302; https://doi.org/10.3390/cancers13215302 - 22 Oct 2021
Cited by 3 | Viewed by 3436
Abstract
Background: The major limitation of piecemeal endoscopic mucosal resection (EMR) is the inaccurate histological assessment of the resected specimen, especially in cases of submucosal invasion. Objective: To classify non-pedunculated lesions ≥20 mm based on endoscopic morphological features, in order to identify those that present intramucosal neoplasia (includes low-grade neoplasia and high-grade neoplasia) and are suitable for piecemeal EMR. Design: A post-hoc analysis from an observational prospective multicentre study conducted by 58 endoscopists at 17 academic and community hospitals was performed. Unbiased conditional inference trees (CTREE) were fitted to analyse the association between intramucosal neoplasia and the lesions’ endoscopic characteristics. Result: 542 lesions from 517 patients were included in the analysis. Intramucosal neoplasia was present in 484 of 542 (89.3%) lesions. A conditional inference tree including all lesions’ characteristics assessed with white light imaging and narrow-band imaging (NBI) found that ulceration, pseudodepressed type and sessile morphology changed the accuracy for predicting intramucosal neoplasia. In ulcerated lesions, the probability of intramucosal neoplasia was 25% (95%CI: 8.3–52.6%; p < 0.001). In non-ulcerated lesions, its probability in lateral spreading lesions (LST) non-granular (NG) pseudodepressed-type lesions rose to 64.0% (95%CI: 42.6–81.3%; p < 0.001). Sessile morphology also raised the probability of intramucosal neoplasia to 86.3% (95%CI: 80.2–90.7%; p < 0.001). In the remaining 319 (58.9%) non-ulcerated lesions that were of the LST-granular (G) homogeneous type, LST-G nodular-mixed type, and LST-NG flat elevated morphology, the probability of intramucosal neoplasia was 96.2% (95%CI: 93.5–97.8%; p < 0.001). Conclusion: Non-ulcerated LST-G type and LST-NG flat elevated lesions are the most common non-pedunculated lesions ≥20 mm and are associated with a high probability of intramucosal neoplasia. 
This means that they are good candidates for piecemeal EMR. In the remaining lesions, further diagnostic techniques like magnification or diagnostic +/− therapeutic endoscopic submucosal dissection should be considered. Full article
(This article belongs to the Special Issue Recent Advances in Colorectal Cancer Diagnostics and Treatments)

27 pages, 532 KB  
Review
Non-Pharmacological Interventions towards Preventing the Triad Osteoporosis-Falls Risk-Hip Fracture, in Population Older than 65. Scoping Review
by Alba Peraza-Delgado, María Begoña Sánchez-Gómez, Juan Gómez-Salgado, Macarena Romero-Martín, Mercedes Novo-Muñoz and Gonzalo Duarte-Clíments
J. Clin. Med. 2020, 9(8), 2329; https://doi.org/10.3390/jcm9082329 - 22 Jul 2020
Cited by 20 | Viewed by 7215
Abstract
Osteoporosis leads to an increased risk of falls, and thus an increase in fractures; hip fractures in particular result in high mortality, functional disability, and high medical expenditure. The aim is to summarise the available evidence on effective non-pharmacological interventions to prevent the triad osteoporosis/falls risk/hip fracture. A scoping review was conducted consulting the Scientific Electronic Library Online (SciELO), National Institute for Health and Care Excellence (NICE), Cumulative Index to Nursing & Allied Health Literature (CINAHL), and PubMed databases. Inclusion criteria were articles published between 2013 and 2019, in Spanish or English. In addition, publications on a population over 65 years of age covering non-pharmacological interventions aimed at hip fracture prevention were included, for both patients institutionalised in long-stay health centres or hospitals and patients cared for at home, whether dependent or not. Sixty-six articles were selected, and 13 non-pharmacological interventions aimed at preventing osteoporosis, falls, and hip fracture were identified according to the Nursing Interventions Classification taxonomy. The figures regarding the affected population reported in the studies are alarming, reflecting the importance of preventing the triad osteoporosis, falls risk, and hip fracture among the population over 65 years of age. The most effective interventions focused on increasing bone mineral density through diet, exercise, and falls prevention. In conclusion, primary prevention should be applied to the entire adult population, with special emphasis on people with osteoporosis. Full article
(This article belongs to the Special Issue Clinical Prevention and Treatment of Osteoporosis)

30 pages, 4684 KB  
Article
Identification of Risk Factors Associated with Obesity and Overweight—A Machine Learning Overview
by Ayan Chatterjee, Martin W. Gerdes and Santiago G. Martinez
Sensors 2020, 20(9), 2734; https://doi.org/10.3390/s20092734 - 11 May 2020
Cited by 174 | Viewed by 29806
Abstract
Social determinants such as the adverse influence of globalization, supermarket growth, fast unplanned urbanization, sedentary lifestyle, economy, and social position slowly develop behavioral risk factors in humans. Behavioral risk factors such as unhealthy habits, improper diet, and physical inactivity lead to physiological risks, and “obesity/overweight” is one of the consequences. Obesity and overweight form one of the major lifestyle diseases, leading to other health conditions such as cardiovascular diseases (CVDs), chronic obstructive pulmonary disease (COPD), cancer, type II diabetes, hypertension, and depression, and they are not restricted to any age or socio-economic background. The World Health Organization (WHO) has anticipated that 30% of global deaths will be caused by lifestyle diseases by 2030, which can be prevented through the appropriate identification of associated risk factors and behavioral intervention plans. Health behavior change should be given priority to avoid life-threatening damage. The primary purpose of this study is not to present a risk prediction model but to provide a review of various machine learning (ML) methods and their execution using available sample health data in public repositories related to lifestyle diseases, such as obesity, CVDs, and type II diabetes. In this study, we targeted people, both male and female, aged over 20 and under 60, excluding pregnancy and genetic factors. This paper qualifies as a tutorial article on how to use different ML methods to identify potential risk factors of obesity/overweight.
Although institutions such as the “Centers for Disease Control and Prevention (CDC)” and the “National Institute for Clinical Excellence (NICE)” work through their guidelines to understand the causes and consequences of overweight/obesity, we aimed to utilize the potential of data science to assess the correlated risk factors of obesity/overweight after analyzing the existing datasets available in the “Kaggle” and “University of California, Irvine (UCI)” databases, and to check how the potential risk factors change with changes in body-energy imbalance using data-visualization techniques and regression analysis. Analyzing existing obesity/overweight-related data with machine learning algorithms did not produce any brand-new risk factors, but it helped us to understand: (a) how the identified risk factors relate to weight change and how we can visualize them; (b) what the nature of the data (potential monitorable risk factors) to be collected over time should be in order to develop our intended eCoach system for the promotion of a healthy lifestyle, targeting “obesity and overweight” as a study case in the future; (c) why we used the existing “Kaggle” and “UCI” datasets for our preliminary study; and (d) which classification and regression models perform better on a correspondingly limited volume of data according to performance metrics. Full article
(This article belongs to the Section Intelligent Sensors)
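The regression analysis this abstract describes — ranking how risk factors such as diet and physical activity correlate with weight change — can be sketched in a few lines. The data below are synthetic stand-ins for the Kaggle/UCI obesity datasets the authors analyze; the column names and effect sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Synthetic stand-in for a tabular obesity dataset (hypothetical columns).
rng = np.random.default_rng(0)
n = 500
activity = rng.uniform(0, 5, n)    # weekly exercise sessions (assumed feature)
fast_food = rng.uniform(0, 7, n)   # fast-food meals per week (assumed feature)
bmi = 25 - 1.2 * activity + 0.9 * fast_food + rng.normal(0, 1.5, n)

# Multiple linear regression via least squares: the sign and magnitude of
# each coefficient indicate how the factor is associated with BMI.
X = np.column_stack([np.ones(n), activity, fast_food])
coef, *_ = np.linalg.lstsq(X, bmi, rcond=None)
intercept, b_activity, b_fast_food = coef
```

On this synthetic sample, `b_activity` comes out negative and `b_fast_food` positive, recovering the planted relationships — the same kind of coefficient inspection the tutorial uses to surface potential risk factors.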
16 pages, 600 KB  
Article
Aspect-Based Sentiment Analysis Using Aspect Map
by Yunseok Noh, Seyoung Park and Seong-Bae Park
Appl. Sci. 2019, 9(16), 3239; https://doi.org/10.3390/app9163239 - 8 Aug 2019
Cited by 17 | Viewed by 6675
Abstract
Aspect-based sentiment analysis (ABSA) is the task of classifying the sentiment of a specific aspect in a text. Because a single text usually has multiple aspects which are expressed independently, ABSA is a crucial task for in-depth opinion mining. A key point in solving ABSA is to align sentiment expressions with their proper target aspect in a text. Thus, many recent neural models have applied attention mechanisms to learning the alignment. However, it is problematic to depend solely on attention mechanisms to achieve this, because most sentiment expressions such as “nice” and “bad” are too general to be aligned with a proper aspect even through an attention mechanism. To solve this problem, this paper proposes a novel convolutional neural network (CNN)-based aspect-level sentiment classification model, which consists of two CNNs. Because sentiment expressions relevant to an aspect usually appear near that aspect’s expressions, the proposed model first finds the aspect expressions for a given aspect and then focuses on the sentiment expressions around them to determine the final sentiment of the aspect. Thus, the first CNN extracts the positional information of aspect expressions for a target aspect and expresses the information as an aspect map. Even if no data with annotations on the direct relation between aspects and their expressions exist, the aspect map can be obtained effectively by learning it in a weakly supervised manner. Then, the second CNN classifies the sentiment of the target aspect in a text using the aspect map. The proposed model is evaluated on the SemEval 2016 Task 5 dataset and is compared with several baseline models. According to the experimental results, the proposed model not only outperforms the baseline models but also shows state-of-the-art performance on the dataset. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
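The paper learns its aspect map with two CNNs; as a rough illustration of the underlying intuition only (not the authors' model), a fixed positional decay around an aspect expression can stand in for the learned map, so that general words like “nice” and “bad” are credited to the nearest aspect. The toy lexicon and decay function below are illustrative assumptions.

```python
import numpy as np

tokens = "the pasta was nice but the service was bad".split()
# Hypothetical toy sentiment lexicon; scores are illustrative.
sentiment = {"nice": 1.0, "bad": -1.0}
scores = np.array([sentiment.get(t, 0.0) for t in tokens])

def aspect_sentiment(aspect_idx, decay=1.0):
    """Weight each token's sentiment by its proximity to the aspect
    expression (a fixed stand-in for the learned aspect map), then sum."""
    positions = np.arange(len(tokens))
    weights = np.exp(-decay * np.abs(positions - aspect_idx))
    return float(np.sum(weights * scores))

food_score = aspect_sentiment(tokens.index("pasta"))      # dominated by "nice"
service_score = aspect_sentiment(tokens.index("service")) # dominated by "bad"
```

Here `food_score` is positive and `service_score` negative, even though both “nice” and “bad” occur in the same sentence — the alignment problem the aspect map is designed to solve.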
14 pages, 604 KB  
Perspective
Cognitive Function in Primary Sjögren’s Syndrome: A Systematic Review
by Ciro Manzo, Eva Martinez-Suarez, Melek Kechida, Marco Isetta and Jordi Serra-Mestres
Brain Sci. 2019, 9(4), 85; https://doi.org/10.3390/brainsci9040085 - 15 Apr 2019
Cited by 30 | Viewed by 11398
Abstract
Background: Cognitive disorders are reported to be common in patients with primary Sjögren’s syndrome (pSS). In some cases, they are the first clinical manifestation, preceding the diagnosis of pSS by two years on average. Aim: A systematic review was conducted to explore cognitive impairment in pSS, with reference to diagnostic methods and its relationship with laboratory data and clinical manifestations. Materials and Methods: According to the PRISMA 2009 checklist, we carried out a comprehensive literature search in the three main bibliographic databases: MEDLINE, EMBASE, and PsycINFO (NICE HDAS interface). The following main search terms were used: primary Sjögren syndrome, neurological manifestations, fatigue, cognitive functions, psychiatric manifestations, mild cognitive impairment, dementia, and neurocognitive disorder. The search was conducted on 14 September 2018. References from all selected studies were also examined. Inclusion criteria were: all studies and case reports published in any language from 2002 that assessed the association of pSS (according to the classification criteria proposed by the 2002 American/European collaborative group (AECG)) with all types of cognitive impairment (including dementia). Exclusion criteria were: reviews, abstracts, secondary Sjögren’s syndrome (SS), and all articles in which other classification criteria were used. Results: The initial search yielded 352 articles, of which 253 were excluded based on the title and abstract review. A total of 54 articles underwent a full-length review, and 32 articles were excluded. Data were extracted from 18 studies and three case reports involving a total of 6196 participants. In most cases, cognitive dysfunction was brain fog or mild cognitive impairment (MCI). Occasionally, an autoimmune dementia was present. 
The relationship between pSS and degenerative dementias, such as Alzheimer’s disease (AD), remained a controversial issue, although some investigators hypothesized that pSS could be a risk factor. Several unmet needs were highlighted. First, some of the included studies had not reported the severity of pSS; hence, few correlations between disease severity and cognitive function were possible. Second, an evaluation of the pathogenetic role of comorbid diseases was often absent. The lack of information on the type of dementia represented a third critical point in the majority of the included studies. Conclusions: This systematic review confirmed that adequate studies on cognitive function in pSS are scarce, mostly performed on small samples, and often conflicting. The routine assessment of cognitive function in patients with pSS seems advisable, and it will help to elucidate some of the unmet needs highlighted by this review in future, appropriately designed studies. Full article
(This article belongs to the Special Issue Collection on Cognitive Neuroscience)