Search Results (5)

Search Parameters:
Authors = Jose E. Cejudo

12 pages, 1435 KiB  
Article
Enhancing the Release of Ellagic Acid from Mexican Rambutan Peel Using Solid-State Fermentation
by Nadia D. Cerda-Cejudo, José J. Buenrostro-Figueroa, Leonardo Sepúlveda, L. E. Estrada-Gil, Cristian Torres-León, Mónica L. Chávez-González, Cristóbal N. Aguilar and J. A. Ascacio-Valdés
Biomass 2024, 4(3), 1005-1016; https://doi.org/10.3390/biomass4030056 - 2 Sep 2024
Cited by 4 | Viewed by 1861
Abstract
This work describes research focused on the recovery of ellagic acid (EA) using solid-state fermentation-assisted extraction (SSF) with Aspergillus niger GH1 and Mexican rambutan peel as support. Several culture conditions (temperature, initial moisture, inoculum level, and salt concentration) were evaluated using a Plackett–Burman design (PBD) to screen culture factors, followed by a central composite design (CCD) to maximize EA recovery. Antioxidant activity and polyphenol content were also evaluated in the SSF extracts. Temperature (28.2 °C), inoculum (2 × 10⁷ spores/g), and NaNO3 concentration (3.83 g/L) were identified as significant parameters for EA production in SSF. This optimization increased EA recovery from 201.53 ± 0.58 to 392.23 ± 17.53 mg/g, and after two purification steps, 396.9 ± 65.2 mg of EA was recovered per gram of recovered powder. The fermentation extracts showed radical inhibition and measurable polyphenol content. This work identifies the optimal fermentation conditions for obtaining a higher yield of a high-quality compound from agro-industrial waste through SSF.
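The screening step described above uses a two-level Plackett–Burman design to flag influential culture factors before the central composite design refines them. A minimal sketch of how main effects are read off such a design, with a purely hypothetical response vector (not the paper's data):

```python
import numpy as np

# 8-run Plackett-Burman design for up to 7 two-level factors, built by
# cyclically shifting the standard generator row and appending a row of
# all -1 (low) levels. Every column is balanced: 4 high, 4 low runs.
gen = np.array([1, 1, 1, -1, 1, -1, -1])
design = np.vstack([np.array([np.roll(gen, i) for i in range(7)]),
                    -np.ones(7, dtype=int)])  # shape (8, 7)

# Hypothetical responses (e.g., EA yield in mg/g for each run);
# placeholders for illustration only.
y = np.array([310.0, 295.0, 340.0, 260.0, 330.0, 255.0, 270.0, 240.0])

# Main effect of factor j: mean response at its high level minus mean
# response at its low level. A large |effect| flags a factor worth
# carrying into the follow-up central composite design.
effects = np.array([y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
                    for j in range(7)])
```

The PBD deliberately confounds interactions with main effects, which is why the significant factors are re-optimized afterward in the CCD.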

22 pages, 6589 KiB  
Article
Supercritical Impregnation of PETG with Olea europaea Leaf Extract: Influence of Operational Parameters on Expansion Degree, Antioxidant and Mechanical Properties
by Noelia D. Machado, José E. Mosquera, Cristina Cejudo-Bastante, María L. Goñi, Raquel E. Martini, Nicolás A. Gañán, Casimiro Mantell-Serrano and Lourdes Casas-Cardoso
Polymers 2024, 16(11), 1567; https://doi.org/10.3390/polym16111567 - 1 Jun 2024
Cited by 7 | Viewed by 1557
Abstract
PETG (poly(ethylene glycol-co-cyclohexane-1,4-dimethanol terephthalate)) is an amorphous copolymer that is biocompatible, recyclable, and versatile, and it is being actively researched for biomedical applications. However, studies of PETG as a platform for loading bioactive compounds from natural extracts are scarce, as is work on the effect of supercritical impregnation on this polymer. In this work, the supercritical impregnation of PETG filaments with Olea europaea leaf extract was investigated, evaluating the effect of pressure (100–400 bar), temperature (35–55 °C), and depressurization rate (5–50 bar min⁻¹) on the expansion degree, antioxidant activity, and mechanical properties of the resulting filaments. The PETG expansion degree ranged from ~3 to 120%, with antioxidant loadings from 2.28 to 17.96 g per 100 g of polymer, corresponding to oxidation inhibition values of 7.65 and 66.55%, respectively. Temperature and the binary interaction between pressure and depressurization rate affected these properties the most. The mechanical properties of the PETG filaments depended greatly on the process variables: tensile strength values were similar to or lower than those of the untreated filaments, while Young's modulus and elongation at break decreased below ~1000 MPa and ~10%, respectively, after scCO₂ treatment and impregnation, with the extent of the decrease depending on the operational parameters. Therefore, filaments with higher antioxidant activity and different expansion degrees and mechanical properties were obtained by adjusting the supercritical processing conditions.
(This article belongs to the Special Issue Additive Manufacturing of (Bio) Polymeric Materials)
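The loading and expansion figures above are relative quantities. Assuming the common gravimetric and dimensional definitions (the paper's exact definitions may differ), they can be computed as:

```python
# Hypothetical helpers illustrating the quantities reported above;
# the definitions used in the paper may differ in detail.

def loading_per_100g(mass_before_g: float, mass_after_g: float) -> float:
    """Impregnated extract expressed as g per 100 g of neat polymer."""
    return (mass_after_g - mass_before_g) / mass_before_g * 100.0

def expansion_degree(diameter_before_mm: float, diameter_after_mm: float) -> float:
    """Relative increase of the filament diameter, in percent."""
    return (diameter_after_mm - diameter_before_mm) / diameter_before_mm * 100.0
```

For example, a filament that gains 17.96 g of extract per 100 g of polymer sits at the top of the loading range reported above.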

14 pages, 6947 KiB  
Article
Emulating Clinical Diagnostic Reasoning for Jaw Cysts with Machine Learning
by Balazs Feher, Ulrike Kuchler, Falk Schwendicke, Lisa Schneider, Jose Eduardo Cejudo Grano de Oro, Tong Xi, Shankeeth Vinayahalingam, Tzu-Ming Harry Hsu, Janet Brinz, Akhilanand Chaurasia, Kunaal Dhingra, Robert Andre Gaudin, Hossein Mohammad-Rahimi, Nielsen Pereira, Francesc Perez-Pastor, Olga Tryfonos, Sergio E. Uribe, Marcel Hanisch and Joachim Krois
Diagnostics 2022, 12(8), 1968; https://doi.org/10.3390/diagnostics12081968 - 14 Aug 2022
Cited by 15 | Viewed by 4637
Abstract
The detection and classification of cystic lesions of the jaw is of high clinical relevance and represents a topic of interest in medical artificial intelligence research. The human clinical diagnostic reasoning process uses contextual information, including the spatial relation of the detected lesion to other anatomical structures, to establish a preliminary classification. Here, we aimed to emulate clinical diagnostic reasoning step by step using a combined object detection and image segmentation approach on panoramic radiographs (OPGs). We used a multicenter training dataset of 855 OPGs (all positives) and an evaluation set of 384 OPGs (240 negatives). We further compared our models to an international human control group of ten dental professionals from seven countries. The object detection model achieved an average precision of 0.42 (intersection over union (IoU): 0.50, maximal detections: 100) and an average recall of 0.394 (IoU: 0.50–0.95, maximal detections: 100). The classification model achieved a sensitivity of 0.84 for odontogenic cysts and 0.56 for non-odontogenic cysts, as well as a specificity of 0.59 for odontogenic cysts and 0.84 for non-odontogenic cysts (IoU: 0.30). The human control group achieved a sensitivity of 0.70 for odontogenic cysts, 0.44 for non-odontogenic cysts, and 0.56 for OPGs without cysts, as well as a specificity of 0.62 for odontogenic cysts, 0.95 for non-odontogenic cysts, and 0.76 for OPGs without cysts. Taken together, our results show that a combined object detection and image segmentation approach can feasibly emulate the human clinical diagnostic reasoning process in classifying cystic lesions of the jaw.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
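The detection metrics above are gated on intersection-over-union (IoU) thresholds: a predicted box counts as a match only when its IoU with the ground-truth box reaches the stated threshold (0.50 for average precision, 0.30 for the classification step). A self-contained sketch of IoU for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp to zero when the boxes are disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0
```

Averaging precision over IoU thresholds 0.50–0.95 (as in the recall figure above) is the COCO-style convention; a single 0.50 threshold is the more permissive PASCAL-style one.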

9 pages, 2284 KiB  
Article
Classification of Dental Radiographs Using Deep Learning
by Jose E. Cejudo, Akhilanand Chaurasia, Ben Feldberg, Joachim Krois and Falk Schwendicke
J. Clin. Med. 2021, 10(7), 1496; https://doi.org/10.3390/jcm10071496 - 3 Apr 2021
Cited by 41 | Viewed by 5687
Abstract
Objectives: To retrospectively assess radiographic data and to prospectively classify radiographs (namely, panoramic, bitewing, periapical, and cephalometric images), we compared three deep learning architectures for their classification performance. Methods: Our dataset consisted of 31,288 panoramic, 43,598 periapical, 14,326 bitewing, and 1176 cephalometric radiographs from two centers (Berlin, Germany; Lucknow, India). For a subset of images L (32,381 images), image classifications were available and manually validated by an expert. The remaining subset of images U was iteratively annotated using active learning: a ResNet-34 was trained on L, least-confidence informative sampling was performed on U, and the most uncertain image classifications from U were reviewed by a human expert and iteratively used for re-training. We then employed a baseline convolutional neural network (CNN), a residual network (another ResNet-34, pretrained on ImageNet), and a capsule network (CapsNet) for classification. Early stopping was used to prevent overfitting, and model performance was evaluated using stratified k-fold cross-validation. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to visualize the weighted activation maps. Results: All three models showed high accuracy (>98%), with ResNet achieving significantly higher accuracy, F1-score, precision, and sensitivity than the baseline CNN and CapsNet (p < 0.05); specificity was not significantly different. ResNet achieved the best performance with small variance and the fastest convergence. Misclassification was most common between bitewings and periapicals. Model activation was most notable in the inter-arch space for bitewings, interdentally for periapicals, on the bony structures of the maxilla and mandible for panoramics, and on the viscerocranium for cephalometrics. Conclusions: Regardless of the model, high classification accuracies were achieved, and the image features considered for classification were consistent with expert reasoning.
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
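The active-learning loop described above selects, at each iteration, the unlabeled images whose top softmax probability is lowest (least-confidence sampling) and routes them to an expert for review. A minimal sketch of that query step (function name is illustrative, not from the paper):

```python
import numpy as np

def least_confidence_query(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k samples the model is least confident about.

    probs: (n_samples, n_classes) array of softmax outputs.
    Confidence is the top predicted class probability; the k samples
    with the lowest confidence are the most informative to label next.
    """
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]
```

After the expert labels the queried images, they are folded into L and the ResNet-34 is retrained, repeating until the unlabeled pool is annotated.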

8 pages, 1019 KiB  
Article
Generalizability of Deep Learning Models for Caries Detection in Near-Infrared Light Transillumination Images
by Agnes Holtkamp, Karim Elhennawy, José E. Cejudo Grano de Oro, Joachim Krois, Sebastian Paris and Falk Schwendicke
J. Clin. Med. 2021, 10(5), 961; https://doi.org/10.3390/jcm10050961 - 1 Mar 2021
Cited by 24 | Viewed by 3980
Abstract
Objectives: The present study aimed to train deep convolutional neural networks (CNNs) to detect caries lesions on Near-Infrared Light Transillumination (NILT) imagery obtained either in vitro or in vivo and to assess the models' generalizability. Methods: In vitro, 226 extracted posterior permanent human teeth were mounted in a diagnostic model in a dummy head. NILT images were then generated (DIAGNOcam, KaVo, Biberach) and segmented tooth-wise. In vivo, 1319 teeth from 56 patients were obtained and segmented similarly. Proximal caries lesions were annotated pixel-wise by three experienced dentists, reviewed by a fourth dentist, and then transformed into binary labels. We trained ResNet classification models on both the in vivo and in vitro datasets and used 10-fold cross-validation to estimate the performance and generalizability of the models. We used Grad-CAM to increase explainability. Results: The tooth-level prevalence of caries lesions was 41% in vitro and 49% in vivo, respectively. Models trained and tested on in vivo data performed significantly better (mean ± SD accuracy: 0.78 ± 0.04) than those trained and tested on in vitro data (accuracy: 0.64 ± 0.15; p < 0.05). When tested in vitro, the models trained in vivo showed significantly lower accuracy (0.70 ± 0.01; p < 0.01). Similarly, when tested in vivo, models trained in vitro showed significantly lower accuracy (0.61 ± 0.04; p < 0.05). In both cases, this was due to decreases in sensitivity (by 27% for models trained in vivo and 10% for models trained in vitro). Conclusions: Using in vitro setups for generating NILT imagery and training CNNs comes with low accuracy and generalizability. Clinical significance: Studies employing in vitro imagery for developing deep learning models should be critically appraised for their generalizability. Applicable deep learning models for assessing NILT imagery should be trained on in vivo data.
(This article belongs to the Collection Digital Dentistry: Advances and Challenges)
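The sensitivity and specificity shifts reported above follow directly from confusion-matrix counts. As a quick reference (an illustrative helper, not the study's code):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    tp/fn: carious teeth correctly/incorrectly classified;
    tn/fp: sound teeth correctly/incorrectly classified.
    """
    sensitivity = tp / (tp + fn)   # fraction of carious teeth detected
    specificity = tn / (tn + fp)   # fraction of sound teeth cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

A drop in sensitivity with stable specificity, as observed in the cross-domain tests above, lowers overall accuracy roughly in proportion to the prevalence of carious teeth.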
