Search Results (8)

Search Parameters:
Authors = Osslan Osiris Vergara Villegas

22 pages, 6620 KB  
Article
A Study to Determine the Feasibility of Combining Mobile Augmented Reality and an Automatic Pill Box to Support Older Adults’ Medication Adherence
by Osslan Osiris Vergara-Villegas, Vianey Guadalupe Cruz-Sánchez, Abel Alejandro Rubín-Alvarado, Saulo Abraham Gante-Díaz, Jonathan Axel Cruz-Vazquez, Brandon Areyzaga-Mendizábal, Jesús Yaljá Montiel-Pérez, Juan Humberto Sossa-Azuela, Iliac Huerta-Trujillo and Rodolfo Romero-Herrera
Computers 2025, 14(10), 421; https://doi.org/10.3390/computers14100421 - 2 Oct 2025
Viewed by 1908
Abstract
Because of the increased prevalence of chronic diseases, older adults frequently take many medications. However, adhering to a medication treatment tends to be difficult, and poor adherence can cause health problems or even patient death. This paper describes the methodology used to develop a mobile augmented reality (MAR) pill box that supports patients in adhering to their medication treatment. First, we explain the design and construction of the automatic pill box, which includes alarms and uses QR codes recognized by the MAR system to provide medication information. Then, we explain the development of the MAR system. We first conducted a preliminary survey with 30 participants to assess the feasibility of the MAR app; one hundred older adults then participated in the evaluation. After one week of using the proposal, each patient answered a survey regarding its functionality. The results revealed that 88% of the participants strongly agreed, and 11% agreed, that the app supports adherence to medical treatment. Finally, we conducted a study comparing the time elapsed between the scheduled time for taking a medication and the time it was actually consumed. The results from 189 records showed that, using the proposal, 63.5% of the patients took their medication with a maximum delay of 4.5 min. The results also showed that the alarm always sounded at the scheduled time and that the QR code displayed always corresponded to the medication to be consumed.
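As a rough illustration of the delay measurement described above, the sketch below computes the minutes elapsed between a scheduled intake time and the recorded intake time; the function name and the sample records are hypothetical, not data from the study.

```python
from datetime import datetime

def intake_delay_minutes(scheduled: str, taken: str) -> float:
    """Minutes elapsed between the scheduled and the actual intake time (HH:MM)."""
    fmt = "%H:%M"
    t_sched = datetime.strptime(scheduled, fmt)
    t_taken = datetime.strptime(taken, fmt)
    return (t_taken - t_sched).total_seconds() / 60.0

# Illustrative records: (scheduled, actually taken)
records = [("08:00", "08:03"), ("14:00", "14:06"), ("20:00", "20:01")]
delays = [intake_delay_minutes(s, t) for s, t in records]
# Share of records within the 4.5-minute maximum delay reported in the abstract
within = sum(d <= 4.5 for d in delays) / len(delays)
```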

24 pages, 2159 KB  
Article
Cross-Domain Transfer Learning Architecture for Microcalcification Cluster Detection Using the MEXBreast Multiresolution Mammography Dataset
by Ricardo Salvador Luna Lozoya, Humberto de Jesús Ochoa Domínguez, Juan Humberto Sossa Azuela, Vianey Guadalupe Cruz Sánchez, Osslan Osiris Vergara Villegas and Karina Núñez Barragán
Mathematics 2025, 13(15), 2422; https://doi.org/10.3390/math13152422 - 28 Jul 2025
Cited by 1 | Viewed by 818
Abstract
Microcalcification clusters (MCCs) are key indicators of breast cancer, with studies showing that approximately 50% of mammograms with MCCs confirm a cancer diagnosis. Early detection is critical, as it ensures a five-year survival rate of up to 99%. However, MCC detection remains challenging due to their features, such as small size, texture, shape, and impalpability. Convolutional neural networks (CNNs) offer a solution for MCC detection. Nevertheless, CNNs are typically trained on single-resolution images, limiting their generalizability across different image resolutions. We propose a CNN trained on digital mammograms with three common resolutions: 50, 70, and 100 μm. The architecture processes individual 1 cm² patches extracted from the mammograms as input samples and includes a MobileNetV2 backbone, followed by a flattening layer, a dense layer, and a sigmoid activation function. This architecture was trained to detect MCCs using patches extracted from the INbreast database, which has a resolution of 70 μm, and achieved an accuracy of 99.84%. We applied transfer learning (TL) and trained on 50, 70, and 100 μm resolution patches from the MEXBreast database, achieving accuracies of 98.32%, 99.27%, and 89.17%, respectively. For comparison purposes, models trained from scratch, without leveraging knowledge from the pretrained model, achieved 96.07%, 99.20%, and 83.59% accuracy for 50, 70, and 100 μm, respectively. Results demonstrate that TL improves MCC detection across resolutions by reusing pretrained knowledge.
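The 1 cm² patches at three pixel pitches imply different input sizes in pixels. The sketch below derives them, assuming square pixels and simple rounding; the exact patch dimensions used by the authors may differ.

```python
def patch_size_px(resolution_um: float, patch_cm: float = 1.0) -> int:
    """Pixels per side of a square patch, given the pixel pitch in micrometres."""
    side_um = patch_cm * 10_000  # 1 cm = 10,000 um
    return round(side_um / resolution_um)

# Patch side length for the three resolutions in the abstract
sizes = {r: patch_size_px(r) for r in (50, 70, 100)}
```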
(This article belongs to the Special Issue Mathematical Methods in Artificial Intelligence for Image Processing)

25 pages, 9799 KB  
Article
A Diamond Approach to Develop Virtual Object Interaction: Fusing Augmented Reality and Kinesthetic Haptics
by Alma Rodriguez-Ramirez, Osslan Osiris Vergara Villegas, Manuel Nandayapa, Francesco Garcia-Luna and María Cristina Guevara Neri
Multimodal Technol. Interact. 2025, 9(2), 15; https://doi.org/10.3390/mti9020015 - 13 Feb 2025
Cited by 1 | Viewed by 1423
Abstract
Using the senses is essential to interacting with objects in real-world environments. However, not all the senses are available when interacting with virtual objects in virtual environments. This paper presents a diamond methodology to fuse two technologies to represent the senses of sight and touch when interacting with a virtual object. The sense of sight is represented through augmented reality, and the sense of touch is represented through kinesthetic haptics. The diamond methodology is centered on the user experience and comprises five general stages: (i) experience design, (ii) sensory representation, (iii) development, (iv) display, and (v) fusion. The first stage is the expected, proposed, or needed user experience. Then, each technology takes its homologous activities from the second to the fourth stage, diverging from each other along their development. Finally, the technologies converge to the fifth stage for fusion in the user experience. The diamond methodology was tested by generating a user’s dual sensation when interacting with the elasticity of a tension virtual spring. The user can simultaneously perceive the visual and tactile change of the virtual spring during the interaction, representing the object’s deformation. The experimental results demonstrated that an interactive experience can be felt and seen in augmented reality following the diamond methodology.
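The abstract does not specify the spring model; a minimal sketch, assuming a linear Hooke's-law tension spring, of the quantities the visual and haptic channels would both have to render (function names are illustrative):

```python
def spring_force(stiffness_n_per_m: float, displacement_m: float) -> float:
    """Hooke's-law restoring force (N) of a linear spring displaced from rest."""
    return -stiffness_n_per_m * displacement_m

def rendered_elongation(stiffness_n_per_m: float, applied_force_n: float) -> float:
    """Elongation (m) that both the AR view and the haptic device should present."""
    return applied_force_n / stiffness_n_per_m
```

Keeping one shared spring state and deriving both the drawn deformation and the output force from it is what keeps the two sensory channels consistent during the interaction.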

32 pages, 5136 KB  
Article
Fourier Features and Machine Learning for Contour Profile Inspection in CNC Milling Parts: A Novel Intelligent Inspection Method (NIIM)
by Manuel Meraz Méndez, Juan A. Ramírez Quintana, Elva Lilia Reynoso Jardón, Manuel Nandayapa and Osslan Osiris Vergara Villegas
Appl. Sci. 2024, 14(18), 8144; https://doi.org/10.3390/app14188144 - 10 Sep 2024
Cited by 1 | Viewed by 2626
Abstract
Form deviation generated during the milling profile process challenges the precision and functionality of industrial fixtures and product manufacturing across various sectors. Inspecting contour profile quality relies on commonly employed contact methods for measuring form deviation. However, the methods employed frequently face limitations that can impact the reliability and overall accuracy of the inspection process. This paper introduces a novel approach, the novel intelligent inspection method (NIIM), developed to accurately inspect and categorize contour profiles in machined parts manufactured through the milling process by computer numerical control (CNC) machines. The NIIM integrates a calibration piece, a vision system (RAM-Starlite™), and machine learning techniques to analyze the line profile and classify the quality of contour profile deformation generated during CNC milling. The calibration piece is specifically designed to identify form deviations in the contour profile during the milling process. The RAM-Starlite™ vision system captures contour profile images corresponding to curves, lines, and slopes. An algorithm generates a profile signature, extracting Fourier descriptor features from the contour profile to analyze form deviations compared to an image reference. A feed-forward neural network is employed to classify contour profiles based on quality properties. Experimental evaluations involving 60 machined calibration pieces, resulting in 356 images for training and testing, demonstrate the accuracy and computational efficiency of the proposed NIIM for profile line tolerance inspection. The results demonstrate that the NIIM offers 96.99% accuracy, low computational requirements, 100% inspection capability, and valuable information to improve machining parameters, as well as quality classification.
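A minimal sketch of contour Fourier descriptors in the spirit of the profile signature described above, assuming the standard formulation over complex boundary points; this is not the authors' implementation.

```python
import cmath

def fourier_descriptors(contour, n_coeffs=8):
    """Translation- and scale-invariant Fourier descriptors of a closed contour.

    contour: list of (x, y) boundary points; returns |F_k| / |F_1| for k = 1..n_coeffs.
    """
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    coeffs = []
    for k in range(n_coeffs + 1):
        # Discrete Fourier transform coefficient of the complex boundary signal
        f_k = sum(z[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n)) / n
        coeffs.append(f_k)
    # Dropping F_0 removes translation; dividing by |F_1| removes scale.
    ref = abs(coeffs[1])
    return [abs(c) / ref for c in coeffs[1:]]
```

Comparing such a descriptor vector against the one computed from a reference image gives a form-deviation measure that is insensitive to where the part sits in the frame.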

22 pages, 3039 KB  
Article
Measuring Undergraduates’ Motivation Levels When Learning to Program in Virtual Worlds
by Juan Gabriel López Solórzano, Christian Jonathan Ángel Rueda and Osslan Osiris Vergara Villegas
Computers 2024, 13(8), 188; https://doi.org/10.3390/computers13080188 - 31 Jul 2024
Cited by 1 | Viewed by 2300
Abstract
Teaching and learning programming is complex, and conventional classes often fail to arouse students’ motivation in this discipline. Therefore, teachers should look for alternative methods for teaching programming. Information and communication technologies (ICTs), especially virtual worlds, can be a valuable alternative. This study measures students’ motivation levels when using virtual worlds to learn introductory programming skills and compares them with motivation levels in a traditional teaching setting. First-semester university students participated in a pedagogical experiment on learning the programming subject through virtual worlds, following a pre-test/post-test design. In the pre-test, 102 students participated, and their motivation level under traditional, professor-led instruction was measured. A post-test was then applied to 60 students learning in virtual worlds. We found that the activity conducted with virtual worlds produces higher motivation levels than traditional learning with the teacher. Moreover, regarding gender, women reported higher confidence than men. Based on our findings, we recommend that teachers try this innovation with their students; however, teachers must design a didactic model to integrate virtual worlds into daily teaching activities.
(This article belongs to the Special Issue Future Trends in Computer Programming Education)

24 pages, 545 KB  
Article
Neural Architecture Comparison for Bibliographic Reference Segmentation: An Empirical Study
by Rodrigo Cuéllar Hidalgo, Raúl Pinto Elías, Juan-Manuel Torres-Moreno, Osslan Osiris Vergara Villegas, Gerardo Reyes Salgado and Andrea Magadán Salazar
Data 2024, 9(5), 71; https://doi.org/10.3390/data9050071 - 18 May 2024
Viewed by 2357
Abstract
In the realm of digital libraries, efficiently managing and accessing scientific publications necessitates automated bibliographic reference segmentation. This study addresses the challenge of accurately segmenting bibliographic references, a task complicated by the varied formats and styles of references. Focusing on the empirical evaluation of Conditional Random Fields (CRF), Bidirectional Long Short-Term Memory with CRF (BiLSTM + CRF), and Transformer Encoder with CRF (Transformer + CRF) architectures, this research employs Byte Pair Encoding and Character Embeddings for vector representation. The models underwent training on the extensive Giant corpus and subsequent evaluation on the Cora Corpus to ensure a balanced and rigorous comparison, maintaining uniformity across embedding layers, normalization techniques, and Dropout strategies. Results indicate that the BiLSTM + CRF architecture outperforms its counterparts by adeptly handling the syntactic structures prevalent in bibliographic data, achieving an F1-Score of 0.96. This outcome highlights the necessity of aligning model architecture with the specific syntactic demands of bibliographic reference segmentation tasks. Consequently, the study establishes the BiLSTM + CRF model as a superior approach within the current state-of-the-art, offering a robust solution for the challenges faced in digital library management and scholarly communication.
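All three architectures compared above share CRF-style decoding at the output. A dependency-free sketch of Viterbi decoding over per-token label scores, with hypothetical labels and hand-set scores for illustration:

```python
def viterbi(emissions, transitions, labels):
    """Most likely label sequence under linear-chain CRF-style scores.

    emissions: list (one dict per token) of {label: score};
    transitions: {(prev_label, curr_label): score}.
    """
    # best[l] = score of the best path ending in label l at the current token
    best = {l: emissions[0].get(l, 0.0) for l in labels}
    back = []
    for em in emissions[1:]:
        new_best, ptr = {}, {}
        for curr in labels:
            prev = max(labels, key=lambda p: best[p] + transitions.get((p, curr), 0.0))
            new_best[curr] = best[prev] + transitions.get((prev, curr), 0.0) + em.get(curr, 0.0)
            ptr[curr] = prev
        back.append(ptr)
        best = new_best
    # Trace the highest-scoring path backwards
    last = max(best, key=best.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

In the full models, the emission scores come from the CRF, BiLSTM, or Transformer encoder rather than being set by hand.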

26 pages, 5580 KB  
Article
Demystifying Deep Learning Building Blocks
by Humberto de Jesús Ochoa Domínguez, Vianey Guadalupe Cruz Sánchez and Osslan Osiris Vergara Villegas
Mathematics 2024, 12(2), 296; https://doi.org/10.3390/math12020296 - 17 Jan 2024
Cited by 1 | Viewed by 2630
Abstract
Building deep learning models proposed by third parties can become a simple task when specialized libraries are used. However, much mystery still surrounds the design of new models and the modification of existing ones. These tasks require in-depth knowledge of the different components, or building blocks, and their dimensions, yet this information is limited and scattered across the literature. In this article, we collect and explain in depth the building blocks used to design deep learning models, starting from the artificial neuron and moving to the concepts involved in building deep neural networks. Furthermore, the implementation of each building block is exemplified using the Keras library.
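The article exemplifies each block with Keras; as a dependency-free illustration of the starting point it names, the artificial neuron, here is a sigmoid neuron in plain Python (names are illustrative):

```python
import math

def neuron(inputs, weights, bias):
    """Single artificial neuron: weighted sum of inputs plus bias, then a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

Stacking layers of such units, and swapping the activation function, is the step from a single neuron to the deep networks the article builds up to.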
(This article belongs to the Special Issue Deep Neural Networks: Theory, Algorithms and Applications)

23 pages, 3709 KB  
Article
Smart Multi-Level Tool for Remote Patient Monitoring Based on a Wireless Sensor Network and Mobile Augmented Reality
by Fernando Cornelio Jiménez González, Osslan Osiris Vergara Villegas, Dulce Esperanza Torres Ramírez, Vianey Guadalupe Cruz Sánchez and Humberto Ochoa Domínguez
Sensors 2014, 14(9), 17212-17234; https://doi.org/10.3390/s140917212 - 16 Sep 2014
Cited by 50 | Viewed by 15392
Abstract
Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. One of the main advances is the development of real-time monitors that use intelligent and wireless communication technology. In this paper, a system is presented for the remote monitoring of the body temperature and heart rate of a patient by means of a wireless sensor network (WSN) and mobile augmented reality (MAR). The combination of a WSN and MAR provides a novel alternative to remotely measure body temperature and heart rate in real time during patient care. The system is composed of (1) hardware such as Arduino microcontrollers (in the patient nodes), personal computers (for the nurse server), smartphones (for the mobile nurse monitor and the virtual patient file) and sensors (to measure body temperature and heart rate), (2) a network layer using WiFly technology, and (3) software such as LabView, Android SDK, and DroidAR. The results obtained from tests show that the system can perform effectively within a range of 20 m and requires ten minutes to stabilize the temperature sensor to detect hyperthermia, hypothermia or normal body temperature conditions. Additionally, the heart rate sensor can detect conditions of tachycardia and bradycardia.
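The abstract names the conditions the system detects but not the cut-off values. A sketch using common clinical thresholds, which are assumptions and not necessarily those of the system:

```python
def classify_temperature(celsius: float) -> str:
    """Label a body-temperature reading (illustrative clinical thresholds)."""
    if celsius < 35.0:
        return "hypothermia"
    if celsius > 37.5:
        return "hyperthermia"
    return "normal"

def classify_heart_rate(bpm: int) -> str:
    """Label a resting adult heart-rate reading (illustrative clinical thresholds)."""
    if bpm < 60:
        return "bradycardia"
    if bpm > 100:
        return "tachycardia"
    return "normal"
```

In a deployment like the one described, such labels would be computed on the nurse server from the readings the WSN nodes report.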
(This article belongs to the Special Issue Wireless Sensor Network for Pervasive Medical Care)