Search Results (25)

Search Parameters:
Keywords = ARCore

24 pages, 7161 KB  
Article
Markerless AR Navigation for Smart Campuses: Lightweight Machine Learning for Infrastructure-Free Wayfinding
by Elohim Ramírez-Galván, Cesar Benavides-Alvarez, Carlos Avilés-Cruz, Arturo Zúñiga-López and José Félix Serrano-Talamantes
Electronics 2025, 14(24), 4834; https://doi.org/10.3390/electronics14244834 - 8 Dec 2025
Viewed by 721
Abstract
This paper presents a markerless augmented reality (AR) navigation system for guiding users across a university campus, independent of internet or wireless connectivity, integrating machine learning (ML) and deep learning techniques. The system employs computer vision to detect campus signage “Meeting Point” and “Directory”, and classifies them through a binary classifier (BC) and convolutional neural networks (CNNs). The BC distinguishes between the two types of signs using RGB values with algorithms such as Perceptron, Bayesian classification, and k-Nearest Neighbors (KNN), while the CNN identifies the specific sign ID to link it to a campus location. Navigation routes are generated with the Floyd–Warshall algorithm, which computes the shortest path between nodes on a digital campus map. Directional arrows are then overlaid in AR on the user’s device via ARCore, updated every 200 milliseconds using sensor data and direction vectors. The prototype, developed in Android Studio, achieved over 99.5% accuracy with CNNs and 100% accuracy with the BC, even when signs were worn or partially occluded. A usability study with 27 participants showed that 85.2% successfully reached their destinations, with more than half rating the system as easy or very easy to use. Users also expressed strong interest in extending the application to other environments, such as shopping malls or airports. Overall, the solution is lightweight, scalable, and sustainable, requiring no additional infrastructure beyond existing campus signage. Full article
(This article belongs to the Section Computer Science & Engineering)
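The route planning named in this abstract is the classic Floyd–Warshall all-pairs shortest-path algorithm. A minimal Python sketch of that step over a toy campus graph (node names and distances are illustrative, not taken from the paper):

```python
import math

def floyd_warshall(nodes, edges):
    """All-pairs shortest paths; edges maps (u, v) -> distance in meters."""
    dist = {(u, v): 0.0 if u == v else math.inf for u in nodes for v in nodes}
    nxt = {}  # next-hop table for path reconstruction
    for (u, v), w in edges.items():
        dist[(u, v)] = dist[(v, u)] = w          # undirected walkways
        nxt[(u, v)], nxt[(v, u)] = v, u
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
                    dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
                    nxt[(i, j)] = nxt[(i, k)]
    return dist, nxt

def route(nxt, u, v):
    """Walk the next-hop table from u to v."""
    out = [u]
    while u != v:
        u = nxt[(u, v)]
        out.append(u)
    return out

# Hypothetical campus map (distances in meters).
nodes = ["Directory", "Library", "LabA", "MeetingPoint"]
edges = {("Directory", "Library"): 120, ("Library", "LabA"): 60,
         ("Directory", "MeetingPoint"): 200, ("MeetingPoint", "LabA"): 90}
dist, nxt = floyd_warshall(nodes, edges)
print(route(nxt, "Directory", "LabA"), dist[("Directory", "LabA")])
```

The AR layer would then turn each leg of the returned route into a heading for the overlaid arrow, refreshed on the 200 ms sensor cycle the abstract describes.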

22 pages, 8968 KB  
Article
A Comparative Study of Authoring Performances Between In-Situ Mobile and Desktop Tools for Outdoor Location-Based Augmented Reality
by Komang Candra Brata, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, Prismahardi Aji Riyantoko, Noprianto and Mustika Mentari
Information 2025, 16(10), 908; https://doi.org/10.3390/info16100908 - 16 Oct 2025
Viewed by 852
Abstract
In recent years, Location-Based Augmented Reality (LAR) systems have been increasingly implemented in various applications for tourism, navigation, education, and entertainment. Unfortunately, LAR content creation with conventional desktop-based authoring tools has become a bottleneck, as it requires time-consuming, skilled work. Previously, we proposed an in-situ mobile authoring tool as an efficient solution to this problem, offering direct authoring interactions in real-world environments using a smartphone. However, existing comparisons between our proposal and conventional tools are not sufficient to demonstrate its superiority, particularly in terms of interaction, authoring performance, and cognitive workload, where our tool uses 6DoF device movement for spatial input while desktop tools rely on mouse pointing. In this paper, we present a comparative study of authoring performance between the tools across three authoring phases: (1) Point of Interest (POI) location acquisition, (2) AR object creation, and (3) AR object registration. For the conventional tool, we adopt Unity and the ARCore SDK. As a real-world application, we target LAR content creation for pedestrian landmark annotation across campus environments at Okayama University, Japan, and Brawijaya University, Indonesia, and identify task-level bottlenecks in both tools. In our experiments, we asked 20 participants aged 22 to 35 with different levels of LAR development experience to complete equivalent authoring tasks in an outdoor campus environment, creating various LAR contents. We measured task completion time, phase-wise contribution, and cognitive workload using NASA-TLX. The results show that our tool enabled faster creation with 60% lower cognitive load, whereas the desktop tool demanded greater mental effort for manual data input and object verification. Full article
(This article belongs to the Section Information Applications)
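NASA-TLX, the workload instrument used here, has a standard weighted scoring scheme: six subscales rated 0–100, weighted by 15 pairwise comparisons. A minimal sketch of that standard computation (the participant numbers are invented for illustration):

```python
SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx(ratings, weights):
    """Weighted NASA-TLX workload: sum(rating * weight) over the 15 pairings."""
    assert sum(weights.values()) == 15, "weights come from 15 pairwise choices"
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

# Hypothetical single-participant data.
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 30, "effort": 60, "frustration": 40}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
print(f"Overall workload: {nasa_tlx(ratings, weights):.1f} / 100")
```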

15 pages, 3156 KB  
Article
Adaptive AR Navigation: Real-Time Mapping for Indoor Environment Using Node Placement and Marker Localization
by Bagas Samuel Christiananta Putra, I. Kadek Dendy Senapartha, Jyun-Cheng Wang, Matahari Bhakti Nendya, Dan Daniel Pandapotan, Felix Nathanael Tjahjono and Halim Budi Santoso
Information 2025, 16(6), 478; https://doi.org/10.3390/info16060478 - 7 Jun 2025
Cited by 1 | Viewed by 4273
Abstract
Indoor navigation remains a challenge due to the limitations of GPS-based systems in enclosed environments. Current approaches, such as marker-based ones, have been developed for indoor navigation; however, they require extensive manual mapping, making indoor navigation time-consuming and difficult to scale. To enhance current approaches, this study proposes node-based mapping for indoor navigation, allowing users to dynamically construct navigation paths using a mobile device. The system leverages NavMesh and the A* algorithm for pathfinding, and is integrated with ARCore for real-time AR guidance. Nodes are placed within the environment to define walkable paths, which can be stored and reused without requiring a full system rebuild. Once the prototype had been developed, usability testing was conducted using the Handheld Augmented Reality Usability Scale (HARUS) to evaluate manipulability, comprehensibility, and overall usability. This study finds that node-based mapping for indoor navigation can enhance flexibility in mapping new indoor spaces and offers an effective AR-guided navigation experience. However, some areas of improvement, including interface clarity and system scalability, remain for future research. This study contributes practically to improving current practice in adaptive indoor navigation systems using AR-based dynamic mapping techniques. Full article
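The pathfinding named above is the standard A* search over the placed nodes. A minimal Python sketch with a straight-line heuristic (the node layout is hypothetical, not from the paper):

```python
import heapq, math

def a_star(coords, neighbors, start, goal):
    """A* over user-placed nodes; heuristic = straight-line distance to goal."""
    h = lambda n: math.dist(coords[n], coords[goal])
    g = {start: 0.0}
    came = {}
    open_set = [(h(start), start)]
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                      # reconstruct the route
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for nb in neighbors[cur]:
            tentative = g[cur] + math.dist(coords[cur], coords[nb])
            if tentative < g.get(nb, math.inf):
                g[nb], came[nb] = tentative, cur
                heapq.heappush(open_set, (tentative + h(nb), nb))
    return None

# Hypothetical nodes placed along corridors (x, z in meters).
coords = {"entry": (0, 0), "hall": (5, 0), "stairs": (5, 8), "room": (10, 8)}
neighbors = {"entry": ["hall"], "hall": ["entry", "stairs"],
             "stairs": ["hall", "room"], "room": ["stairs"]}
print(a_star(coords, neighbors, "entry", "room"))
```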

19 pages, 11535 KB  
Article
A Study on the Automation of Fish Species Recognition and Body Length Measurement System
by Seung-Beom Kang, Seung-Gyu Kim, Sang-Hyun Lee and Tae-Ho Im
Fishes 2024, 9(9), 349; https://doi.org/10.3390/fishes9090349 - 3 Sep 2024
Cited by 2 | Viewed by 3543
Abstract
The rapid depletion of fishery resources has led to the global implementation of Total Allowable Catch (TAC) systems. However, the current manual survey methods employed by land-based inspectors show limitations in accuracy and efficiency. This study proposes an automated system for fish species recognition and body length measurement, utilizing the RT-DETR (Real-Time Detection Transformer) model and ARCore technology to address these issues. The proposed system employs smartphone Time of Flight (ToF) functionality to measure object distance and automatically calculates the weight of 11 TAC-managed fish species by measuring their body length and height. Experimental results reveal that the RT-DETR-x model outperformed the YOLOv8x model by achieving an average mAP50 value 2.3% higher, with a mean recognition accuracy of 96.5% across the 11 species. Furthermore, the ARCore-based length measurement technique exhibited over 95% accuracy for all species. This system is expected to minimize data omissions and streamline labor-intensive processes, thereby contributing to the efficient operation of the TAC system and sustainable management of fishery resources. The study presents an innovative approach that significantly enhances the accuracy and efficiency of fishery resource management, providing a crucial technological foundation for the advancement of future fisheries management policies. Full article
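Turning a detected fish's pixel extent into a physical length given a ToF depth reading typically comes down to the pinhole-camera relation: real span = pixel span × depth / focal length in pixels. A sketch of that relation (the constants are illustrative, not the paper's calibration):

```python
# Pinhole back-projection: the real-world span subtended by a pixel span
# at a known depth. A sketch of the general principle, not the paper's
# exact measurement pipeline.
def pixel_span_to_meters(pixel_span, depth_m, focal_px):
    """Real-world span (m) of pixel_span pixels seen at depth_m meters."""
    return pixel_span * depth_m / focal_px

focal_px = 1500.0        # focal length in pixels (from camera intrinsics)
depth_m = 0.80           # ToF distance to the fish
bbox_width_px = 620.0    # detector's box width along the fish body
length = pixel_span_to_meters(bbox_width_px, depth_m, focal_px)
print(f"Estimated body length: {length:.3f} m")
```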

21 pages, 4639 KB  
Article
WebAR as a Mediation Tool Focused on Reading and Understanding of Technical Drawings Regarding Tailor-Made Projects for the Scenographic Industry
by José Eduardo Araújo Lôbo, Walter Franklin Marques Correia, João Marcelo Teixeira, José Edeson de Melo Siqueira and Rafael Alves Roberto
Appl. Sci. 2023, 13(22), 12295; https://doi.org/10.3390/app132212295 - 14 Nov 2023
Cited by 1 | Viewed by 2107
Abstract
Among the leading immersive technologies, augmented reality is one of the most promising and empowering for supporting designers in production environments. This research investigates the application of mobile augmented reality, based on the Web, as a mediation tool focused on cognitive activities of reading and understanding of technical drawings in the production and assembly of tailor-made projects of the scenographic industry. In this context, the research presents a method to use WebAR to improve the reading of technical drawings, seeking efficiency in the visualization of models and the exchange of information between professionals involved in the processes of design, production, and assembly of products, in the scope of scenography. This mediation tool was developed using Web AR platforms, compatible with native libraries (ARCore and ARKit) to ensure, first, compatibility with commonly used devices that workers or businesses can access, and second, to leverage hybrid tracking techniques that combine vision and sensors to improve the reliability of augmented reality viewing. The proposed solution adopts multiple tracking and navigation techniques in order to expand Space Skills components to provide greater exploratory freedom to users. The research process took place in light of the Design Science Research Methodology and the DSR-Model, since it aimed to develop a solution to a practical problem, as well as to produce knowledge from this process. Field experiments were conducted in two real companies, with end users on their respective mobile devices, in order to evaluate usability and behavioral intent, through the Acceptance, Intent, and Use of Technology questionnaires and perceived mental workload, NASA-TLX. The experimental results show that the adoption of this tool reduces the cognitive load in the process of reading technical drawings and project understanding. In general, its usability and intent to use provided significant levels of satisfaction, being positively accepted by all participants involved in the study. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality - 2nd Volume)

21 pages, 6823 KB  
Article
FILNet: Fast Image-Based Indoor Localization Using an Anchor Control Network
by Sikang Liu, Zhao Huang, Jiafeng Li, Anna Li and Xingru Huang
Sensors 2023, 23(19), 8140; https://doi.org/10.3390/s23198140 - 28 Sep 2023
Cited by 1 | Viewed by 2219
Abstract
This paper designs a fast image-based indoor localization method based on an anchor control network (FILNet) to improve localization accuracy and shorten the duration of feature matching. In particular, two stages are developed for the proposed algorithm. The offline stage constructs an anchor feature fingerprint database based on the concept of an anchor control network: detailed surveys infer anchor features from control-anchor information using visual–inertial odometry (VIO) based on Google ARCore. In addition, an affine invariance enhancement algorithm based on feature multi-angle screening and supplementation is developed to solve the image perspective transformation problem and complete the feature fingerprint database construction. In the online stage, a fast spatial indexing approach is adopted to improve the feature matching speed by searching for active anchors and matching only anchor features around the active anchors. Further, to improve the correct matching rate, a homography matrix filter model is used to verify the correctness of feature matching, and the correct matching points are selected. Extensive experiments in real-world scenarios are performed to evaluate the proposed FILNet. The experimental results show that in terms of affine invariance, compared with the initial local features, FILNet significantly improves the recall of feature matching from 26% to 57% when the angular deviation is less than 60 degrees. In the image feature matching stage, compared with the initial K-D tree algorithm, FILNet significantly improves the efficiency of feature matching, and the average time on the test image dataset is reduced from 30.3 ms to 12.7 ms. In terms of localization accuracy, compared with the benchmark method based on image localization, FILNet significantly improves the localization accuracy, and the percentage of images with a localization error of less than 0.1 m increases from 31.61% to 55.89%. Full article
(This article belongs to the Section Navigation and Positioning)
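The homography-based verification step has a standard OpenCV analogue: estimate a homography with RANSAC and keep only the inlier matches. A self-contained sketch on a synthetic image pair (this illustrates the general technique, not FILNet's own code):

```python
import cv2
import numpy as np

# Synthetic pair: a textured image and a warped copy under a known homography.
np.random.seed(0)
img1 = np.random.randint(0, 255, (480, 640), np.uint8)
H_true = np.array([[1.0, 0.05, 20], [-0.03, 1.0, 10], [0, 0, 1]])
img2 = cv2.warpPerspective(img1, H_true, (640, 480))

# Detect and match local features (ORB stands in for the paper's features).
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# Homography filter: RANSAC rejects geometrically inconsistent matches.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
good = [m for m, ok in zip(matches, inlier_mask.ravel()) if ok]
print(f"{len(good)}/{len(matches)} matches survive the homography filter")
```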

22 pages, 10959 KB  
Article
Automatic Tree Height Measurement Based on Three-Dimensional Reconstruction Using Smartphone
by Yulin Shen, Ruwei Huang, Bei Hua, Yuanguan Pan, Yong Mei and Minghao Dong
Sensors 2023, 23(16), 7248; https://doi.org/10.3390/s23167248 - 18 Aug 2023
Cited by 9 | Viewed by 4103
Abstract
Tree height is a crucial structural parameter in forest inventory as it provides a basis for evaluating stock volume and growth status. In recent years, close-range photogrammetry based on smartphones has attracted attention from researchers due to its low cost and non-destructive characteristics. However, such methods have specific requirements for camera angle and distance during shooting, and pre-shooting operations such as camera calibration and placement of calibration boards are necessary, which can be inconvenient in complex natural environments. We propose a tree height measurement method based on three-dimensional (3D) reconstruction. Firstly, an absolute depth map was obtained by combining ARCore and MidasNet. Secondly, Attention-UNet was improved by adding depth maps as network input to obtain the tree mask. Thirdly, the color image and depth map were fused to obtain the 3D point cloud of the scene. Then, the tree point cloud was extracted using the tree mask. Finally, the tree height was measured by extracting the axis-aligned bounding box of the tree point cloud. We built the method into an Android app, demonstrating its efficiency and automation. Our approach achieves an average relative error of 3.20% within a shooting distance range of 2–17 m, meeting the accuracy requirements of forest surveys. Full article
(This article belongs to the Section Sensor Networks)
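The final step above reduces to taking the vertical extent of the tree point cloud's axis-aligned bounding box. A NumPy sketch with a synthetic cloud (treating y as the up axis is an assumption):

```python
import numpy as np

def tree_height(points_xyz):
    """Height = max - min along the vertical (y) axis of the AABB."""
    return float(points_xyz[:, 1].max() - points_xyz[:, 1].min())

# Synthetic masked tree cloud: a narrow column from ground to crown (meters).
rng = np.random.default_rng(0)
cloud = np.column_stack([rng.normal(0, 0.1, 500),      # x
                         rng.uniform(0.0, 7.5, 500),   # y: ground to crown
                         rng.normal(0, 0.1, 500)])     # z
print(f"Estimated tree height: {tree_height(cloud):.2f} m")
```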

25 pages, 2217 KB  
Article
Pedestrian Augmented Reality Navigator
by Tanmaya Mahapatra, Nikolaos Tsiamitros, Anton Moritz Rohr, Kailashnath K and Georgios Pipelidis
Sensors 2023, 23(4), 1816; https://doi.org/10.3390/s23041816 - 6 Feb 2023
Cited by 5 | Viewed by 3539
Abstract
Navigation is often regarded as one of the most exciting use cases for Augmented Reality (AR). Current AR Head-Mounted Displays (HMDs) are rather bulky and cumbersome to use and, therefore, do not offer a satisfactory user experience for the mass market yet. However, the latest-generation smartphones offer AR capabilities out of the box, sometimes even with pre-installed apps. Apple’s framework ARKit is available on iOS devices, free to use for developers. Android similarly features a counterpart, ARCore. Both systems work well for small, spatially confined applications, but lack global positional awareness. This is a direct result of one limitation in current mobile technology: Global Navigation Satellite Systems (GNSSs) are relatively inaccurate and often cannot work indoors due to the inability of the signal to penetrate solid objects, such as walls. In this paper, we present the Pedestrian Augmented Reality Navigator (PAReNt) iOS app as a solution to this problem. The app implements a data fusion technique to increase accuracy in global positioning and showcases AR navigation as one use case for the improved data. ARKit provides data about the smartphone’s motion, which is fused with GNSS data and a Bluetooth indoor positioning system via a Kalman Filter (KF). Four different KFs with different underlying models have been implemented and independently evaluated to find the best filter. The evaluation measures the app’s accuracy against a ground truth under controlled circumstances. Two main testing methods were introduced and applied to determine which KF works best. Depending on the evaluation method, this novel approach improved the accuracy by 57% (when GPS and AR were used) or 32% (when Bluetooth and AR were used) over the raw sensor data. Full article
(This article belongs to the Section Navigation and Positioning)
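The fusion described here feeds AR odometry and noisy global fixes through a Kalman Filter. A one-dimensional sketch of that idea (the state model and noise values are illustrative; the app's four filters are more elaborate):

```python
import numpy as np

# 1-D Kalman filter sketch: AR odometry displacement drives the prediction,
# GNSS positions serve as noisy measurements. Not PAReNt's actual filters.
x, P = 0.0, 1.0          # position state (m) and its variance
Q, R = 0.01, 25.0        # AR drift noise vs. GNSS noise (illustrative)

def predict(x, P, ar_delta):
    return x + ar_delta, P + Q             # dead-reckon with AR motion

def update(x, P, gps_pos):
    K = P / (P + R)                        # Kalman gain
    return x + K * (gps_pos - x), (1 - K) * P

rng = np.random.default_rng(1)
true_pos = 0.0
for step in range(50):
    true_pos += 1.0                            # walk 1 m per step
    ar_delta = 1.0 + rng.normal(0, 0.05)       # slightly drifting odometry
    gps_pos = true_pos + rng.normal(0, 5.0)    # noisy global fix
    x, P = predict(x, P, ar_delta)
    x, P = update(x, P, gps_pos)
print(f"true={true_pos:.1f} m, fused={x:.1f} m, var={P:.2f}")
```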

14 pages, 7439 KB  
Article
A Benchmark Comparison of Four Off-the-Shelf Proprietary Visual–Inertial Odometry Systems
by Pyojin Kim, Jungha Kim, Minkyeong Song, Yeoeun Lee, Moonkyeong Jung and Hyeong-Geun Kim
Sensors 2022, 22(24), 9873; https://doi.org/10.3390/s22249873 - 15 Dec 2022
Cited by 16 | Viewed by 6263
Abstract
Commercial visual–inertial odometry (VIO) systems have been gaining attention as cost-effective, off-the-shelf, six-degree-of-freedom (6-DoF) ego-motion-tracking sensors for estimating accurate and consistent camera pose data, in addition to their ability to operate without external localization from motion capture or global positioning systems. It is unclear from existing results, however, which commercial VIO platforms are the most stable, consistent, and accurate in terms of state estimation for indoor and outdoor robotic applications. We assessed four popular proprietary VIO systems (Apple ARKit, Google ARCore, Intel RealSense T265, and Stereolabs ZED 2) through a series of both indoor and outdoor experiments that examined their positioning stability, consistency, and accuracy. Across these challenging real-world indoor and outdoor scenarios, Apple ARKit proved the most stable and the most accurate and consistent, with a relative pose error corresponding to a drift of about 0.02 m per second. We present our complete results as a benchmark comparison for the research community. Full article
(This article belongs to the Special Issue Sensors for Navigation and Control Systems)
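The quoted 0.02 m per second figure is a drift-style relative pose error. One common way to compute such a number from timestamped estimated and ground-truth positions (the trajectories below are synthetic stand-ins for VIO and reference data):

```python
import numpy as np

def drift_per_second(t, est, gt):
    """Mean relative translation error over ~1-second windows."""
    errs = []
    for i in range(len(t) - 1):
        j = np.searchsorted(t, t[i] + 1.0)   # index ~1 s later
        if j >= len(t):
            break
        d_est = est[j] - est[i]              # motion the estimate reports
        d_gt = gt[j] - gt[i]                 # motion that actually happened
        errs.append(np.linalg.norm(d_est - d_gt))
    return float(np.mean(errs))

# Synthetic 10 s walk at 10 Hz with slowly accumulating estimation error.
t = np.arange(0, 10, 0.1)
gt = np.column_stack([t * 0.5, np.zeros_like(t), np.zeros_like(t)])
est = gt + np.cumsum(np.random.default_rng(2).normal(0, 0.002, gt.shape), axis=0)
print(f"drift: {drift_per_second(t, est, gt):.3f} m/s")
```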

12 pages, 3268 KB  
Article
A 3D Scene Information Enhancement Method Applied in Augmented Reality
by Bo Li, Xiangfeng Wang, Qiang Gao, Zhimei Song, Cunyu Zou and Siyuan Liu
Electronics 2022, 11(24), 4123; https://doi.org/10.3390/electronics11244123 - 10 Dec 2022
Cited by 3 | Viewed by 1985
Abstract
To address the problem that small planes with weak texture are easily missed in augmented reality scenes, a 3D scene information enhancement method that captures such planes is proposed, based on a series of images of a real scene taken by a monocular camera. Firstly, we extract feature points from the images. Secondly, we match the feature points across images and build a three-dimensional sparse point cloud of the scene from the feature points and the camera's intrinsic parameters. Thirdly, we estimate the position and size of the planes from the sparse point cloud. The planes can then provide extra structural information for augmented reality. In this paper, an optimized feature point extraction and matching algorithm based on the Scale-Invariant Feature Transform (SIFT) is proposed, and a fast spatial plane recognition method based on RANdom SAmple Consensus (RANSAC) is established. Experiments show that the method achieves higher accuracy than Oriented FAST and Rotated BRIEF (ORB), Binary Robust Invariant Scalable Keypoints (BRISK), and SuperPoint. The proposed method can effectively solve the problem of missed plane detection in ARCore and improve the integration of virtual objects with real scenes. Full article
(This article belongs to the Section Computer Science & Engineering)
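The plane-recognition step rests on RANSAC over the sparse point cloud. A minimal plane-fitting sketch (the tolerance and synthetic cloud are illustrative; the paper's optimized variant differs in its details):

```python
import numpy as np

def ransac_plane(pts, iters=200, tol=0.02, seed=4):
    """Best plane (n, d) with n.p + d ~ 0, plus its inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                          # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(pts @ n + d) < tol   # point-to-plane distance test
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# Synthetic sparse cloud: a z = 0 tabletop plus scattered clutter (meters).
rng = np.random.default_rng(5)
plane = np.column_stack([rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300),
                         np.zeros(300)])
clutter = rng.uniform(-1, 1, (100, 3))
(n, d), inliers = ransac_plane(np.vstack([plane, clutter]))
print(f"normal ~ {np.round(n, 2)}, inliers = {int(inliers.sum())}/400")
```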

21 pages, 6513 KB  
Article
Interactive Scientific Visualization of Fluid Flow Simulation Data Using AR Technology-Open-Source Library OpenVisFlow
by Dennis Teutscher, Timo Weckerle, Ömer F. Öz and Mathias J. Krause
Multimodal Technol. Interact. 2022, 6(9), 81; https://doi.org/10.3390/mti6090081 - 14 Sep 2022
Cited by 3 | Viewed by 5038
Abstract
Computational fluid dynamics (CFD) is being used more and more in industry to understand and optimize processes such as fluid flows. At the same time, tools such as augmented reality (AR) are becoming increasingly important with the realization of Industry 5.0 to make data and processes more tangible. Bringing the two together paves the way for a new method of active learning and also for an interesting and engaging way of presenting industry processes. It also enables students to reinforce their understanding of the fundamental concepts of fluid dynamics in an interactive way. However, this potential is not really being utilized yet. For this reason, in this paper, we aim to combine these two powerful tools. Furthermore, we present the framework of a modular open-source library for the scientific visualization of fluid flow, “OpenVisFlow”, which simplifies the creation of such applications and enables seamless visualization without other software by allowing users to integrate the visualization step into the simulation code. Using this framework and the open-source ARCore extension, we show how a new markerless visualization tool can be implemented. Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)

17 pages, 40659 KB  
Article
Benchmarking Built-In Tracking Systems for Indoor AR Applications on Popular Mobile Devices
by Emanuele Marino, Fabio Bruno, Loris Barbieri and Antonio Lagudi
Sensors 2022, 22(14), 5382; https://doi.org/10.3390/s22145382 - 19 Jul 2022
Cited by 13 | Viewed by 7725
Abstract
As one of the most promising technologies for next-generation mobile platforms, Augmented Reality (AR) has the potential to radically change the way users interact with real environments enriched with various digital information. To achieve this potential, it is of fundamental importance to track and maintain accurate registration between real and computer-generated objects. Thus, it is crucially important to assess tracking capabilities. In this paper, we present a benchmark evaluation of the tracking performances of some of the most popular AR handheld devices, which can be regarded as a representative set of devices for sale in the global market. In particular, eight different next-gen devices including smartphones and tablets were considered. Experiments were conducted in a laboratory by adopting an external tracking system. The experimental methodology consisted of three main stages: calibration, data acquisition, and data evaluation. The results of the experimentation showed that the selected devices, in combination with the AR SDKs, have different tracking performances depending on the covered trajectory. Full article
(This article belongs to the Section Physical Sensors)
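The abstract does not spell out the error metric, but a common way to run such an evaluation is to rigidly align each device trajectory to the external tracker's frame (Kabsch/SVD) and report the positional RMSE. A sketch under that assumption:

```python
import numpy as np

def align_and_rmse(est, gt):
    """Least-squares rigid alignment of est onto gt, then positional RMSE."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    U, _, Vt = np.linalg.svd((est - mu_e).T @ (gt - mu_g))
    S = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = (U @ S @ Vt).T                                   # optimal rotation
    aligned = (est - mu_e) @ R.T + mu_g
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Synthetic check: a rotated, translated, lightly perturbed copy aligns back.
rng = np.random.default_rng(3)
gt = rng.normal(size=(100, 3))
th = 0.3
Rz = np.array([[np.cos(th), -np.sin(th), 0],
               [np.sin(th),  np.cos(th), 0],
               [0, 0, 1]])
est = gt @ Rz.T + np.array([5, -2, 1]) + rng.normal(0, 0.01, (100, 3))
print(f"RMSE after alignment: {align_and_rmse(est, gt):.4f} m")
```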

12 pages, 14061 KB  
Article
Efficient and Scalable Object Localization in 3D on Mobile Device
by Neetika Gupta and Naimul Mefraz Khan
J. Imaging 2022, 8(7), 188; https://doi.org/10.3390/jimaging8070188 - 8 Jul 2022
Cited by 6 | Viewed by 3016
Abstract
Two-Dimensional (2D) object detection has been an intensely discussed and researched field of computer vision. Despite numerous advancements made in the field over the years, we still need a robust approach to efficiently classify and localize objects in our environment using just our mobile devices. Moreover, 2D object detection limits the overall understanding of the detected object and does not provide any additional information in terms of its size and position in the real world. This work proposes a novel object localization solution in three dimensions (3D) for mobile devices. The proposed method combines a 2D object detection Convolutional Neural Network (CNN) model with Augmented Reality (AR) technologies to recognize objects in the environment and determine their real-world coordinates. We leverage the built-in Simultaneous Localization and Mapping (SLAM) capability of Google’s ARCore to detect planes and obtain camera information for generating cuboid proposals from an object’s 2D bounding box. The proposed method is fast and efficient for identifying everyday objects in real-world space and, unlike mobile offloading techniques, is designed to work within the limited resources of a mobile device. Full article
(This article belongs to the Special Issue Advanced Scene Perception for Augmented Reality)
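The geometric core of lifting a 2D detection into a cuboid proposal is a ray-plane intersection followed by pinhole scaling of the box extent. A sketch with illustrative intrinsics and floor plane (not the paper's implementation):

```python
import numpy as np

# Camera at origin, y down, z forward (OpenCV convention).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # fx, fy, cx, cy

def ray_plane_hit(pixel_uv, plane_n, plane_d):
    """Intersect the back-projected pixel ray with plane n.x + d = 0."""
    ray = np.linalg.inv(K) @ np.array([*pixel_uv, 1.0])
    t = -plane_d / (plane_n @ ray)
    return t * ray                                   # 3-D hit point

# ARCore-style floor plane 1.4 m below the camera; detected 2-D box (pixels).
n, d = np.array([0.0, 1.0, 0.0]), -1.4               # plane y = 1.4
u0, v0, u1, v1 = 280, 360, 360, 440
center = ray_plane_hit(((u0 + u1) / 2, (v0 + v1) / 2), n, d)
depth = center[2]
width_m = (u1 - u0) * depth / K[0, 0]   # pixel span -> meters at that depth
height_m = (v1 - v0) * depth / K[1, 1]
print(f"cuboid base at {np.round(center, 2)} m, ~{width_m:.2f} m wide")
```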

33 pages, 8277 KB  
Article
Efficacy of Vafidemstat in Experimental Autoimmune Encephalomyelitis Highlights the KDM1A/RCOR1/HDAC Epigenetic Axis in Multiple Sclerosis
by Fernando Cavalcanti, Elena Gonzalez-Rey, Mario Delgado, Clara P. Falo, Leyre Mestre, Carmen Guaza, Francisco O’Valle, Michele M. P. Lufino, Jordi Xaus, Cristina Mascaró, Serena Lunardi, Natalia Sacilotto, Paola Dessanti, David Rotllant, Xavier Navarro, Mireia Herrando-Grabulosa, Carlos Buesa and Tamara Maes
Pharmaceutics 2022, 14(7), 1420; https://doi.org/10.3390/pharmaceutics14071420 - 6 Jul 2022
Cited by 9 | Viewed by 4645
Abstract
Lysine specific demethylase 1 (LSD1; also known as KDM1A) is an epigenetic modulator that modifies the histone methylation status. KDM1A forms a part of protein complexes that regulate the expression of genes involved in the onset and progression of diseases such as cancer, central nervous system (CNS) disorders, viral infections, and others. Vafidemstat (ORY-2001) is a clinical stage inhibitor of KDM1A in development for the treatment of neurodegenerative and psychiatric diseases. However, the role of ORY-2001 targeting KDM1A in neuroinflammation remains to be explored. Here, we investigated the effect of ORY-2001 on immune-mediated and virus-induced encephalomyelitis, two experimental models of multiple sclerosis and neuronal damage. Oral administration of ORY-2001 ameliorated clinical signs, reduced lymphocyte egress and infiltration of immune cells into the spinal cord, and prevented demyelination. Interestingly, ORY-2001 was more effective and/or faster acting than a sphingosine 1-phosphate receptor antagonist in the effector phase of the disease and reduced the inflammatory gene expression signature characteristic of EAE in the CNS of mice more potently. In addition, ORY-2001 induced gene expression changes concordant with a potential neuroprotective function in the brain and spinal cord and reduced neuronal glutamate excitotoxicity-derived damage in explants. These results pointed to ORY-2001 as a promising CNS epigenetic drug able to target neuroinflammatory and neurodegenerative diseases and provided preclinical support for the subsequent design of early-stage clinical trials. Full article

12 pages, 3180 KB  
Article
Learn2Write: Augmented Reality and Machine Learning-Based Mobile App to Learn Writing
by Md. Nahidul Islam Opu, Md. Rakibul Islam, Muhammad Ashad Kabir, Md. Sabir Hossain and Mohammad Mainul Islam
Computers 2022, 11(1), 4; https://doi.org/10.3390/computers11010004 - 27 Dec 2021
Cited by 13 | Viewed by 8554
Abstract
Augmented reality (AR) has been widely used in education, particularly for child education. This paper presents the design and implementation of a novel mobile app, Learn2Write, using machine learning techniques and augmented reality to teach alphabet writing. The app has two main features: (i) guided learning to teach users how to write the alphabet and (ii) on-screen and AR-based handwriting testing using machine learning. A learner needs to write on the mobile screen in on-screen testing, whereas AR-based testing allows one to evaluate writing on paper or a board in a real-world environment. We implement a novel approach to use machine learning for AR-based testing to detect an alphabet written on a board or paper. It detects the handwritten alphabet using our developed machine learning model. After that, a 3D model of that alphabet appears on the screen with its pronunciation/sound. The key benefit of our approach is that it allows the learner to use a handwritten alphabet. As we have used marker-less augmented reality, it does not require a static image as a marker. The app was built with the ARCore SDK for Unity. We further evaluated and quantified the performance of our app on multiple devices. Full article
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education)
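The AR-based test needs a trained character classifier behind it. An illustrative Keras definition of such a model (the authors' actual architecture is not given in the abstract):

```python
import tensorflow as tf

# Illustrative letter classifier: 28x28 grayscale crops of handwritten
# characters -> 26 class scores. A model like this can be trained offline
# and then queried on-device; it is not the authors' published network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(26, activation="softmax"),  # A-Z
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```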
