Search Results (461)

Search Parameters:
Keywords = digital scene

20 pages, 3528 KiB  
Article
High-Precision Optimization of BIM-3D GIS Models for Digital Twins: A Case Study of Santun River Basin
by Zhengbing Yang, Mahemujiang Aihemaiti, Beilikezi Abudureheman and Hongfei Tao
Sensors 2025, 25(15), 4630; https://doi.org/10.3390/s25154630 - 26 Jul 2025
Viewed by 466
Abstract
The integration of Building Information Modeling (BIM) and 3D Geographic Information System (3D GIS) models provides high-precision spatial data for digital twin watersheds. To tackle the challenges of large data volumes and rendering latency in integrated models, this study proposes a three-step framework that uses Industry Foundation Classes (IFC) as the base model and Open Scene Graph Binary (OSGB) as the target model: (1) geometric optimization through an angular weighting (AW)-controlled Quadric Error Metrics (QEM) algorithm; (2) Level of Detail (LOD) hierarchical mapping to establish associations between the IFC and OSGB models and to redesign the scene paging logic; (3) coordinate registration by converting the IFC model’s local coordinate system to the global coordinate system and achieving spatial alignment via the seven-parameter method. Applied to the Santun River Basin digital twin project, experiments with 10 water gate models show that the AW-QEM algorithm reduces average loading time by 15% compared to traditional QEM while maintaining 97% geometric accuracy, demonstrating the method’s efficiency in balancing precision and rendering performance. Full article
(This article belongs to the Section Intelligent Sensors)
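The seven-parameter method named in the abstract is the standard Helmert (Bursa-Wolf) similarity transformation: three translations, three small rotations, and a scale factor. A minimal numpy sketch with purely illustrative parameter values (the paper's solved parameters are not given) might look like this:

```python
import numpy as np

def helmert_transform(points, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Seven-parameter (Helmert/Bursa-Wolf) similarity transformation.

    points: (N, 3) local coordinates; tx, ty, tz: translations (m);
    rx, ry, rz: small rotation angles (rad); scale_ppm: scale in ppm.
    """
    # Linearized small-angle rotation matrix.
    R = np.array([
        [1.0,  rz, -ry],
        [-rz, 1.0,  rx],
        [ ry, -rx, 1.0],
    ])
    s = 1.0 + scale_ppm * 1e-6
    return s * points @ R.T + np.array([tx, ty, tz])

# Hypothetical parameters, e.g. solved from shared control points:
local = np.array([[512.3, 208.7, 34.1]])
print(helmert_transform(local, 4.5e5, 4.9e6, 0.0, 1e-6, -2e-6, 3e-6, 1.2))
```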

17 pages, 2942 KiB  
Article
Visual Perception and Fixation Patterns in an Individual with Ventral Simultanagnosia, Integrative Agnosia and Bilateral Visual Field Loss
by Isla Williams, Andrea Phillipou, Elsdon Storey, Peter Brotchie and Larry Abel
Neurol. Int. 2025, 17(7), 105; https://doi.org/10.3390/neurolint17070105 - 10 Jul 2025
Viewed by 238
Abstract
Background/Objectives: As high-acuity vision is limited to a very small visual angle, examination of a scene requires multiple fixations. Simultanagnosia, a disorder wherein elements of a scene can be perceived correctly but cannot be integrated into a coherent whole, has been parsed into dorsal and ventral forms. In ventral simultanagnosia, limited visual integration is possible. This case study was the first to record gaze during the presentation of a series of visual stimuli which required the processing of local and global elements. We hypothesised that gaze patterns would differ with successful processing and that feature integration could be disrupted by distractors. Methods: The patient received a neuropsychological assessment and underwent CT and MRI. Eye movements were recorded during the following tasks: (1) famous face identification, (2) facial emotion recognition, (3) identification of Ishihara colour plates, and (4) identification of both local and global letters in Navon composite letters, presented either alone or surrounded by filled black circles, which we hypothesised would impair global processing by disrupting fixation. Results: The patient identified no famous faces but scanned them qualitatively normally. The only emotion to be consistently recognised was happiness, whose scanpath differed from those of the other emotions. She identified none of the Ishihara plates, although her colour vision was normal on the FM-15; she could even map an unseen digit with fixations and trace it with her finger. For plain Navon figures, she correctly identified 20/20 local and global letters; for the “dotted” figures, she was correct 19/20 times for local letters and 0/20 for global letters (chi-squared: NS for local; p < 0.0001 for global), with similar fixation of salient elements in both conditions. Conclusions: Contrary to our hypothesis, gaze behaviour was largely independent of the ability to process global stimuli, showing for the first time that normal acquisition of visual information did not ensure its integration into a percept. The core defect lay in processing, not acquisition. In the novel Navon task, adding distractors abolished feature integration without affecting the fixation of the salient elements, confirming for the first time that distractors could disrupt the processing, not the acquisition, of visual information in this disorder. Full article
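The two Navon contrasts reported above can be checked with a quick contingency-table test; this is an illustrative scipy sketch, not necessarily the authors' exact analysis:

```python
from scipy.stats import chi2_contingency

# Rows: plain vs dotted Navon figures; columns: correct vs incorrect
# identifications, taken from the counts reported in the abstract.
local_counts  = [[20, 0], [19, 1]]   # local letters: 20/20 vs 19/20
global_counts = [[20, 0], [0, 20]]   # global letters: 20/20 vs 0/20

for name, table in [("local", local_counts), ("global", global_counts)]:
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{name}: chi2 = {chi2:.2f}, p = {p:.2g}")
# local is non-significant; global gives p << 0.0001, matching the text.
```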

21 pages, 3533 KiB  
Article
Artificial Intelligence for Forensic Image Analysis in Bullet Hole Comparison: A Preliminary Study
by Guilherme Pina Cardim, Thiago de Souza Duarte, Henrique Pina Cardim, Wallace Casaca, Rogério Galante Negri, Flávio Camargo Cabrera, Renivaldo José dos Santos, Erivaldo Antônio da Silva and Mauricio Araujo Dias
NDT 2025, 3(3), 16; https://doi.org/10.3390/ndt3030016 - 8 Jul 2025
Viewed by 357
Abstract
The application of artificial intelligence within forensic image analysis marks a significant step forward for the non-destructive examination of evidence, a crucial practice for maintaining the integrity of a crime scene. While non-destructive testing (NDT) methods are established, the integration of AI, particularly for analyzing ballistic evidence, requires further exploration. This preliminary study directly addresses this gap by focusing on the use of deep learning to automate the analysis of bullet holes. This work investigated the performance of two state-of-the-art convolutional neural networks (CNNs), YOLOv8 and R-CNN, for detecting ballistic markings in digital images. The approach treats digital image analysis itself as a form of non-destructive testing, thereby preserving the original evidence. The findings demonstrate the potential of AI to augment forensic investigations by providing an objective, data-driven alternative to traditional assessments and increasing the efficiency of evidence processing. This research confirms the feasibility and relevance of leveraging advanced AI models to develop powerful new tools for Forensic Science. It is expected that this study will contribute worldwide by helping (1) the police to indict criminals and establish innocence and (2) the justice system to judge and convict those guilty of crimes. Full article
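For a sense of scale, fine-tuning an off-the-shelf YOLOv8 detector on a labeled bullet-hole dataset takes only a few lines with the ultralytics package; the dataset config name and image path below are hypothetical placeholders, not the authors' data:

```python
from ultralytics import YOLO

# Start from pretrained COCO weights and fine-tune on bullet-hole labels.
model = YOLO("yolov8n.pt")
model.train(data="bullet_holes.yaml", epochs=100, imgsz=640)  # hypothetical config

# Detect ballistic markings in a new image without touching the evidence.
results = model.predict("crime_scene_photo.jpg", conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)
```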

36 pages, 401 KiB  
Article
The Democracy-Promotion Metanarrative as a Set of Frames: Is There an Indigenous Counter-Narrative?
by Hajer Ben Hadj Salem
Religions 2025, 16(7), 850; https://doi.org/10.3390/rel16070850 - 27 Jun 2025
Viewed by 461
Abstract
The Tunisian uprisings projected an elusive surrealistic scene that was an aberration in a part of the world where Islamic ideology had been considered the only rallying force and a midwife for regime change. However, this sense of exceptionalism was short-lived, as the religiously zealous Islamist expats and their militant executive wings infiltrated the power vacuum to resume their suspended Islamization project of the 1980s. Brandishing electoral “legitimacy”, they attempted to reframe the burgeoning indigenous democratization project, rooted in an evolving Tunisian intellectual and cultural heritage, along the neocolonial ideological underpinnings of the “Arab Spring” metanarrative, which proffers the thesis that democracy can be promoted in the Muslim world through so-called “Moderate Muslims”. This paper challenges this dominant narrative by offering a counter-narrative about the political transition in Tunisia. It takes stock of the multidisciplinary conceptual and analytical frameworks elaborated upon in postcolonial theory, social movement theory, cognitive neuroscience theories, and digital communication theories. It draws heavily on socio-narrative translation theory. The corpus analyzed in this work consists of disparate yet corroborating narratives cutting across modes, genres, and cultural and linguistic boundaries, and is grounded in insider participant observation. This work opens an alternative inquiry into how the processes of cross-cultural knowledge production and the power dynamics they sustain have helped shape the course of the transition since 2011. Full article
(This article belongs to the Special Issue Transitions of Islam and Democracy: Thinking Political Theology)
9 pages, 1819 KiB  
Proceeding Paper
Magic of Water: Exploration of Production Process with Fluid Effects in Film and Advertisement in Computer-Aided Design
by Nan-Hu Lu
Eng. Proc. 2025, 98(1), 20; https://doi.org/10.3390/engproc2025098020 - 27 Jun 2025
Viewed by 291
Abstract
Fluid effects are important in films and advertisements, where their realism and aesthetic quality directly impact the visual experience. With the rapid advancement of digital technology and computer-aided design (CAD), modern visual effects are used to simulate various water-related phenomena, such as flowing water, ocean waves, and raindrops. However, creating these realistic effects does not depend solely on advanced software and hardware; it also requires visual effects artists to master both the technical and artistic dimensions of their craft. In the creation process, the artist must possess a keen aesthetic sense and innovative thinking to craft stunning visual effects while overcoming technological constraints. Whether depicting the grandeur of turbulent ocean scenes or the romance of gentle rain, the artist needs to transform fluid effects into an expressive visual language that enhances emotional impact, aligning with the storyline and the director’s vision. The production process of fluid effects typically involves the following critical steps. First, the visual effects artist utilizes CAD-based tools, particle systems, or fluid simulation software to model the dynamic behavior of water. This process demands a solid foundation in physics and the ability to adjust parameters flexibly according to the specific needs of the scene, ensuring that the fluid motion appears natural and smooth. Next, in the rendering stage, the simulated fluid is transformed into realistic imagery, requiring significant computational power and precise handling of lighting effects. Finally, in the compositing stage, the fluid effects are seamlessly integrated with live-action footage, making the visual effects appear to be part of the actual scene. This study explored the technical details of creating fluid effects using free software such as Blender and elucidated how advanced CAD tools are utilized to achieve complex water effects. Additionally, case studies were conducted to illustrate the creative processes involved in visual effects production and to show how technology can be blended seamlessly with artistry to create unforgettable visual spectacles. Full article
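As a concrete example of the simulation step, Blender's Python API can set up a basic liquid simulation in a few lines; this is a minimal sketch meant to be run inside Blender (it uses the Mantaflow fluid system available since Blender 2.82), not the production setup described in the paper:

```python
import bpy

# Domain: a cube that bounds the liquid simulation.
bpy.ops.mesh.primitive_cube_add(size=4)
domain = bpy.context.active_object
mod = domain.modifiers.new(name="Fluid", type='FLUID')
mod.fluid_type = 'DOMAIN'
mod.domain_settings.domain_type = 'LIQUID'

# Inflow: a small sphere that continuously emits liquid into the domain.
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.3, location=(0.0, 0.0, 1.0))
flow = bpy.context.active_object
fmod = flow.modifiers.new(name="Fluid", type='FLUID')
fmod.fluid_type = 'FLOW'
fmod.flow_settings.flow_type = 'LIQUID'
fmod.flow_settings.flow_behavior = 'INFLOW'
# Baking, rendering, and compositing then follow the stages described above.
```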

29 pages, 3799 KiB  
Article
Forest Three-Dimensional Reconstruction Method Based on High-Resolution Remote Sensing Image Using Tree Crown Segmentation and Individual Tree Parameter Extraction Model
by Guangsen Ma, Gang Yang, Hao Lu and Xue Zhang
Remote Sens. 2025, 17(13), 2179; https://doi.org/10.3390/rs17132179 - 25 Jun 2025
Viewed by 423
Abstract
Efficient and accurate acquisition of tree distribution and three-dimensional geometric information in forest scenes, along with three-dimensional reconstruction of entire forest environments, holds significant application value in precision forestry and forestry digital twins. However, due to complex vegetation structures, fine geometric details, and severe occlusions in forest environments, existing methods, whether vision-based or LiDAR-based, still face challenges such as high data acquisition costs, feature extraction difficulties, and limited reconstruction accuracy. This study focuses on reconstructing tree distribution and extracting key individual tree parameters, and it proposes a forest 3D reconstruction framework based on high-resolution remote sensing images. Firstly, an optimized Mask R-CNN model was employed to segment individual tree crowns and extract distribution information. Then, a Tree Parameter and Reconstruction Network (TPRN) was constructed to directly estimate key structural parameters (height, DBH, etc.) from crown images and generate tree 3D models. Subsequently, the 3D forest scene could be reconstructed by combining the distribution information and the tree 3D models. In addition, to address data scarcity, a hybrid training strategy integrating virtual and real data was proposed for crown segmentation and individual tree parameter estimation. Experimental results demonstrated that the proposed method could reconstruct an entire forest scene within seconds while accurately preserving tree distribution and individual tree attributes. In two real-world plots, the tree counting accuracy exceeded 90%, with an average tree localization error under 0.2 m. The TPRN achieved parameter extraction accuracies of 92.7% and 96% for tree height, and 95.4% and 94.1% for DBH. Furthermore, the generated individual tree models achieved average Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) scores of 11.24 and 0.53, respectively, validating the quality of the reconstruction. This approach enables fast and effective large-scale forest scene reconstruction using only a single remote sensing image as input, demonstrating significant potential for applications in both dynamic forest resource monitoring and forestry-oriented digital twin systems. Full article
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)
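The crown-segmentation stage can be prototyped with torchvision's stock Mask R-CNN as a stand-in for the paper's optimized variant; in practice the COCO weights below would be replaced by a model fine-tuned on annotated crowns, and the image path is a placeholder:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Stock Mask R-CNN (COCO weights) as a stand-in for the optimized model.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("plot_orthophoto.png").convert("RGB"))
with torch.no_grad():
    pred = model([image])[0]

# Each confident instance mask is one candidate tree crown; mask centroids
# give the tree distribution map fed to the reconstruction stage.
keep = pred["scores"] > 0.5
crowns = pred["masks"][keep]   # (N, 1, H, W) soft instance masks
boxes = pred["boxes"][keep]    # crown bounding boxes
print(f"{int(keep.sum())} candidate crowns detected")
```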

22 pages, 7106 KiB  
Article
Enhancing Highway Scene Understanding: A Novel Data Augmentation Approach for Vehicle-Mounted LiDAR Point Cloud Segmentation
by Dalong Zhou, Yuanyang Yi, Yu Wang, Zhenfeng Shao, Yanjun Hao, Yuyan Yan, Xiaojin Zhao and Junkai Guo
Remote Sens. 2025, 17(13), 2147; https://doi.org/10.3390/rs17132147 - 23 Jun 2025
Viewed by 394
Abstract
The intelligent extraction of highway assets is pivotal for advancing transportation infrastructure and autonomous systems, yet traditional methods relying on manual inspection or 2D imaging struggle with sparse, occluded environments and class imbalance. This study proposes an enhanced MinkUNet-based framework to address data scarcity, occlusion, and imbalance in highway point cloud segmentation. A large-scale dataset (PEA-PC Dataset) was constructed, covering six key asset categories and addressing the lack of specialized highway datasets. A hybrid conical masking augmentation strategy was designed to simulate natural occlusions and enhance local feature retention, while semi-supervised learning prioritized foreground differentiation. The experimental results showed that the overall mIoU reached 73.8%, with the IoU of bridge railings and emergency obstacles exceeding 95%. The IoU of columnar assets increased from 2.6% to 29.4% through occlusion perception enhancement, demonstrating the effectiveness of this method in improving object recognition accuracy. The framework balances computational efficiency and robustness, offering a scalable solution for sparse highway scenes. However, challenges remain in segmenting vegetation-occluded pole-like assets due to partial data loss. This work highlights the efficacy of tailored augmentation and semi-supervised strategies in refining 3D segmentation, advancing applications in intelligent transportation and digital infrastructure. Full article
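The conical masking idea can be read as removing all points inside a cone between a virtual occluder and the scene; the sketch below is one plausible interpretation under that assumption, not the paper's exact procedure:

```python
import numpy as np

def conical_mask(points, apex, axis, half_angle_deg):
    """Drop points falling inside a cone to simulate a natural occlusion.

    points: (N, 3); apex: cone tip; axis: unit direction vector;
    half_angle_deg: opening half-angle of the cone.
    """
    v = points - apex
    dist = np.linalg.norm(v, axis=1) + 1e-9
    cos_theta = (v @ axis) / dist
    inside = cos_theta > np.cos(np.radians(half_angle_deg))
    return points[~inside]

cloud = np.random.rand(100_000, 3) * 50.0  # synthetic stand-in scene
augmented = conical_mask(cloud,
                         apex=np.array([25.0, 25.0, 0.0]),
                         axis=np.array([0.0, 0.0, 1.0]),
                         half_angle_deg=10.0)
print(cloud.shape, "->", augmented.shape)
```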

18 pages, 14746 KiB  
Article
PRJ: Perception–Retrieval–Judgement for Generated Images
by Qiang Fu, Zonglei Jing, Zonghao Ying and Xiaoqian Li
Electronics 2025, 14(12), 2354; https://doi.org/10.3390/electronics14122354 - 9 Jun 2025
Viewed by 423
Abstract
The rapid progress of generative AI has enabled remarkable creative capabilities, yet it also raises urgent concerns regarding the safety of AI-generated visual content in real-world applications such as content moderation, platform governance, and digital media regulation. This includes unsafe material such as sexually explicit images, violent scenes, hate symbols, propaganda, and unauthorized imitations of copyrighted artworks. Existing image safety systems often rely on rigid category filters and produce binary outputs, lacking the capacity to interpret context or reason about nuanced, adversarially induced forms of harm. In addition, standard evaluation metrics (e.g., attack success rate) fail to capture the semantic severity and dynamic progression of toxicity. To address these limitations, we propose Perception–Retrieval–Judgement (PRJ), a cognitively inspired framework that models toxicity detection as a structured reasoning process. PRJ follows a three-stage design: it first transforms an image into descriptive language (perception), then retrieves external knowledge related to harm categories and traits (retrieval), and finally evaluates toxicity based on legal or normative rules (judgement). This language-centric structure enables the system to detect both explicit and implicit harms with improved interpretability and categorical granularity. In addition, we introduce a dynamic scoring mechanism based on a contextual toxicity risk matrix to quantify harmfulness across different semantic dimensions. Experiments show that PRJ surpasses existing safety checkers in detection accuracy and robustness while uniquely supporting structured category-level toxicity interpretation. Full article
(This article belongs to the Special Issue Trustworthy Deep Learning in Practice)
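The abstract does not spell out the contextual toxicity risk matrix, but dynamic scoring of this kind typically weights per-category severity by the detected likelihood; the categories and weights below are hypothetical placeholders:

```python
# Hypothetical severity weights per harm category (1-5 scale).
SEVERITY = {"violence": 4, "sexual": 5, "hate": 5, "copyright": 2}

def toxicity_score(findings):
    """findings: {category: likelihood in [0, 1]} from the judgement stage.
    Returns a graded score rather than a binary safe/unsafe flag."""
    return sum(SEVERITY.get(cat, 1) * p for cat, p in findings.items())

print(toxicity_score({"violence": 0.8, "copyright": 0.3}))  # 3.8
```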

31 pages, 9733 KiB  
Article
Gamifying Sociological Surveys Through Serious Games—A Data Analysis Approach Applied to Multiple-Choice Question Responses Datasets
by Alexandros Gazis and Eleftheria Katsiri
Computers 2025, 14(6), 224; https://doi.org/10.3390/computers14060224 - 7 Jun 2025
Viewed by 727
Abstract
E-polis is a serious digital game designed to gamify sociological surveys studying young people’s political opinions. In this platform game, players navigate a digital world, encountering quests posing sociological questions. Players’ answers shape the city-game world, altering building structures based on their choices. E-polis is a serious game, not a government simulation, aiming to understand players’ behaviors and opinions; thus, we do not train the players but rather seek to understand them and help them visualize how their choices shape a city’s future. There are no correct or incorrect answers. Moreover, our game utilizes a novel middleware architecture for development, diverging from the typical asset-prefab-scene and script segregation. This article presents the data layer of our game’s middleware, specifically focusing on data analysis based on respondents’ gameplay answers. E-polis represents an innovative approach to gamifying sociological research, providing a unique platform for gathering and analyzing data on political opinions among youth and contributing to the broader field of serious games. Full article
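At the data layer, multiple-choice gameplay answers reduce naturally to per-quest response distributions; a minimal pandas sketch of that aggregation (with made-up rows, since the dataset itself is not shown here) could be:

```python
import pandas as pd

# Hypothetical gameplay log: one row per answered quest.
log = pd.DataFrame({
    "player_id": [1, 1, 2, 2, 3],
    "quest":     ["q1", "q2", "q1", "q2", "q1"],
    "answer":    ["A", "C", "B", "C", "A"],
})

# Answer distribution per quest, as the middleware's data layer might report it.
distribution = log.groupby(["quest", "answer"]).size().unstack(fill_value=0)
print(distribution)
```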

19 pages, 3237 KiB  
Article
Therapeutic Potentials of Virtual Blue Spaces: A Study on the Physiological and Psychological Health Benefits of Virtual Waterscapes
by Su-Hsin Lee, Yi-Chien Chu, Li-Wen Wang and Shu-Chen Tsai
Healthcare 2025, 13(11), 1353; https://doi.org/10.3390/healthcare13111353 - 5 Jun 2025
Viewed by 758
Abstract
Background: Physical and mental health issues are increasingly becoming a global focus of attention, and telemedicine is widely attracting academic interest. Objectives: This exploratory study aimed to investigate the therapeutic potential of immersive virtual blue spaces for individuals with distinct lifestyle backgrounds—specifically, office workers and retirees. The research explores how different virtual waterscapes influence emotional and physiological states in populations with varying stress profiles and life rhythms. Methods: A mixed-methods design was employed, combining quantitative measurements with qualitative interviews. In September 2023, forty participants (20 office workers and 20 retirees) from Hualien, Taiwan, were exposed to 360° VR simulations of three blue environments: a forest stream, a forest waterfall, and a beach scene. Pre- and post-session assessments included physiological indicators (blood pressure and heart rate) and emotional states measured using the Profile of Mood States (POMS) scale. Results: Significant physiological relaxation was observed among retirees. Office workers demonstrated greater emotional improvements, with noticeable variation depending on the type of virtual environment. Comparative analysis highlighted the stream landscape’s unique benefit for reducing depression and enhancing positive mood states. Thematic findings from post-session interviews further indicated that emotional responses were moderated by individual background and prior emotional experiences. Conclusions: These findings underscore the short-term therapeutic potential of virtual blue spaces for diverse user groups and reveal the influence of personal context on their effectiveness. The study supports the integration of VR-based nature exposure into personalized digital healthcare interventions and offers a foundation for future development in immersive therapeutic technologies. Full article
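Pre- and post-session physiological comparisons of the kind described here are commonly analyzed with paired tests; the numbers below are invented for illustration, since the study's raw data and exact statistics are not given in the abstract:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical pre/post systolic blood pressure (mmHg) for one group.
pre = np.array([128, 135, 122, 140, 131])
post = np.array([121, 130, 119, 134, 127])

t, p = ttest_rel(pre, post)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```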

22 pages, 12284 KiB  
Article
EcoDetect-YOLOv2: A High-Performance Model for Multi-Scale Waste Detection in Complex Surveillance Environments
by Jing Su, Ruihan Chen, Mingzhi Li, Shenlin Liu, Guobao Xu and Zanhong Zheng
Sensors 2025, 25(11), 3451; https://doi.org/10.3390/s25113451 - 30 May 2025
Cited by 1 | Viewed by 577
Abstract
Conventional waste monitoring relies heavily on manual inspection, while most detection models are trained on close-range, simplified datasets, limiting their applicability for real-world surveillance. Even with surveillance imagery, challenges such as cluttered backgrounds, scale variation, and small object sizes often lead to missed detections and reduced robustness. To address these challenges, this study introduces EcoDetect-YOLOv2, a lightweight and high-efficiency object detection model developed using the Intricate Environment Waste Exposure Detection (IEWED) dataset. Building upon the YOLOv8s architecture, EcoDetect-YOLOv2 incorporates a P2 detection layer to enhance sensitivity to small objects. The integration of an efficient multi-scale attention (EMA) mechanism prior to the P2 head further improves the model’s capacity to detect small-scale targets, while bolstering robustness against cluttered backgrounds and environmental noise as well as generalizability across scale variations. In the feature fusion stage, a Dynamic Upsampling Module (Dysample) replaces traditional nearest-neighbor upsampling to yield higher-quality feature maps, thereby facilitating improved discrimination of overlapping and degraded waste particles. To reduce computational overhead and inference latency without sacrificing detection accuracy, Ghost Convolution (GhostConv) replaces conventional convolution layers within the neck. Based on this, a GhostResBottleneck structure is proposed, along with a novel ResGhostCSP module, designed via a one-shot aggregation strategy, to replace the original C2f module. Experiments conducted on the IEWED dataset, which features multi-object, multi-class, and highly complex real-world scenes, demonstrate that EcoDetect-YOLOv2 outperforms the baseline YOLOv8s by 1.0%, 4.6%, 4.8%, and 3.1% in precision, recall, mAP50, and mAP50:95, respectively, while reducing the parameter count by 19.3%. These results highlight the model’s effectiveness in real-time, multi-object waste detection, providing a scalable and efficient tool for automated urban and digital governance. Full article
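GhostConv, the building block swapped into the neck here, is the Ghost module from GhostNet: a regular convolution produces half the output channels and a cheap depthwise convolution generates the rest. A minimal PyTorch sketch (kernel sizes and activation are common defaults, not necessarily the paper's exact settings):

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost convolution: half the output channels from a regular conv,
    the other half from a cheap depthwise op on the first half."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        c_half = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

x = torch.randn(1, 64, 80, 80)
print(GhostConv(64, 128)(x).shape)   # torch.Size([1, 128, 80, 80])
```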

25 pages, 3655 KiB  
Article
A Multi-Sensor Fusion Approach Combined with RandLA-Net for Large-Scale Point Cloud Segmentation in Power Grid Scenario
by Tianyi Li, Shuanglin Li, Zihan Xu, Nizar Faisal Alkayem, Qiao Bao and Qiang Wang
Sensors 2025, 25(11), 3350; https://doi.org/10.3390/s25113350 - 26 May 2025
Viewed by 731
Abstract
With the continuous expansion of power grids, traditional manual inspection methods face numerous challenges, including low efficiency, high costs, and significant safety risks. As critical infrastructure in power transmission systems, power grid towers require intelligent recognition and monitoring to ensure the reliable and stable operation of power grids. However, existing methods struggle with accuracy and efficiency when processing large-scale point cloud data in complex environments. To address these challenges, this paper presents a comprehensive approach combining multi-sensor fusion and deep learning for power grid tower recognition. A data acquisition scheme that integrates LiDAR and a binocular depth camera, implementing the FAST-LIO algorithm, is proposed to achieve the spatiotemporal synchronization and fusion of sensor data. This integration enables the construction of a colored point cloud dataset with rich visual and geometric features. Based on the RandLA-Net framework, an efficient processing method for large-scale point cloud segmentation is developed and optimized explicitly for power grid tower scenarios. Experimental validation demonstrates that the proposed method achieves 90.8% precision in tower body recognition and maintains robust performance under various environmental conditions. The proposed approach successfully processes point cloud data containing over ten million points while effectively handling challenges such as uneven point distribution and environmental interference. These results validate the reliability of the proposed method in providing technical support for intelligent inspection and the management of power grid infrastructure. Full article
(This article belongs to the Special Issue Progress in LiDAR Technologies and Applications)
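The colored point cloud construction amounts to projecting LiDAR points into the synchronized camera image and sampling RGB per point; this numpy sketch assumes an already-calibrated intrinsic matrix K and extrinsic transform (in the paper, spatiotemporal alignment comes from FAST-LIO), with synthetic inputs for illustration:

```python
import numpy as np

def colorize(points, image, K, T_cam_lidar):
    """Project LiDAR points into a camera image and sample RGB per point.

    points: (N, 3) in the LiDAR frame; image: (H, W, 3) uint8;
    K: (3, 3) camera intrinsics; T_cam_lidar: (4, 4) extrinsics.
    """
    h, w = image.shape[:2]
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0.1                     # keep points ahead of camera
    uv = (K @ cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)      # perspective division
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return points[in_front][ok], image[uv[ok, 1], uv[ok, 0]]

pts = np.random.rand(1000, 3) * 10.0               # synthetic cloud
img = np.zeros((480, 640, 3), dtype=np.uint8)      # synthetic frame
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
xyz, rgb = colorize(pts, img, K, np.eye(4))
```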

27 pages, 27742 KiB  
Article
Denoising Autoencoder and Contrast Enhancement for RGB and GS Images with Gaussian Noise
by Armando Adrián Miranda-González, Alberto Jorge Rosales-Silva, Dante Mújica-Vargas, Edwards Ernesto Sánchez-Ramírez, Juan Pablo Francisco Posadas-Durán, Dilan Uriostegui-Hernandez, Erick Velázquez-Lozada and Francisco Javier Gallegos-Funes
Mathematics 2025, 13(10), 1621; https://doi.org/10.3390/math13101621 - 15 May 2025
Viewed by 457
Abstract
Robust image processing systems require input images that closely resemble real-world scenes. However, external factors, such as adverse environmental conditions or errors in data transmission, can alter the captured image, leading to information loss. These factors may include poor lighting conditions at the time of image capture or the presence of noise, necessitating procedures to restore the data to a representation as close as possible to the real scene. This research project proposes an architecture based on an autoencoder capable of handling both poor lighting conditions and noise in digital images simultaneously, rather than processing them separately. The proposed methodology has been demonstrated to outperform competing techniques specialized in noise reduction or contrast enhancement. This is supported by both objective numerical metrics and visual evaluations using a validation set with varying lighting characteristics. The results indicate that the proposed methodology effectively restores images by improving contrast and reducing noise without requiring separate processing steps. Full article
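A stripped-down convolutional denoising autoencoder illustrates the training principle (corrupt the input, regress the clean target); the paper's joint denoising-and-contrast-enhancement architecture is considerably richer than this sketch:

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Minimal convolutional autoencoder for image restoration."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

x = torch.rand(4, 3, 64, 64)                          # clean RGB batch
noisy = (x + 0.1 * torch.randn_like(x)).clamp(0, 1)   # add Gaussian noise
model = DenoisingAE()
loss = nn.functional.mse_loss(model(noisy), x)        # train to recover x
```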

31 pages, 2150 KiB  
Article
A Self-Supervised Point Cloud Completion Method for Digital Twin Smart Factory Scenario Construction
by Yongjie Xu, Haihua Zhu and Barmak Honarvar Shakibaei Asli
Electronics 2025, 14(10), 1934; https://doi.org/10.3390/electronics14101934 - 9 May 2025
Cited by 1 | Viewed by 1000
Abstract
In the development of digital twin (DT) workshops, constructing accurate DT models has become a key step toward enabling intelligent manufacturing. To address challenges such as incomplete data acquisition, noise sensitivity, and the heavy reliance on manual annotations in traditional modeling methods, this paper proposes a self-supervised deep learning approach for point cloud completion. The proposed model integrates self-supervised learning strategies for inferring missing regions, a Feature Pyramid Network (FPN), and cross-attention mechanisms to extract critical geometric and structural features from incomplete point clouds, thereby reducing dependence on labeled data and improving robustness to noise and incompleteness. Building on this foundation, a point cloud-based DT workshop modeling framework is introduced, incorporating transfer learning techniques to enable domain adaptation from synthetic to real-world industrial datasets, which significantly reduces the reliance on high-quality industrial point cloud data. Experimental results demonstrate that the proposed method achieves superior completion and reconstruction performance on both public benchmarks and real-world workshop scenarios, achieving an average CD-2 score of 15.96 on the 3D-EPN dataset. Furthermore, the method produces high-fidelity models in practical applications, providing a solid foundation for the precise construction and deployment of virtual scenes in DT workshops. Full article
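The CD-2 metric reported above is, by the usual convention, the squared-L2 Chamfer distance between the completed and ground-truth point sets (papers differ on scaling factors); a compact PyTorch sketch:

```python
import torch

def chamfer_l2(a, b):
    """Symmetric squared-L2 Chamfer distance between point sets
    a: (N, 3) and b: (M, 3). Lower is better; reported scales vary."""
    d = torch.cdist(a, b) ** 2            # (N, M) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

a = torch.rand(2048, 3)   # e.g. completed point cloud
b = torch.rand(2048, 3)   # e.g. ground-truth point cloud
print(chamfer_l2(a, b))
```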

20 pages, 9601 KiB  
Article
Design, Simulation and Experimental Validation of a Pneumatic Actuation Method for Automating Manual Pipetting Devices
by Valentin Ciupe, Erwin-Christian Lovasz, Robert Kristof, Melania-Olivia Sandu and Carmen Sticlaru
Machines 2025, 13(5), 389; https://doi.org/10.3390/machines13050389 - 7 May 2025
Viewed by 523
Abstract
This study provides a set of designs, simulations and experiments for developing an actuating method for manual pipettes. The goal is to enable robotic manipulation and automatic pipetting, while using manual pipetting devices. This automation is designed to be used as a flexible alternative tool in small and medium-sized biochemistry laboratories that do not possess proper automated pipetting technology, in order to relieve the lab technicians from the tedious, repetitive and error-prone process of manual pipetting needed for the preparation of biological samples. The selected approach is to use a set of pressure-controlled pneumatic cylinders in order to control the actuation and force of the pipettes’ manual buttons. This paper presents a mechanical design, analysis, pneumatic simulation and functional robotic simulation of the developed device, and a comparison of possible pneumatic solutions is presented to explain the selected actuation method. Remote pneumatic pressure sensing is employed in order to avoid electrical sensors, connectors and wires in the area of the actuators, thus expanding the possibility of working in some electromagnetic-compatible environments and to simplify the connecting and cleaning process of the entire device. A functional simulation is conducted using a combination of software packages: Fluidsim for pneumatic simulation, URSim for robot programming and CoppeliaSim for application integration and visualization. Experimental validation is conducted using off-the-shelf pneumatic components, assembled with 3D-printed parts and mounted onto an existing pneumatic gripper. This complete assembly is attached to an industrial collaborative robot, as an end effector, and a program is written to test and validate the functions of the complete device. The in-process actuators’ working pressure is recorded and analyzed to determine the suitability of the proposed method and pipetting ability. Supplemental digital data are provided in the form of pneumatic circuit diagrams, a robot program, simulation scene and recorded values, to facilitate experimental replication and further development. Full article
(This article belongs to the Section Machine Design and Theory)
