Search Results (18)

Search Parameters:
Keywords = visual perception intelligent sensing

37 pages, 7330 KB  
Article
A LoRa-Based Multi-Node System for Laboratory Safety Monitoring and Intelligent Early-Warning: Towards Multi-Source Sensing and Heterogeneous Networks
by Haiting Qin, Chuanshuang Jin, Ta Zhou and Wenjing Zhou
Sensors 2025, 25(21), 6516; https://doi.org/10.3390/s25216516 - 22 Oct 2025
Viewed by 526
Abstract
Laboratories are complex and dynamic environments where diverse hazards—including toxic gas leakage, volatile solvent combustion, and unexpected fire ignition—pose serious threats to personnel safety and property. Traditional monitoring systems relying on single-type sensors or manual inspections often fail to provide timely warnings or comprehensive hazard perception, resulting in delayed response and potential escalation of incidents. To address these limitations, this study proposes a multi-node laboratory safety monitoring and early warning system integrating multi-source sensing, heterogeneous communication, and cloud–edge collaboration. The system employs a LoRa-based star-topology network to connect distributed sensing and actuation nodes, ensuring long-range, low-power communication. A Raspberry Pi-based module performs real-time facial recognition for intelligent access control, while an OpenMV module conducts lightweight flame detection using color-space blob analysis for early fire identification. These edge-intelligent components are optimized for embedded operation under resource constraints. The cloud–edge–app collaborative architecture supports real-time data visualization, remote control, and adaptive threshold configuration, forming a closed-loop safety management cycle from perception to decision and execution. Experimental results show that the facial recognition module achieves 95.2% accuracy at the optimal threshold, and the flame detection algorithm attains the best balance of precision, recall, and F1-score at an area threshold of around 60. The LoRa network maintains stable communication up to 0.8 km, and the system’s emergency actuation latency ranges from 0.3 s to 5.5 s, meeting real-time safety requirements. Overall, the proposed system significantly enhances early fire warning, multi-source environmental monitoring, and rapid hazard response, demonstrating strong applicability and scalability in modern laboratory safety management.
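
The abstract attributes early fire identification to color-space blob analysis on an OpenMV module. As a rough, non-authoritative sketch of how such a detector is typically written in OpenMV MicroPython: the LAB thresholds below are assumed placeholders rather than the paper's calibrated values, while the area threshold of 60 echoes the figure quoted above.

    import sensor

    # camera setup for the OpenMV board
    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)

    FLAME_LAB = [(40, 100, 20, 90, 20, 90)]  # assumed LAB range for flame-like colors
    AREA_THRESHOLD = 60                      # area cutoff echoing the abstract

    while True:
        img = sensor.snapshot()
        # blob analysis in LAB color space; merge adjacent flame-colored regions
        for blob in img.find_blobs(FLAME_LAB, pixels_threshold=20,
                                   area_threshold=AREA_THRESHOLD, merge=True):
            img.draw_rectangle(blob.rect())  # mark the candidate flame region
            print("flame candidate at", blob.cx(), blob.cy(), "area:", blob.area())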

24 pages, 7007 KB  
Article
M4MLF-YOLO: A Lightweight Semantic Segmentation Framework for Spacecraft Component Recognition
by Wenxin Yi, Zhang Zhang and Liang Chang
Remote Sens. 2025, 17(18), 3144; https://doi.org/10.3390/rs17183144 - 10 Sep 2025
Cited by 1 | Viewed by 743
Abstract
With the continuous advancement of on-orbit services and space intelligent sensing technologies, the efficient and accurate identification of spacecraft components has become increasingly critical. However, complex lighting conditions, background interference, and limited onboard computing resources present significant challenges to existing segmentation algorithms. To address these challenges, this paper proposes a lightweight spacecraft component segmentation framework for on-orbit applications, termed M4MLF-YOLO. Built on the YOLOv5 architecture, the framework follows a refined lightweight design strategy that balances segmentation accuracy and resource consumption in satellite-based scenarios. MobileNetV4 is adopted as the backbone network to minimize computational overhead. Additionally, a Multi-Scale Fourier Adaptive Calibration Module (MFAC) is designed to enhance multi-scale feature modeling and boundary discrimination in the frequency domain. We also introduce a Linear Deformable Convolution (LDConv) to explicitly control the spatial sampling span and distribution of the convolution kernel, thereby linearly adjusting the receptive field coverage to improve feature extraction while reducing computational costs. Furthermore, the efficient C3-Faster module is integrated to enhance channel interaction and feature fusion efficiency. A high-quality spacecraft image dataset, comprising both real and synthetic images, was constructed, covering various backgrounds and component types, including solar panels, antennas, payload instruments, thrusters, and optical payloads. Environment-aware preprocessing and enhancement strategies were applied to improve model robustness. Experimental results demonstrate that M4MLF-YOLO achieves excellent segmentation performance while maintaining low model complexity: precision reaches 95.1% and recall 88.3%, improvements of 1.9% and 3.9% over YOLOv5s, respectively, and mAP@0.5 reaches 93.4%. In terms of lightweight design, the model parameter count and computational complexity were reduced by 36.5% and 24.6%, respectively. These results validate that the proposed method significantly enhances deployment efficiency while preserving segmentation accuracy, showing promising potential for satellite-based visual perception applications.
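
The abstract describes the MFAC module only at a high level, so the following PyTorch sketch illustrates the generic idea of frequency-domain feature calibration rather than the authors' implementation; the class name and layout are assumptions.

    import torch
    import torch.nn as nn

    class FourierCalibration(nn.Module):
        # Rescale feature spectra with learnable per-channel gains (illustrative).
        def __init__(self, channels: int):
            super().__init__()
            # learnable gains for the real and imaginary parts of the spectrum
            self.gain = nn.Parameter(torch.ones(2, channels, 1, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            spec = torch.fft.rfft2(x, norm="ortho")          # to frequency domain
            spec = torch.complex(spec.real * self.gain[0],
                                 spec.imag * self.gain[1])   # channel-wise calibration
            return x + torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")  # residual

    x = torch.randn(1, 64, 80, 80)
    print(FourierCalibration(64)(x).shape)  # torch.Size([1, 64, 80, 80])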

7 pages, 1496 KB  
Proceeding Paper
Multimodal Learning Resources: A Way to Engage Students’ Senses
by Mima Trifonova and Gabriela Kiryakova
Eng. Proc. 2025, 103(1), 13; https://doi.org/10.3390/engproc2025103013 - 12 Aug 2025
Viewed by 859
Abstract
In the modern digital age, traditional teaching methods are increasingly giving way to innovative approaches that actively engage all of students' senses. Integrating diverse media formats into learning stimulates students' visual, auditory, and practical perception through active interaction with learning content. Information technology teachers must be competent in creating quality digital resources using artificial intelligence (AI) tools, and must understand the principles of multimodal learning so that they can critically evaluate generated content and adapt it to specific educational goals. This study presents the importance of multimodal learning resources as an effective tool for increasing learning motivation and achieving better educational outcomes. Specific examples of key requests to AI systems for creating educational resources, including text, images, animations, and interactive content, are presented. In addition, practical guidelines are proposed for formulating effective requests to AI tools, critically evaluating the generated resources, and improving them.
(This article belongs to the Proceedings of The 8th Eurasian Conference on Educational Innovation 2025)

31 pages, 4333 KB  
Review
Research Progress and Development Trend of Visual Detection Methods for Selective Fruit Harvesting Robots
by Wenbo Wang, Chenshuo Li, Yidan Xi, Jinan Gu, Xinzhou Zhang, Man Zhou and Yuchun Peng
Agronomy 2025, 15(8), 1926; https://doi.org/10.3390/agronomy15081926 - 10 Aug 2025
Viewed by 1672
Abstract
The rapid development of artificial intelligence technologies has promoted the emergence of Agriculture 4.0, where the machines participating in agricultural activities are made smart with the capacities of self-sensing, self-decision-making, and self-execution. As representative implementations of Agriculture 4.0, intelligent selective fruit harvesting robots demonstrate significant potential to alleviate labor-intensive demands in modern agriculture, where visual detection serves as the foundational component. However, the accurate detection of fruits remains a challenging issue due to the complex and unstructured nature of fruit orchards. This paper comprehensively reviews the recent progress in visual detection methods for selective fruit harvesting robots, covering cameras, traditional detection based on handcrafted feature methods, detection based on deep learning methods, and tree branch detection methods. Furthermore, the potential challenges and future trends of the visual detection system of selective fruit harvesting robots are critically discussed, facilitating a thorough comprehension of contemporary progress in this research area. The primary objective of this work is to highlight the pivotal role of visual perception in intelligent fruit harvesting robots.
(This article belongs to the Section Precision and Digital Agriculture)

30 pages, 10312 KB  
Review
Ferroelectric-Based Optoelectronic Synapses for Visual Perception: From Materials to Systems
by Yuqing Hu, Yixin Zhu, Xinli Chen and Qing Wan
Nanomaterials 2025, 15(11), 863; https://doi.org/10.3390/nano15110863 - 4 Jun 2025
Viewed by 1736
Abstract
More than 70% of the information humans acquire from the external environment is derived through the visual system, where photosensitive function plays a pivotal role in the biological perception system. With the rapid development of artificial intelligence and robotics technology, achieving human-like visual perception has attracted considerable attention. The neuromorphic visual perception system provides a novel solution for efficient, low-power visual information processing by simulating the working principles of the biological visual system. In recent years, ferroelectric materials have shown broad application prospects in the field of neuromorphic visual perception due to their unique spontaneous polarization characteristics and non-volatile response behavior under external field regulation. They offer distinctive performance advantages, especially in realizing tunable retinal synapses, visual information storage and processing, and dynamic visual sensing. In this review, recent progress in neuromorphic visual perception based on ferroelectric materials is discussed, with detailed elaboration on device structures, material systems, and applications, and the potential development trends and challenges of the field are explored.
(This article belongs to the Special Issue Advanced Nanoscale Materials and (Flexible) Devices)

18 pages, 3976 KB  
Proceeding Paper
Survey on Comprehensive Visual Perception Technology for Future Air–Ground Intelligent Transportation Vehicles in All Scenarios
by Guixin Ren, Fei Chen, Shichun Yang, Fan Zhou and Bin Xu
Eng. Proc. 2024, 80(1), 50; https://doi.org/10.3390/engproc2024080050 - 30 May 2025
Viewed by 705
Abstract
As an essential part of the low-altitude economy, low-altitude carriers are an important cornerstone of its development and a strategically significant emerging industry. However, existing two-dimensional perception schemes for autonomous driving struggle to meet the needs of all-scene perception for low-altitude vehicles, such as global high-precision map construction in three-dimensional space, the recognition of local environmental traffic participants, and the extraction of key visual information under extreme conditions. It is therefore urgent to develop and validate all-scene, general-purpose sensing technology for low-altitude intelligent vehicles. This paper surveys the literature on vision-based perception for urban rail transit and low-altitude flight environments and summarizes the research status and innovations in five areas: environment perception based on visual SLAM, environment perception based on bird's-eye-view (BEV) representations, environment perception based on image enhancement, performance optimization of perception algorithms using cloud computing, and rapid deployment of perception algorithms on edge nodes. Future directions for optimization are also proposed.
(This article belongs to the Proceedings of 2nd International Conference on Green Aviation (ICGA 2024))

27 pages, 2009 KB  
Article
A Dual-Channel and Frequency-Aware Approach for Lightweight Video Instance Segmentation
by Mingzhu Liu, Wei Zhang and Haoran Wei
Sensors 2025, 25(2), 459; https://doi.org/10.3390/s25020459 - 14 Jan 2025
Viewed by 1230
Abstract
Video instance segmentation, a core technology for intelligent sensing in visual perception, plays a key role in automated surveillance, robotics, and smart cities. These scenarios rely on real-time, efficient target tracking for accurate perception and intelligent analysis of dynamic environments. However, traditional video instance segmentation methods suffer from complex models, high computational overhead, and slow segmentation in temporal feature extraction, especially in resource-constrained environments. To address these challenges, a Dual-Channel and Frequency-Aware Approach for Lightweight Video Instance Segmentation (DCFA-LVIS) is proposed in this paper. For feature extraction, a DCEResNet backbone based on a dual-channel feature enhancement mechanism is designed to improve accuracy by strengthening feature extraction and representation. For instance tracking, a dual-frequency perceptual enhancement network is constructed, which uses an independent instance query mechanism to capture temporal information and combines it with a frequency-aware attention mechanism that captures high- and low-frequency instance features on separate attention layers, effectively reducing model complexity, decreasing the number of parameters, and improving segmentation efficiency. Experiments show that the proposed model achieves state-of-the-art segmentation performance with few parameters on the YouTube-VIS dataset, demonstrating its efficiency and practicality. This method significantly enhances the application efficiency and adaptability of visual perception intelligent sensing technology in video data acquisition and processing, providing strong support for its widespread deployment.
(This article belongs to the Section Physical Sensors)
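
Since the dual-frequency attention design is only outlined in the abstract, here is a minimal, assumption-laden PyTorch sketch of the underlying idea: split features into a low-frequency (blurred) band and a high-frequency (residual) band and weight each with its own channel attention. The actual DCFA-LVIS layers are not public in this listing.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualFrequencyAttention(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # independent channel-attention branches for each frequency band
            self.low_att = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                         nn.Conv2d(channels, channels, 1), nn.Sigmoid())
            self.high_att = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                          nn.Conv2d(channels, channels, 1), nn.Sigmoid())

        def forward(self, x):
            # low-frequency band: downsample then upsample to blur out detail
            low = F.interpolate(F.avg_pool2d(x, 2), size=x.shape[-2:],
                                mode="bilinear", align_corners=False)
            high = x - low  # high-frequency band: the residual detail
            return low * self.low_att(low) + high * self.high_att(high)

    x = torch.randn(1, 32, 64, 64)
    print(DualFrequencyAttention(32)(x).shape)  # torch.Size([1, 32, 64, 64])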

11 pages, 2622 KB  
Article
Self-Powered, Flexible, Transparent Tactile Sensor Integrating Sliding and Proximity Sensing
by Kesheng Wang, Shouxin Du, Jiali Kong, Minghui Zheng, Shengtao Li, Enqiang Liang and Xiaoying Zhu
Materials 2025, 18(2), 322; https://doi.org/10.3390/ma18020322 - 13 Jan 2025
Cited by 2 | Viewed by 1250
Abstract
Tactile sensing is currently a research hotspot in the fields of intelligent perception and robotics, and converting external stimuli into electrical signals is a highly effective sensing strategy. Herein, we propose a self-powered, flexible, transparent tactile sensor integrating sliding and proximity sensing (SFTTS). The principles of electrostatic induction and contact electrification are used to achieve a tactile response when external objects approach and slide. Experiments show that the material type, speed, and pressure of the perceived object produce characteristic changes in the electrical signal. In addition, fluorinated ethylene propylene (FEP) is used as the contact electrification layer, and indium tin oxide (ITO) as the electrostatic induction electrode, to achieve transparency and flexibility of the entire device. The sensor's transparency allows it to be integrated with optical cameras for combined tactile and visual perception. This offers great advantages for applications in intelligent perception, and the sensor is expected to be integrated with different types of optical sensors in the future to realize multimodal intelligent perception and sensing technology, contributing to the intelligence and integration of robot sensing.
(This article belongs to the Special Issue Advanced Piezoelectric Nanomaterials: Fundamentals and Applications)
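
As a purely illustrative post-processing idea (the abstract does not describe the signal pipeline), a host system reading the SFTTS voltage could separate the slow electrostatic-induction signature of an approaching object from the faster contact-electrification signature of sliding with a simple amplitude-and-slope heuristic; all thresholds below are invented for the sketch.

    import numpy as np

    def classify_event(voltage: np.ndarray, fs: float) -> str:
        # peak-to-peak amplitude and mean absolute slope (V/s) of the trace
        amplitude = voltage.max() - voltage.min()
        slope = np.abs(np.diff(voltage)).mean() * fs
        if amplitude < 0.05:           # assumed noise floor
            return "idle"
        # assumption: sliding yields larger, faster fluctuations than proximity
        return "sliding" if slope > 1.0 else "proximity"

    t = np.linspace(0.0, 1.0, 1000)
    print(classify_event(0.5 * np.sin(2 * np.pi * 10 * t), fs=1000.0))  # sliding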

30 pages, 10193 KB  
Review
Review on the Application of the Attention Mechanism in Sensing Information Processing for Dynamic Welding Processes
by Jingyuan Xu, Qiang Liu, Yuqing Xu, Runquan Xiao, Zhen Hou and Shanben Chen
J. Manuf. Mater. Process. 2024, 8(1), 22; https://doi.org/10.3390/jmmp8010022 - 28 Jan 2024
Cited by 6 | Viewed by 3700
Abstract
Arc welding is the most common method in traditional welding and constitutes the majority of total welding production. Traditional manual and teach-in welding methods suffer from high labor costs and limited efficiency in mass production. With the advancement of technology, intelligent welding is expected to solve this problem in the future. To achieve an intelligent welding process, modern sensing technology can be employed to effectively emulate the welder's sensory perception and cognitive abilities. Recent studies have advanced the application of sensing technologies, driving progress in intelligent welding. This review is divided into two parts. First, the theory and applications of various sensing technologies (visual, sound, arc, spectral signal, etc.) are summarized. Then, combined with an overview of neural networks and attention mechanisms, the development trends in welding sensing information processing and modeling technology are discussed. Based on existing research results, the feasibility, advantages, and development directions of attention mechanisms in the welding field are analyzed. A brief conclusion and remarks close the review.
(This article belongs to the Special Issue Industry 4.0: Manufacturing and Materials Processing)
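
For orientation, the attention mechanisms this review surveys build on the standard scaled dot-product attention:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V

where Q, K, and V are the query, key, and value matrices and d_k is the key dimension; dividing by the square root of d_k keeps the softmax logits well-scaled as dimensionality grows.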

26 pages, 4994 KB  
Review
Visual Sensing and Depth Perception for Welding Robots and Their Industrial Applications
by Ji Wang, Leijun Li and Peiquan Xu
Sensors 2023, 23(24), 9700; https://doi.org/10.3390/s23249700 - 8 Dec 2023
Cited by 15 | Viewed by 7577
Abstract
With the rapid development of vision sensing, artificial intelligence, and robotics technology, one of the challenges we face is installing more advanced vision sensors on welding robots to achieve intelligent welding manufacturing and obtain high-quality welding components. Depth perception is one of the bottlenecks in the development of welding sensors. This review provides an assessment of active and passive sensing methods for depth perception and classifies and elaborates on the depth perception mechanisms based on monocular vision, binocular vision, and multi-view vision. It explores the principles and means of using deep learning for depth perception in robotic welding processes. Further, the application of welding robot visual perception in different industrial scenarios is summarized. Finally, the problems and countermeasures of welding robot visual perception technology are analyzed, and developments for the future are proposed. This review has analyzed a total of 2662 articles and cited 152 as references. The potential future research topics are suggested to include deep learning for object detection and recognition, transfer deep learning for welding robot adaptation, developing multi-modal sensor fusion, integrating models and hardware, and performing a comprehensive requirement analysis and system evaluation in collaboration with welding experts to design a multi-modal sensor fusion architecture.
(This article belongs to the Special Issue Intelligent Robotics Sensing Control System)
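
For the binocular depth perception the review classifies, the canonical rectified-stereo relation recovers depth from disparity:

    Z = \frac{f B}{d}, \qquad d = x_l - x_r

where f is the focal length, B the baseline between the two cameras, and d the disparity between a point's projections in the left and right images; because Z varies inversely with d, depth resolution degrades rapidly at long range, one reason depth perception remains a bottleneck for welding sensors.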

25 pages, 2532 KB  
Article
Digital Twin Simulation Tools, Spatial Cognition Algorithms, and Multi-Sensor Fusion Technology in Sustainable Urban Governance Networks
by Elvira Nica, Gheorghe H. Popescu, Milos Poliak, Tomas Kliestik and Oana-Matilda Sabie
Mathematics 2023, 11(9), 1981; https://doi.org/10.3390/math11091981 - 22 Apr 2023
Cited by 95 | Viewed by 7297
Abstract
Relevant research has investigated how predictive modeling algorithms, deep-learning-based sensing technologies, and big urban data configure immersive hyperconnected virtual spaces in digital twin cities: digital twin modeling tools, monitoring and sensing technologies, and Internet-of-Things-based decision support systems articulate big-data-driven urban geopolitics. This systematic review aims to inspect the recently published literature on digital twin simulation tools, spatial cognition algorithms, and multi-sensor fusion technology in sustainable urban governance networks. We integrate research on how blockchain-based digital twins, smart infrastructure sensors, and real-time Internet of Things data assist urban computing technologies. The research problems are whether: data-driven smart sustainable urbanism requires visual recognition tools, monitoring and sensing technologies, and simulation-based digital twins; deep-learning-based sensing technologies, spatial cognition algorithms, and environment perception mechanisms configure digital twin cities; and digital twin simulation modeling, deep-learning-based sensing technologies, and urban data fusion optimize Internet-of-Things-based smart city environments. Our analyses particularly prove that virtual navigation tools, geospatial mapping technologies, and Internet of Things connected sensors enable smart urban governance. Digital twin simulation, data visualization tools, and ambient sound recognition software configure sustainable urban governance networks. Virtual simulation algorithms, deep learning neural network architectures, and cyber-physical cognitive systems articulate networked smart cities. Between January and March 2023, a quantitative literature review was carried out across the ProQuest, Scopus, and Web of Science databases, with search terms comprising “sustainable urban governance networks” + “digital twin simulation tools”, “spatial cognition algorithms”, and “multi-sensor fusion technology”. A Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) flow diagram was generated using a Shiny App. AXIS (Appraisal tool for Cross-Sectional Studies), Dedoose, MMAT (Mixed Methods Appraisal Tool), and the Systematic Review Data Repository (SRDR) were used to assess the quality of the identified scholarly sources. Dimensions and VOSviewer were employed for bibliometric mapping through spatial and data layout algorithms. The findings gathered from our analyses clarify that Internet-of-Things-based smart city environments integrate 3D virtual simulation technology, intelligent sensing devices, and digital twin modeling.

18 pages, 5078 KB  
Article
Effects of AR-Based Home Appliance Agents on User’s Perception and Maintenance Behavior
by Takeru Baba, Naoya Isoyama, Hideaki Uchiyama, Nobuchika Sakata and Kiyoshi Kiyokawa
Sensors 2023, 23(8), 4135; https://doi.org/10.3390/s23084135 - 20 Apr 2023
Cited by 3 | Viewed by 2931
Abstract
Maintenance of home appliances can be tedious. Maintenance work can be physically demanding and it is not always easy to know the cause of a malfunctioning appliance. Many users need to motivate themselves to perform maintenance work and consider it ideal for home appliances to be maintenance-free. On the other hand, pets and other living creatures can be taken care of with joy and without much pain, even if they are difficult to take care of. To alleviate the hassle associated with the maintenance of home appliances, we propose an augmented reality (AR) system to superimpose an agent over the home appliance of concern who changes their behavior according to the internal state of the appliance. Taking a refrigerator as an example, we verify whether such AR agent visualization motivates users to perform maintenance work and reduces the associated discomfort. We designed a cartoon-like agent and implemented a prototype system using a HoloLens 2, which can switch between several animations depending on the internal state of the refrigerator. Using the prototype system, a Wizard of Oz user study comparing three conditions was conducted. We compared the proposed method (Animacy condition), an additional behavior method (Intelligence condition), and a text-based method as a baseline for presenting the refrigerator state. In the Intelligence condition, the agent looked at the participants from time to time as if it was aware of them and exhibited help-seeking behavior only when it was considered that they could take a short break. The results show that both the Animacy and Intelligence conditions induced animacy perception and a sense of intimacy. It was also evident that the agent visualization made the participants feel more pleasant. On the other hand, the sense of discomfort was not reduced by the agent visualization and the Intelligence condition did not improve the perceived intelligence or the sense of coercion further compared to the Animacy condition.
(This article belongs to the Special Issue Human Computer Interaction in Emerging Technologies)

16 pages, 7771 KB  
Article
AI-Based Environmental Color System in Achieving Sustainable Urban Development
by Pohsun Wang, Wu Song, Junling Zhou, Yongsheng Tan and Hongkong Wang
Systems 2023, 11(3), 135; https://doi.org/10.3390/systems11030135 - 2 Mar 2023
Cited by 20 | Viewed by 4071
Abstract
Confronting the age of artificial intelligence, exploring art through technology has become one direction of interdisciplinary development. Artificial intelligence technology not only advances sustainability on a technical level; it can also be leveraged to study the visual perception of the living environment. People interpret environmental features chiefly through their eyes, and intuitive eye-tracking can provide effective data for managing the environment and planning color to enhance the image of cities, thereby contributing to environmental sustainability. This research investigates the visual responses of people viewing the historic city of Macau through an eye movement experiment to understand how the color characteristics of the physical environment are perceived. The research reveals that the buildings and plantings in the historic district of Macau are the most visible objects in the environment, while the smaller-scale St. Dominic’s Square, Company of Jesus Square, and St. Augustine’s Square, with their sense of spatial extension, have also become iconic environmental landscapes that draw visual attention and guide the direction of travel. The overall impressions of the Historic Centre of Macau, as expressed by the participants after the eye movement experiment, were mainly described as “multiculturalism”, “architectural style”, “traditional architecture”, “color scheme”, and “garden planting”. The 60 colors representing the urban color of Macau are organized around these deep impressions of the environment. For future work, the 60 colors can be applied through design practice to create color expressions that fit the local character and thereby enhance the overall visual image of the city.

18 pages, 3430 KB  
Article
RoadFormer: Road Extraction Using a Swin Transformer Combined with a Spatial and Channel Separable Convolution
by Xiangzeng Liu, Ziyao Wang, Jinting Wan, Juli Zhang, Yue Xi, Ruyi Liu and Qiguang Miao
Remote Sens. 2023, 15(4), 1049; https://doi.org/10.3390/rs15041049 - 15 Feb 2023
Cited by 38 | Viewed by 4913
Abstract
The accurate detection and extraction of roads using remote sensing technology are crucial to the development of the transportation industry and to intelligent perception tasks. Recently, given the advantages of CNNs in feature extraction, road extraction methods based on them have been proposed in succession. However, due to the limitation of kernel size, they are less effective at capturing long-range information and global context, which are crucial for road targets that are highly structured and distributed over long distances. To deal with this problem, a novel model named RoadFormer, with a Swin Transformer as the backbone, is developed in this paper. Firstly, to extract long-range information effectively, a Swin Transformer multi-scale encoder is adopted in our model. Secondly, to enhance the feature representation capability of the model, we design an innovative bottleneck module in which a spatial and channel separable convolution is employed to obtain fine-grained and global features, and a dilated block is connected after the spatial convolution module to capture more complete road structures. Finally, a lightweight decoder consisting of transposed convolutions and skip connections generates the final extraction results. Extensive experimental results confirm the advantages of RoadFormer on the Deepglobe and Massachusetts datasets. The comparative results of visualization and quantification demonstrate that our model outperforms comparable methods.
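
The "spatial and channel separable convolution" named in the abstract follows the general depthwise-plus-pointwise pattern; the minimal PyTorch sketch below shows that pattern with the dilation option the bottleneck's dilated block suggests. RoadFormer's exact wiring is not specified here, so treat this as a generic illustration.

    import torch
    import torch.nn as nn

    class SeparableConv(nn.Module):
        def __init__(self, in_ch: int, out_ch: int, dilation: int = 1):
            super().__init__()
            # spatial step: one 3x3 filter per channel (groups=in_ch)
            self.spatial = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                     dilation=dilation, groups=in_ch)
            # channel step: a 1x1 convolution mixes information across channels
            self.channel = nn.Conv2d(in_ch, out_ch, 1)

        def forward(self, x):
            return self.channel(self.spatial(x))

    x = torch.randn(1, 64, 128, 128)
    print(SeparableConv(64, 128, dilation=2)(x).shape)  # torch.Size([1, 128, 128, 128])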

32 pages, 1434 KB  
Review
Remote Big Data Management Tools, Sensing and Computing Technologies, and Visual Perception and Environment Mapping Algorithms in the Internet of Robotic Things
by Mihai Andronie, George Lăzăroiu, Oana Ludmila Karabolevski, Roxana Ștefănescu, Iulian Hurloiu, Adrian Dijmărescu and Irina Dijmărescu
Electronics 2023, 12(1), 22; https://doi.org/10.3390/electronics12010022 - 21 Dec 2022
Cited by 125 | Viewed by 8149
Abstract
The purpose of our systematic review was to inspect the recently published research on Internet of Robotic Things (IoRT) and harmonize the assimilations it articulates on remote big data management tools, sensing and computing technologies, and visual perception and environment mapping algorithms. The research problems were whether robotic manufacturing processes and industrial wireless sensor networks shape IoRT and lead to improved product quality by use of remote big data management tools, whether IoRT devices communicate autonomously regarding event modeling and forecasting by leveraging machine learning and clustering algorithms, sensing and computing technologies, and image processing tools, and whether smart connected objects, situational awareness algorithms, and edge computing technologies configure IoRT systems and cloud robotics in relation to distributed task coordination through visual perception and environment mapping algorithms. A Shiny app was harnessed for Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) guidelines to configure the flow diagram integrating evidence-based gathered and processed data (the search outcomes and screening procedures). A quantitative literature review of ProQuest, Scopus, and the Web of Science databases was carried out throughout June and October 2022, with search terms including “Internet of Robotic Things” + “remote big data management tools”, “sensing and computing technologies”, and “visual perception and environment mapping algorithms”. Artificial intelligence and intelligent workflows by use of AMSTAR (Assessing the Methodological Quality of Systematic Reviews), Dedoose, DistillerSR, and SRDR (Systematic Review Data Repository) have been deployed as data extraction tools for literature collection, screening, and evaluation, for document flow monitoring, for inspecting qualitative and mixed methods research, and for establishing robust outcomes and correlations. For bibliometric mapping by use of data visualization, Dimensions AI was leveraged and with regards to layout algorithms, VOSviewer was harnessed.
