Search Results (1,032)

Search Parameters:
Keywords = mobile cameras

21 pages, 10439 KiB  
Article
Camera-Based Vital Sign Estimation Techniques and Mobile App Development
by Tae Wuk Bae, Young Choon Kim, In Ho Sohng and Kee Koo Kwon
Appl. Sci. 2025, 15(15), 8509; https://doi.org/10.3390/app15158509 (registering DOI) - 31 Jul 2025
Viewed by 73
Abstract
In this paper, we propose noncontact heart rate (HR), oxygen saturation (SpO2), and respiratory rate (RR) detection methods using a smartphone camera. A remote PPG (rPPG) signal is first obtained using color-difference signal amplification and the plane-orthogonal-to-skin method; the HR frequency is then detected from its power spectral density (PSD) after filtering. Additionally, SpO2 is estimated using the HR frequency and the absorption ratio of the G and B color channels, based on oxyhemoglobin absorption and reflectance theory. Finally, the respiratory frequency is detected from the PSD of the rPPG signal through respiratory-band filtering. For image sequences recorded under various imaging conditions, the proposed method demonstrated superior HR detection accuracy compared to existing methods. The confidence intervals for HR and SpO2 detection were analyzed using Bland–Altman plots, and the proposed RR detection method was also verified to be reliable. Full article
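The PSD-peak step described in this abstract can be sketched with a plain periodogram on a synthetic pulse. This is a minimal illustration, not the authors' pipeline: the color-difference amplification and plane-orthogonal-to-skin stages are omitted, and the 0.7–4.0 Hz heart-rate band is an assumed choice.

```python
import numpy as np

def estimate_hr_from_rppg(signal, fs, hr_band=(0.7, 4.0)):
    """Estimate heart rate (bpm) as the dominant in-band peak of a periodogram."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                      # remove the DC component
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2         # simple periodogram as a PSD proxy
    band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
    peak = freqs[band][np.argmax(psd[band])]
    return 60.0 * peak                          # Hz -> beats per minute

# Synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps for 20 s, with noise
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print(round(estimate_hr_from_rppg(rppg, fs)))
```

At 20 s of video the frequency bins are 0.05 Hz apart, so the HR resolution of this sketch is about 3 bpm; longer windows narrow the bins.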

13 pages, 2435 KiB  
Article
Preliminary Evaluation of Spherical Over-Refraction Measurement Using a Smartphone
by Rosa Maria Salmeron-Campillo, Gines Martinez-Ros, Jose Angel Diaz-Guirado, Tania Orenes-Nicolas, Mateusz Jaskulski and Norberto Lopez-Gil
Photonics 2025, 12(8), 772; https://doi.org/10.3390/photonics12080772 (registering DOI) - 31 Jul 2025
Viewed by 162
Abstract
Background: Smartphones offer a promising tool for monitoring refractive error, especially in underserved areas where there is a shortage of eye-care professionals. We propose a novel method for measuring spherical over-refraction using smartphones. Methods: Specific levels of myopia were induced in 30 young participants (22 ± 5 years) using positive spherical trial lenses ranging from 0.00 D to 1.50 D in 0.25 D increments. The induced over-refraction was compared with measurements obtained using a non-commercial mobile application, based on the face–device distance measured with the front camera while the subject performed a resolution task. Results: Calibrated mobile app over-refraction results showed that 89.5% of the estimates had an error ≤ 0.25 D, and none exceeded 0.50 D. Bland–Altman analysis revealed no significant bias between app and clinical over-refraction, with a mean difference of 0.00 D ± 0.44 D (p = 0.981), indicating high accuracy and precision of the method. Conclusions: The methodology shows high accuracy and precision in measuring spherical over-refraction with only a smartphone, allowing self-monitoring of potential myopia progression. Full article
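The Bland–Altman analysis cited above reduces to a bias (mean paired difference) and 95% limits of agreement. A minimal sketch, using hypothetical dioptre readings rather than the study's data:

```python
import math

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical over-refraction readings (dioptres): app vs. clinical
app      = [0.25, 0.50, 0.75, 1.00, 1.25, 1.50]
clinical = [0.25, 0.50, 1.00, 1.00, 1.00, 1.50]
bias, lo, hi = bland_altman(app, clinical)
print(f"bias={bias:+.2f} D, LoA=[{lo:+.2f}, {hi:+.2f}] D")
```

With real data, pairs falling outside the limits of agreement flag measurements that disagree more than expected.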

28 pages, 4007 KiB  
Article
Voting-Based Classification Approach for Date Palm Health Detection Using UAV Camera Images: Vision and Learning
by Abdallah Guettaf Temam, Mohamed Nadour, Lakhmissi Cherroun, Ahmed Hafaifa, Giovanni Angiulli and Fabio La Foresta
Drones 2025, 9(8), 534; https://doi.org/10.3390/drones9080534 - 29 Jul 2025
Viewed by 206
Abstract
In this study, we introduce the application of deep learning (DL) models, specifically convolutional neural networks (CNNs), for detecting the health status of date palm leaves using images captured by an unmanned aerial vehicle (UAV). The UAV's motion is modeled using the Newton–Euler method to ensure flight stability and accurate image acquisition. These deep learning models are implemented within a voting-based classification (VBC) system that combines multiple CNN architectures, including MobileNet, a handcrafted CNN, VGG16, and VGG19, to enhance classification accuracy and robustness. The classifiers independently generate predictions, and a voting mechanism determines the final classification. This hybridization of image-based visual servoing (IBVS) and classifiers adapts immediately to changing conditions, providing straightforward, smooth flight as well as vision-based classification. The dataset used in this study was collected using a dual-camera UAV, which captures high-resolution images to detect pests in date palm leaves. After applying the proposed classification strategy, the implemented voting method achieved an impressive accuracy of 99.16% on the test set for detecting health conditions in date palm leaves, surpassing the individual classifiers. The obtained results are discussed and compared to show the effectiveness of this classification technique. Full article
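The voting step can be sketched as a simple majority over per-model labels. The labels below are illustrative stand-ins, and the paper does not specify its exact tie-breaking rule:

```python
from collections import Counter

def majority_vote(predictions):
    """Final label = most common prediction; ties fall to the first label seen."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-image outputs from the four CNNs named in the abstract
per_model = {"MobileNet": "healthy", "CustomCNN": "infested",
             "VGG16": "healthy", "VGG19": "healthy"}
print(majority_vote(list(per_model.values())))  # healthy (3 votes to 1)
```

Replacing the hard count with confidence-weighted sums is a common variant when per-model probabilities are available.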

12 pages, 2500 KiB  
Article
Deep Learning-Based Optical Camera Communication with a 2D MIMO-OOK Scheme for IoT Networks
by Huy Nguyen and Yeng Min Jang
Electronics 2025, 14(15), 3011; https://doi.org/10.3390/electronics14153011 - 29 Jul 2025
Viewed by 287
Abstract
Radio frequency (RF)-based wireless systems are broadly used in communication systems such as mobile networks, satellite links, and monitoring applications. These systems offer outstanding advantages over wired systems, particularly in terms of ease of installation. However, concerns about possible health effects of high-frequency RF transmission have prompted the search for safer alternatives. Using the visible light spectrum is one promising approach; three cutting-edge technologies are emerging in this regard: Optical Camera Communication (OCC), Light Fidelity (Li-Fi), and Visible Light Communication (VLC). In this paper, we propose a Multiple-Input Multiple-Output (MIMO) modulation technology for Internet of Things (IoT) applications, utilizing an LED array and time-domain on-off keying (OOK). The proposed system is compatible with both rolling shutter and global shutter cameras, including commercially available models such as CCTV cameras, webcams, and smart cameras commonly deployed in buildings and industrial environments. Despite the compact size of the LED array, we demonstrate that, by optimizing parameters such as exposure time, camera focal length, and channel coding, our system can achieve up to 20 communication links over a 20 m distance with a low bit error rate. Full article
(This article belongs to the Special Issue Advances in Optical Communications and Optical Networks)
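At its core, a 2D MIMO-OOK symbol maps a bit string onto an on/off LED grid, one symbol per camera frame. The sketch below assumes ideal detection and an arbitrary 4x4 array; channel coding, exposure effects, and rolling-shutter decoding are omitted:

```python
def encode_frame(bits, rows, cols):
    """Map a bit string onto an on/off LED grid (row-major), one symbol per frame."""
    assert len(bits) == rows * cols, "payload must fill the array exactly"
    return [[int(bits[r * cols + c]) for c in range(cols)] for r in range(rows)]

def decode_frame(frame):
    """Recover the bit string by thresholding each detected LED region."""
    return "".join("1" if led else "0" for row in frame for led in row)

payload = "1011001110100101"              # 16 bits for the assumed 4x4 LED array
frame = encode_frame(payload, 4, 4)
assert decode_frame(frame) == payload     # lossless round trip under ideal detection
```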

17 pages, 4667 KiB  
Article
Workspace Analysis and Dynamic Modeling of 6-DoF Multi-Pattern Cable-Driven Hybrid Mobile Robot
by Jiahao Song, Meiqi Wang, Jiabao Wu, Qing Liu and Shuofei Yang
Machines 2025, 13(8), 659; https://doi.org/10.3390/machines13080659 - 28 Jul 2025
Viewed by 241
Abstract
A cable-driven hybrid mobile robot is a kind of robot consisting of two modules connected in series, which uses multiple parallel cables to drive the moving platforms. Cable-driven robots benefit from a large workspace, low inertia, and excellent dynamic performance owing to the light weight and high extensibility of cables, making them ideal for a wide range of applications, such as sports cameras, large radio telescopes, and planetary exploration. Considering the fundamental dynamic constraint imposed by the unilateral constraint of cables, the workspace and dynamic modeling of cable-driven robots require specialized study. In this paper, a novel cable-driven hybrid robot with two motion patterns is designed, and an arc-intersection method is applied to solve the robot's workspace in both motion patterns. Based on the workspace analysis, a dynamic model of the cable-driven hybrid robot is established, laying the foundation for subsequent trajectory planning. Simulation results in MATLAB R2021a demonstrate that the cable-driven hybrid robot has a large workspace in both motion patterns and is capable of meeting various motion requirements, indicating promising application potential. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

17 pages, 13125 KiB  
Article
Evaluating the Accuracy and Repeatability of Mobile 3D Imaging Applications for Breast Phantom Reconstruction
by Elena Botti, Bart Jansen, Felipe Ballen-Moreno, Ayush Kapila and Redona Brahimetaj
Sensors 2025, 25(15), 4596; https://doi.org/10.3390/s25154596 - 24 Jul 2025
Viewed by 419
Abstract
Three-dimensional imaging technologies are increasingly used in breast reconstructive and plastic surgery due to their potential for efficient and accurate preoperative assessment and planning. This study systematically evaluates the accuracy and consistency of six commercially available 3D scanning applications (apps)—Structure Sensor, 3D Scanner App, Heges, Polycam, SureScan, and Kiri—in reconstructing the female torso. To avoid variability introduced by human subjects, a silicone breast mannequin model was scanned, with fiducial markers placed at known anatomical landmarks. Manual distance measurements were obtained with calipers by two independent evaluators and compared to digital measurements extracted from the 3D reconstructions in Blender software. Each scan was repeated six times per application to ensure reliability. SureScan demonstrated the lowest mean error (2.9 mm), followed by Structure Sensor (3.0 mm), Heges (3.6 mm), 3D Scanner App (4.4 mm), Kiri (5.0 mm), and Polycam (21.4 mm), which showed the highest error and variability. Even the app using an external depth sensor (Structure Sensor) showed no statistically significant accuracy advantage over those using only the iPad's built-in camera (except for Polycam), underscoring that software, not hardware alone, is the primary driver of performance. This work provides practical insights for selecting mobile 3D scanning tools in clinical workflows and highlights key limitations, such as scaling errors and alignment artifacts. Future work should include patient-based validation and explore deep learning to enhance reconstruction quality. Ultimately, this study lays the foundation for more accessible and cost-effective 3D imaging in surgical practice, showing that smartphone-based tools can produce clinically useful scans. Full article
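The per-app accuracy figures above are essentially mean absolute deviations of digital distances from the caliper reference. A sketch with hypothetical landmark measurements (values are illustrative, not the study's data):

```python
def mean_abs_error(reference_mm, measured_mm):
    """Mean absolute deviation of digital measurements from the caliper reference (mm)."""
    return sum(abs(r - m) for r, m in zip(reference_mm, measured_mm)) / len(reference_mm)

# Hypothetical landmark distances: caliper reference vs. one app's reconstruction
caliper = [120.0, 85.5, 60.2, 142.8]
scanned = [123.0, 84.0, 61.2, 145.3]
print(round(mean_abs_error(caliper, scanned), 2))
```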
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)

9 pages, 2459 KiB  
Proceeding Paper
Beyond the Red and Green: Exploring the Capabilities of Smart Traffic Lights in Malaysia
by Mohd Fairuz Muhamad@Mamat, Mohamad Nizam Mustafa, Lee Choon Siang, Amir Izzuddin Hasani Habib and Azimah Mohd Hamdan
Eng. Proc. 2025, 102(1), 4; https://doi.org/10.3390/engproc2025102004 - 22 Jul 2025
Viewed by 260
Abstract
Traffic congestion poses a significant challenge to modern urban environments, impacting both driver satisfaction and road safety. This paper investigates the effectiveness of a smart traffic light system (STL), a solution developed under the Intelligent Transportation System (ITS) initiative by the Ministry of Works Malaysia, to address these issues in Malaysia. The system integrates a network of sensors, AI-enabled cameras, and Automatic Number Plate Recognition (ANPR) technology to gather real-time data on traffic volume and vehicle classification at congested intersections. These data are used to dynamically adjust traffic light timings, prioritizing traffic flow on heavily congested roads while maintaining safety standards. To evaluate the system's performance, a comprehensive study was conducted at a selected intersection. Traffic patterns were automatically analyzed using camera systems, and the performance of the STL was compared to that of traditional traffic signal systems. The average travel time from the start to the end intersection was measured and compared. Preliminary findings indicate that the STL significantly reduces travel times and improves overall traffic flow at the intersection, with average travel time reductions ranging from 7.1% to 28.6%, depending on site-specific factors. While further research is necessary to quantify the full extent of the system's impact, these initial results demonstrate the promising potential of STL technology to enhance urban mobility and enable more efficient and safer roadways by moving beyond traditional traffic signal functionalities. Full article

27 pages, 6578 KiB  
Article
Evaluating Neural Radiance Fields for ADA-Compliant Sidewalk Assessments: A Comparative Study with LiDAR and Manual Methods
by Hang Du, Shuaizhou Wang, Linlin Zhang, Mark Amo-Boateng and Yaw Adu-Gyamfi
Infrastructures 2025, 10(8), 191; https://doi.org/10.3390/infrastructures10080191 - 22 Jul 2025
Viewed by 332
Abstract
An accurate assessment of sidewalk conditions is critical for ensuring compliance with the Americans with Disabilities Act (ADA), particularly to safeguard mobility for wheelchair users. This paper presents a novel 3D reconstruction framework based on neural radiance fields (NeRF), which utilizes monocular video input from consumer-grade cameras to generate high-fidelity 3D models of sidewalk environments. The framework enables automatic extraction of ADA-relevant geometric features, including the running slope, the cross slope, and vertical displacements, facilitating an efficient and scalable compliance assessment process. A comparative study is conducted across three surveying methods—manual measurements, LiDAR scanning, and the proposed NeRF-based approach—evaluated on four sidewalks and one curb ramp. Each method was assessed based on accuracy, cost, time, level of automation, and scalability. The NeRF-based approach achieved high agreement with LiDAR-derived ground truth, delivering an F1 score of 96.52%, a precision of 96.74%, and a recall of 96.34% for ADA compliance classification. These results underscore the potential of NeRF to serve as a cost-effective, automated alternative to traditional and LiDAR-based methods, with sufficient precision for widespread deployment in municipal sidewalk audits. Full article
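The extracted slope features reduce to rise-over-run percentages checked against ADA limits. The sketch below uses commonly cited thresholds (5% running slope, 2% cross slope); treat both limits as assumptions and verify them against the current standard:

```python
def slope_percent(rise_m, run_m):
    """Grade as a percentage: vertical rise over horizontal run."""
    return 100.0 * rise_m / run_m

def ada_check(running_pct, cross_pct, max_running=5.0, max_cross=2.0):
    """Check a sidewalk segment against assumed ADA limits (5% running slope and
    2% cross slope are commonly cited figures, not taken from this paper)."""
    return running_pct <= max_running and cross_pct <= max_cross

running = slope_percent(0.04, 1.0)   # 4 cm rise over 1 m of run -> 4.0%
cross = slope_percent(0.03, 1.0)     # 3.0%
print(ada_check(running, cross))     # non-compliant: cross slope exceeds the cap
```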

21 pages, 2941 KiB  
Article
Dynamic Proxemic Model for Human–Robot Interactions Using the Golden Ratio
by Tomáš Spurný, Ján Babjak, Zdenko Bobovský and Aleš Vysocký
Appl. Sci. 2025, 15(15), 8130; https://doi.org/10.3390/app15158130 - 22 Jul 2025
Viewed by 237
Abstract
This paper presents a novel approach to determining dynamic safety and comfort zones in human–robot interactions (HRIs), with a focus on service robots operating in dynamic environments with people. The proposed proxemic model leverages a golden ratio-based comfort zone distribution and ISO safety standards to define adaptive proxemic boundaries for robots around humans. Unlike traditional fixed-threshold approaches, this method proposes a gradual and context-sensitive modulation of robot behaviour based on human position, orientation, and relative velocity. The system was implemented on an NVIDIA Jetson Xavier NX platform using a ZED 2i stereo depth camera (Stereolabs, New York, NY, USA) and tested on two mobile robotic platforms: the quadruped Go1 (Unitree, Hangzhou, China) and the wheeled Scout Mini (AgileX, Dongguan, China). The proposed proxemic model was initially verified through experimental comfort validation in two simple interaction scenarios, and subjective feedback was collected from participants using a modified Godspeed Questionnaire Series. The results show that participants felt comfortable during the experiments with the robots; this acceptance provides an initial basis for further research on the methodology. The proposed solution also facilitates integration into existing navigation frameworks and opens pathways towards socially aware robotic systems. Full article
(This article belongs to the Special Issue Intelligent Robotics: Design and Applications)
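One plausible reading of a golden ratio-based zone distribution is a geometric progression of boundary radii. The inner radius and zone count below are assumptions for illustration, not the paper's calibrated model:

```python
PHI = (1 + 5 ** 0.5) / 2   # golden ratio, ~1.618

def comfort_zones(inner=0.45, count=4):
    """Illustrative proxemic boundary radii (metres) growing geometrically by
    the golden ratio from an assumed innermost safety radius."""
    return [round(inner * PHI ** k, 2) for k in range(count)]

print(comfort_zones())  # consecutive boundaries differ by a factor of ~1.618
```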

19 pages, 2016 KiB  
Article
A Robust and Energy-Efficient Control Policy for Autonomous Vehicles with Auxiliary Tasks
by Yabin Xu, Chenglin Yang and Xiaoxi Gong
Electronics 2025, 14(15), 2919; https://doi.org/10.3390/electronics14152919 - 22 Jul 2025
Viewed by 253
Abstract
We present a lightweight autonomous driving method that uses a low-cost camera, a simple end-to-end convolutional neural network architecture, and smoother driving techniques to achieve energy-efficient vehicle control. Instead of directly constructing a mapping from raw sensory input to actions, our network takes the frame-to-frame visual difference as one of the crucial inputs to produce control commands, including the steering angle and the speed value at each time step. This choice of input highlights the most relevant parts of raw image pairs, reducing the unnecessary visual complexity caused by varying road and weather conditions. Additionally, our network predicts the vehicle's upcoming control commands by incorporating a view synthesis component into the model. The view synthesis, as an auxiliary task, aims to infer a novel future view from the historical environment transformation cue. By combining both the current and upcoming control commands, our framework achieves driving smoothness, which is highly associated with energy efficiency. We perform experiments on benchmarks to evaluate reliability under different driving conditions in terms of control accuracy, and we deploy a mobile robot outdoors to evaluate the power consumption of different control policies. The quantitative results demonstrate that our method can achieve energy efficiency in the real world. Full article
(This article belongs to the Special Issue Simultaneous Localization and Mapping (SLAM) of Mobile Robots)
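The frame-to-frame visual difference input can be sketched as a per-pixel absolute difference between consecutive grayscale frames, which cancels static background and highlights motion:

```python
def frame_difference(prev, curr):
    """Per-pixel absolute difference between consecutive grayscale frames:
    static background cancels out, moving regions remain."""
    return [[abs(c - p) for p, c in zip(prow, crow)] for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10], [10, 200, 10]]   # toy 2x3 grayscale frames
curr = [[10, 10, 10], [10,  50, 90]]
print(frame_difference(prev, curr))    # [[0, 0, 0], [0, 150, 80]]
```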

30 pages, 10173 KiB  
Article
Integrated Robust Optimization for Lightweight Transformer Models in Low-Resource Scenarios
by Hui Huang, Hengyu Zhang, Yusen Wang, Haibin Liu, Xiaojie Chen, Yiling Chen and Yuan Liang
Symmetry 2025, 17(7), 1162; https://doi.org/10.3390/sym17071162 - 21 Jul 2025
Viewed by 360
Abstract
With the rapid proliferation of artificial intelligence (AI) applications, an increasing number of edge devices—such as smartphones, cameras, and embedded controllers—are being tasked with performing AI-based inference. Due to constraints in storage capacity, computational power, and network connectivity, these devices are often categorized as operating in resource-constrained environments. In such scenarios, deploying powerful Transformer-based models like ChatGPT and Vision Transformers is highly impractical because of their large parameter sizes and intensive computational requirements. While lightweight Transformer models, such as MobileViT, offer a promising solution to meet storage and computational limitations, their robustness remains insufficient. This poses a significant security risk for AI applications, particularly in critical edge environments. To address this challenge, our research focuses on enhancing the robustness of lightweight Transformer models under resource-constrained conditions. First, we propose a comprehensive robustness evaluation framework tailored for lightweight Transformer inference. This framework assesses model robustness across three key dimensions: noise robustness, distributional robustness, and adversarial robustness. It further investigates how model size and hardware limitations affect robustness, thereby providing valuable insights for robustness-aware model design. Second, we introduce a novel adversarial robustness enhancement strategy that integrates lightweight modeling techniques. This approach leverages methods such as gradient clipping and layer-wise unfreezing, as well as decision boundary optimization techniques like TRADES and SMART. Together, these strategies effectively address challenges related to training instability and decision boundary smoothness, significantly improving model robustness. 
Finally, we deploy the robust lightweight Transformer models in real-world resource-constrained environments and empirically validate their inference robustness. The results confirm the effectiveness of our proposed methods in enhancing the robustness and reliability of lightweight Transformers for edge AI applications. Full article
(This article belongs to the Section Mathematics)
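One of the stabilizers named above, gradient clipping, is commonly implemented as jointly rescaling all gradients to a maximum global L2 norm. A minimal sketch (the threshold of 1.0 is an arbitrary example):

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Jointly rescale gradients so their global L2 norm is at most max_norm."""
    total = math.sqrt(sum(g * g for g in grads))
    if total <= max_norm:
        return grads
    scale = max_norm / total
    return [g * scale for g in grads]

clipped = clip_by_global_norm([3.0, 4.0], 1.0)  # original norm 5.0, over the cap
print(clipped)                                  # direction kept, norm scaled to 1.0
```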

23 pages, 5173 KiB  
Article
Improvement of Cooperative Localization for Heterogeneous Mobile Robots
by Efe Oğuzhan Karcı, Ahmet Mustafa Kangal and Sinan Öncü
Drones 2025, 9(7), 507; https://doi.org/10.3390/drones9070507 - 19 Jul 2025
Viewed by 346
Abstract
This research focuses on enhancing cooperative localization for heterogeneous mobile robots composed of a quadcopter and an unmanned ground vehicle. The study employs sensor fusion techniques, particularly the Extended Kalman Filter, to fuse data from various sensors, including GPS receivers, IMUs, and cameras. By integrating these sensors and optimizing fusion strategies, the research aims to improve the precision and reliability of cooperative localization in complex and dynamic environments. The primary objective is to develop a practical framework for cooperative localization that addresses the challenges posed by the differences in mobility and sensing capabilities among heterogeneous robots. Sensor fusion is used to compensate for the limitations of individual sensors, providing more accurate and robust localization results. Moreover, a comparative analysis of different sensor combinations and fusion strategies helps to identify the optimal configuration for each robot. This research focuses on the improvement of cooperative localization, path planning, and collaborative tasks for heterogeneous robots. The findings have broad applications in fields such as autonomous transportation, agricultural operations, and disaster response, where the cooperation of diverse robotic platforms is crucial for mission success. Full article
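In its simplest scalar form, the measurement update at the heart of the (Extended) Kalman Filter blends a prediction with a sensor reading by inverse-variance weighting. The position and variance values below are hypothetical:

```python
def fuse(est, var_est, meas, var_meas):
    """One scalar Kalman measurement update: blend a predicted state with a
    sensor reading, weighting each by its inverse variance."""
    k = var_est / (var_est + var_meas)   # Kalman gain
    fused = est + k * (meas - est)
    return fused, (1 - k) * var_est      # fused estimate and reduced variance

# GPS position prior fused with a camera-derived fix (hypothetical variances)
pos, var = fuse(10.0, 4.0, 12.0, 4.0)
print(pos, var)  # 11.0 2.0 -- equally trusted sensors meet in the middle
```

The full EKF extends this to vector states with linearized measurement models, but the gain-weighted blend is the same idea.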

17 pages, 2840 KiB  
Article
A Digital Twin System for the Sitting-to-Standing Motion of the Knee Joint
by Tian Liu, Liangzheng Sun, Chaoyue Sun, Zhijie Chen, Jian Li and Peng Su
Electronics 2025, 14(14), 2867; https://doi.org/10.3390/electronics14142867 - 18 Jul 2025
Viewed by 229
Abstract
(1) Background: A severe decline in knee joint function significantly affects the mobility of the elderly, making it a key concern in the field of geriatric health. To alleviate the pressure on the knee joints of the elderly during daily movements such as sitting and standing, effective biomechanical solutions are required. (2) Methods: In this study, a biomechanical framework was established based on mechanical analysis to derive the transfer relationship between the ground reaction force and the knee joint moment. Experiments were designed to collect knee joint data on the elderly during the sit-to-stand process. Meanwhile, magnetic resonance imaging (MRI) images were processed through a medical imaging control system to construct a detailed digital 3D knee joint model. A finite element analysis was used to verify the model to ensure the accuracy of its structure and mechanical properties. An improved radial basis function was used to fit the pressure during the entire sit-to-stand conversion process to reduce the computational workload, with an error of less than 5%. In addition, a small-target human key point recognition network was developed to analyze the image sequences captured by the camera. The knee joint angle and the knee joint pressure distribution during the sit-to-stand conversion process were mapped to a three-dimensional interactive platform to form a digital twin system. (3) Results: The system can effectively capture the biomechanical behavior of the knee joint during movement and shows high accuracy in joint angle tracking and structure simulation. (4) Conclusions: This study provides an accurate and comprehensive method for analyzing the biomechanical characteristics of the knee joint during the movement of the elderly, laying a solid foundation for clinical rehabilitation research and the design of assistive devices in the field of rehabilitation medicine. Full article
(This article belongs to the Section Artificial Intelligence)

24 pages, 2281 KiB  
Article
Multilayer Network Modeling for Brand Knowledge Discovery: Integrating TF-IDF and TextRank in Heterogeneous Semantic Space
by Peng Xu, Rixu Zang, Zongshui Wang and Zhuo Sun
Information 2025, 16(7), 614; https://doi.org/10.3390/info16070614 - 17 Jul 2025
Viewed by 222
Abstract
In the era of homogenized competition, brand knowledge has become a critical factor that influences consumer purchasing decisions. However, traditional single-layer network models fail to capture the multi-dimensional semantic relationships embedded in brand-related textual data. To address this gap, this study proposes a BKMN framework integrating the TF-IDF and TextRank algorithms for comprehensive brand knowledge discovery. By analyzing 19,875 consumer reviews of a mobile phone brand from the JD website, we constructed a tri-layer network comprising TF-IDF-derived keywords, TextRank-derived keywords, and their overlapping nodes. The model incorporates co-occurrence matrices and centrality metrics (degree, closeness, betweenness, eigenvector) to identify semantic hubs and interlayer associations. The results reveal that consumers prioritize attributes such as “camera performance”, “operational speed”, “screen quality”, and “battery life”. Notably, the overlap layer exhibits the highest node centrality, indicating convergent consumer focus across algorithms. The network demonstrates small-world characteristics (average path length = 1.627) with strong clustering (average clustering coefficient = 0.848), reflecting cohesive consumer discourse around key features. Meanwhile, this study proposes the Mul-LSTM model for sentiment analysis of reviews, achieving 93% sentiment classification accuracy and revealing that consumers hold predominantly positive attitudes towards the brand's phones, which provides a quantitative basis for enterprises to understand users' emotional tendencies and optimize brand word-of-mouth management. This research advances brand knowledge modeling by synergizing heterogeneous algorithms and multilayer network analysis. Its practical implications include enabling enterprises to pinpoint competitive differentiators and optimize marketing strategies. Future work could extend the framework to incorporate sentiment dynamics and cross-domain applications in the smart home or cosmetics industries. Full article
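The TF-IDF half of the keyword extraction can be sketched in a few lines over toy reviews; the TextRank layer and the co-occurrence analysis are omitted, and the vocabulary below is illustrative:

```python
import math

def tf_idf(docs):
    """Score every term in every document: term frequency x inverse document frequency."""
    n = len(docs)
    df = {}                                  # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    scores = []
    for doc in docs:
        tf = {t: doc.count(t) / len(doc) for t in set(doc)}
        scores.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return scores

# Toy tokenized reviews; rarer terms score higher than widespread ones
reviews = [["camera", "great", "camera"], ["battery", "great"], ["camera", "slow"]]
s = tf_idf(reviews)
print(s[1]["battery"] > s[0]["camera"])  # True: "battery" is rarer than "camera"
```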

31 pages, 1059 KiB  
Article
Adaptive Traffic Light Management for Mobility and Accessibility in Smart Cities
by Malik Almaliki, Amna Bamaqa, Mahmoud Badawy, Tamer Ahmed Farrag, Hossam Magdy Balaha and Mostafa A. Elhosseini
Sustainability 2025, 17(14), 6462; https://doi.org/10.3390/su17146462 - 15 Jul 2025
Viewed by 569
Abstract
Urban road traffic congestion poses significant challenges to sustainable mobility in smart cities. Traditional traffic light systems, reliant on static or semi-fixed timers, fail to adapt to dynamic traffic conditions, exacerbating congestion and limiting inclusivity. To address these limitations, this paper proposes H-ATLM (hybrid adaptive traffic light management), a system utilizing the deep deterministic policy gradient (DDPG) reinforcement learning algorithm to dynamically optimize traffic light timings based on real-time data. The system integrates advanced sensing technologies, such as cameras and inductive loops, to monitor traffic conditions and adaptively adjust signal phases. Experimental results demonstrate significant improvements, including reductions in congestion (up to 50%), increases in throughput (up to 149%), and decreases in clearance times (up to 84%). These findings open the door to integrating accessibility-focused features such as adaptive signaling for accessible vehicles, dedicated lanes for paratransit services, and prioritized traffic flows for inclusive mobility. Full article
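A small, standard component of DDPG as used above is the Polyak-averaged ("soft") target network update, shown here on plain parameter lists (the tau value is an example):

```python
def soft_update(target, online, tau=0.005):
    """Polyak-averaged target update used by DDPG:
    target <- tau * online + (1 - tau) * target, per parameter."""
    return [tau * o + (1 - tau) * t for t, o in zip(target, online)]

target = [0.0, 1.0]   # stand-ins for target-network weights
online = [1.0, 1.0]   # stand-ins for online-network weights
target = soft_update(target, online, tau=0.1)
print(target)         # each weight moves 10% of the way toward the online value
```

Keeping the target network a slow copy of the online one is what stabilizes DDPG's bootstrapped value estimates.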
