Search Results (410)

Search Parameters:
Keywords = computer graphic design

20 pages, 5862 KiB  
Article
ICP-Based Mapping and Localization System for AGV with 2D LiDAR
by Felype de L. Silva, Eisenhawer de M. Fernandes, Péricles R. Barros, Levi da C. Pimentel, Felipe C. Pimenta, Antonio G. B. de Lima and João M. P. Q. Delgado
Sensors 2025, 25(15), 4541; https://doi.org/10.3390/s25154541 - 22 Jul 2025
Abstract
This work presents the development of a functional real-time SLAM system designed to enhance the perception capabilities of an Automated Guided Vehicle (AGV) using only a 2D LiDAR sensor. The proposal aims to address recurring gaps in the literature, such as the need for low-complexity solutions that are independent of auxiliary sensors and capable of operating on embedded platforms with limited computational resources. The system integrates scan alignment techniques based on the Iterative Closest Point (ICP) algorithm. Experimental validation in a controlled environment indicated better performance using Gauss–Newton optimization and the point-to-plane metric, achieving pose estimation accuracy of 99.42%, 99.6%, and 99.99% in the position (x, y) and orientation (θ) components, respectively. Subsequently, the system was adapted for operation with data from the onboard sensor, integrating a lightweight graphical interface for real-time visualization of scans, estimated pose, and the evolving map. Despite the moderate update rate, the system proved effective for robotic applications, enabling coherent localization and progressive environment mapping. The modular architecture developed allows for future extensions such as trajectory planning and control. The proposed solution provides a robust and adaptable foundation for mobile platforms, with potential applications in industrial automation, academic research, and education in mobile robotics. Full article
(This article belongs to the Section Remote Sensors)
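As a rough illustration of the point-to-plane ICP alignment the abstract describes, the sketch below performs a single linearized Gauss-Newton update in 2D. It is not the authors' implementation: the correspondences, normals, and state parameterization (tx, ty, θ) are assumptions made for the example.

```python
import numpy as np

def point_to_plane_icp_step(src, dst, normals):
    """One linearized point-to-plane alignment step in 2D.

    src, dst : (N, 2) matched point pairs (src from the current scan,
               dst from the map / previous scan); normals : (N, 2) unit
               normals at the dst points. Returns (tx, ty, theta).
    """
    # Residual of each correspondence along the destination normal.
    r = np.einsum("ij,ij->i", normals, src - dst)

    # Jacobian of the residual w.r.t. (tx, ty, theta), rotation
    # linearized about theta = 0:  R(theta) p ~= p + theta * (-py, px).
    J = np.column_stack([
        normals[:, 0],
        normals[:, 1],
        normals[:, 0] * (-src[:, 1]) + normals[:, 1] * src[:, 0],
    ])

    # Gauss-Newton update: least-squares solution of J dx = -r.
    dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return dx  # (tx, ty, theta)
```

In a full pipeline this step would be repeated, re-matching nearest neighbours between iterations, until the pose update falls below a threshold.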

16 pages, 2270 KiB  
Article
Performance Evaluation of FPGA, GPU, and CPU in FIR Filter Implementation for Semiconductor-Based Systems
by Muhammet Arucu and Teodor Iliev
J. Low Power Electron. Appl. 2025, 15(3), 40; https://doi.org/10.3390/jlpea15030040 - 21 Jul 2025
Abstract
This study presents a comprehensive performance evaluation of field-programmable gate array (FPGA), graphics processing unit (GPU), and central processing unit (CPU) platforms for implementing finite impulse response (FIR) filters in semiconductor-based digital signal processing (DSP) systems. Utilizing a standardized FIR filter designed with the Kaiser window method, we compare computational efficiency, latency, and energy consumption across the ZYNQ XC7Z020 FPGA, Tesla K80 GPU, and Arm-based CPU, achieving processing times of 0.004 s, 0.008 s, and 0.107 s, respectively, with FPGA power consumption of 1.431 W and comparable energy profiles for GPU and CPU. The FPGA is 27 times faster than the CPU and 2 times faster than the GPU, demonstrating its suitability for low-latency DSP tasks. A detailed analysis of resource utilization and scalability underscores the FPGA’s reconfigurability for optimized DSP implementations. This work provides novel insights into platform-specific optimizations, addressing the demand for energy-efficient solutions in edge computing and IoT applications, with implications for advancing sustainable DSP architectures. Full article
(This article belongs to the Topic Advanced Integrated Circuit Design and Application)
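The abstract's filter is designed with the Kaiser window method; a minimal SciPy sketch of that design flow follows. The sample rate, band edges, and attenuation below are placeholders, not the paper's specification.

```python
import numpy as np
from scipy.signal import kaiserord, firwin, lfilter

fs = 48_000.0          # sample rate (Hz), placeholder
cutoff = 6_000.0       # passband edge (Hz)
width = 1_000.0        # transition width (Hz)
ripple_db = 60.0       # stopband attenuation (dB)

# Kaiser window design: estimate tap count and beta from the specs.
numtaps, beta = kaiserord(ripple_db, width / (0.5 * fs))
taps = firwin(numtaps, cutoff, window=("kaiser", beta), fs=fs)

# Apply the filter to a noisy test signal.
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.random.randn(t.size)
y = lfilter(taps, 1.0, x)
```

The benchmark in the paper would then time the equivalent convolution on the FPGA, GPU, and CPU targets rather than in NumPy.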

14 pages, 4648 KiB  
Article
Cyber-Physical System and 3D Visualization for a SCADA-Based Drinking Water Supply: A Case Study in the Lerma Basin, Mexico City
by Gabriel Sepúlveda-Cervantes, Eduardo Vega-Alvarado, Edgar Alfredo Portilla-Flores and Eduardo Vivanco-Rodríguez
Future Internet 2025, 17(7), 306; https://doi.org/10.3390/fi17070306 - 17 Jul 2025
Abstract
Cyber-physical systems such as Supervisory Control and Data Acquisition (SCADA) have been applied in industrial automation and infrastructure management for decades. They are hybrid tools for administration, monitoring, and continuous control of real physical systems through their computational representation. SCADA systems have evolved along with computing technology, from their beginnings with low-performance computers, monochrome monitors and communication networks with a range of a few hundred meters, to high-performance systems with advanced 3D graphics and wired and wireless computer networks. This article presents a methodology for the design of a SCADA system with a 3D Visualization for Drinking Water Supply, and its implementation in the Lerma Basin System of Mexico City as a case study. The monitoring of water consumption from the wells is presented, as well as the pressure levels throughout the system. The 3D visualization is generated from the GIS information and the communication is carried out using a hybrid radio frequency transmission system, satellite, and telephone network. The pumps that extract water from each well are teleoperated and monitored in real time. The developed system can be scaled to generate a simulator of water behavior of the Lerma Basin System and perform contingency planning. Full article

24 pages, 6554 KiB  
Article
Modeling Mechanical Properties of Industrial C-Mn Cast Steels Using Artificial Neural Networks
by Saurabh Tiwari, Seongjun Heo, Nokeun Park and Nagireddy Gari S. Reddy
Metals 2025, 15(7), 790; https://doi.org/10.3390/met15070790 - 12 Jul 2025
Abstract
This study develops a comprehensive artificial neural network (ANN) model for predicting the mechanical properties of carbon–manganese cast steel, specifically, the yield strength (YS), tensile strength (TS), elongation (El), and reduction of area (RA), based on the chemical composition (16 alloying elements) and heat treatment parameters. The neural network model, employing a 20-44-44-4 architecture and trained on 400 samples from an industrial dataset of 500 samples, achieved 90% of test predictions within a 5% deviation from actual values, with mean prediction errors of 3.45% for YS and 4.9% for %EL. A user-friendly graphical interface was developed to make these predictive capabilities accessible, without requiring programming expertise. Sensitivity analyses revealed that increasing the copper content from 0.05% to 0.2% enhanced the yield strength from 320 to 360 MPa while reducing the ductility, whereas niobium functioned as an effective grain refiner, improving both the strength and ductility. The combined effects of carbon and manganese demonstrated complex synergistic behavior, with the yield strength varying between 280 and 460 MPa and the tensile strength ranging from 460 to 740 MPa across the composition space. Optimal strength–ductility balance was achieved at moderate compositions of 1.0–1.2 wt% Mn and 0.20–0.24 wt% C. The model provides an efficient alternative to costly experimental trials for optimizing C-Mn steels, with prediction errors consistently below 6% compared with 8–20% for traditional empirical methods. This approach establishes quantitative guidelines for designing complex multi-element alloys with targeted mechanical properties, representing a significant advancement in computational material engineering for industrial applications. Full article
(This article belongs to the Special Issue Advances in Constitutive Modeling for Metals and Alloys)
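A minimal sketch of the 20-44-44-4 network described above (20 inputs, two hidden layers of 44 neurons, 4 mechanical-property outputs), using scikit-learn with synthetic placeholder data rather than the industrial dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder data standing in for the industrial dataset:
# 20 inputs (16 alloying elements + heat-treatment parameters),
# 4 outputs (YS, TS, El, RA).
X = rng.random((500, 20))
y = rng.random((500, 4))

# 20-44-44-4 architecture: two hidden layers of 44 neurons each.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(44, 44), activation="relu",
                 max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])          # 400 training samples, as in the paper
pred = model.predict(X[400:])        # held-out test predictions
```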

19 pages, 2374 KiB  
Article
Tracking and Registration Technology Based on Panoramic Cameras
by Chao Xu, Guoxu Li, Ye Bai, Yuzhuo Bai, Zheng Cao and Cheng Han
Appl. Sci. 2025, 15(13), 7397; https://doi.org/10.3390/app15137397 - 1 Jul 2025
Abstract
Augmented reality (AR) has become a research focus in computer vision and graphics, with growing applications driven by advances in artificial intelligence and the emergence of the metaverse. Panoramic cameras offer new opportunities for AR due to their wide field of view but also pose significant challenges for camera pose estimation because of severe distortion and complex scene textures. To address these issues, this paper proposes a lightweight, unsupervised deep learning model for panoramic camera pose estimation. The model consists of a depth estimation sub-network and a pose estimation sub-network, both optimized for efficiency using network compression, multi-scale rectangular convolutions, and dilated convolutions. A learnable occlusion mask is incorporated into the pose network to mitigate errors caused by complex relative motion. Furthermore, a panoramic view reconstruction model is constructed to obtain effective supervisory signals from the predicted depth, pose information, and corresponding panoramic images and is trained using a designed spherical photometric consistency loss. The experimental results demonstrate that the proposed method achieves competitive accuracy while maintaining high computational efficiency, making it well-suited for real-time AR applications with panoramic input. Full article
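A small sketch of the geometry underlying panoramic view reconstruction: mapping equirectangular pixels to unit-sphere rays and back, which is the step a spherical photometric consistency loss needs before warping one panorama into another with predicted depth and pose. The resolution and conventions are assumed for illustration, not taken from the paper.

```python
import numpy as np

def pixels_to_rays(h, w):
    """Unit-sphere ray for every pixel of an h x w equirectangular image."""
    v, u = np.mgrid[0:h, 0:w]
    lon = (u + 0.5) / w * 2 * np.pi - np.pi          # longitude in [-pi, pi)
    lat = np.pi / 2 - (v + 0.5) / h * np.pi          # latitude in (pi/2, -pi/2)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)           # (h, w, 3)

def rays_to_pixels(rays, h, w):
    """Inverse mapping: project 3D directions back to equirectangular pixels."""
    x, y, z = rays[..., 0], rays[..., 1], rays[..., 2]
    lon = np.arctan2(y, x)
    lat = np.arcsin(np.clip(z / np.linalg.norm(rays, axis=-1), -1, 1))
    u = (lon + np.pi) / (2 * np.pi) * w - 0.5
    v = (np.pi / 2 - lat) / np.pi * h - 0.5
    return u, v

# To warp a source panorama into a target view, one would scale these rays by
# the predicted per-pixel depth, apply the predicted rotation and translation,
# and re-project with rays_to_pixels before comparing intensities.
rays = pixels_to_rays(256, 512)
u, v = rays_to_pixels(rays, 256, 512)
```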

18 pages, 4529 KiB  
Article
KlyH: 1D Disk Model-Based Large-Signal Simulation Software for Klystrons
by Hezhang Zhao, Hu He, Shifeng Li, Hua Huang, Zhengbang Liu, Limin Sun, Ke He and Dongwenlong Wu
Electronics 2025, 14(11), 2223; https://doi.org/10.3390/electronics14112223 - 30 May 2025
Abstract
This paper presents KlyH, a new 1D (one-dimensional) large-signal simulation software for klystrons, designed to deliver efficient and accurate simulation and optimization tools. KlyH integrates a Fortran-based dynamic link library (DLL) as its computational core, which employs high-performance numerical algorithms to rapidly compute critical parameters such as efficiency, gain, and bandwidth. Compared with traditional 1D simulation tools, which often lack open interfaces and extensibility, KlyH is built with a modular and open architecture that supports seamless integration with advanced optimization and intelligent design algorithms. KlyH incorporates multi-objective optimization frameworks, notably the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) and Optimized Multi-Objective Particle Swarm Optimization (OMOPSO), enabling automated parameter tuning for efficiency maximization and interaction length optimization. Its bandwidth-of-klystron-analysis module predicts gain and output power across operational bandwidths, with optimization algorithms further enhancing bandwidth performance. A Java-based graphical user interface (GUI) provides an intuitive workflow for parameter configuration and real-time visualization of simulation results. The open architecture also lays the foundation for future integration of artificial intelligence algorithms, promoting intelligent and automated klystron design workflows. The accuracy of KlyH and its potential for parameter optimization are confirmed by a case study on an X-band relativistic klystron amplifier. Discrepancies observed between 1D simulations and 3D PIC (three-dimensional particle-in-cell) simulation results are analyzed to identify model limitations, providing critical insights for advancing high-performance klystron designs. Full article
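The multi-objective optimizers mentioned above (NSGA-II, OMOPSO) are built on non-dominated selection; the toy sketch below shows that core idea on made-up objective values (negated efficiency and interaction length). It is a generic illustration, not KlyH code.

```python
import numpy as np

def non_dominated(F):
    """Boolean mask of the Pareto-optimal rows of F.

    F : (n_points, n_objectives) array where every objective is to be
    minimized (e.g. negated efficiency and interaction length).
    """
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Row i is dominated if some other row is <= in every objective
        # and strictly < in at least one.
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

# Toy objective values: column 0 = -efficiency, column 1 = interaction length.
F = np.array([[-0.62, 1.2], [-0.55, 0.9], [-0.60, 1.0], [-0.50, 1.5]])
print(non_dominated(F))   # the last design is dominated
```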

20 pages, 2451 KiB  
Article
Enhancing Efficiency and Creativity in Mechanical Drafting: A Comparative Study of General-Purpose CAD Versus Specialized Toolsets
by Simón Gutiérrez de Ravé, Eduardo Gutiérrez de Ravé and Francisco J. Jiménez-Hornero
Appl. Syst. Innov. 2025, 8(3), 74; https://doi.org/10.3390/asi8030074 - 29 May 2025
Abstract
Computer-Aided Design (CAD) plays a critical role in modern engineering education by supporting technical accuracy and fostering innovation in design. This study compares the performance of beginner CAD users employing general-purpose AutoCAD 2025 with those using the specialized AutoCAD Mechanical 2025. Fifty undergraduate mechanical engineering students, all with less than one year of CAD experience and no prior exposure to AutoCAD Mechanical, were randomly assigned to complete six mechanical drawing tasks using one of the two software environments. Efficiency was evaluated through command usage, frequency, and task completion time, while creativity was assessed using a rubric covering originality, functionality, tool proficiency, and graphical quality. Results show that AutoCAD Mechanical significantly improved workflow efficiency, reducing task execution time by approximately 50%. Creativity scores were also notably higher among users of AutoCAD Mechanical, particularly in functionality and tool usage. These gains are attributed to automation features such as parametric constraints, standard part libraries, and automated dimensioning, which lower cognitive load and support iterative design. The findings suggest that integrating specialized CAD tools into engineering curricula can enhance both technical and creative outcomes. Limitations and future research directions include longitudinal studies, diverse user populations, and exploration of student feedback and tool adaptation. Full article

19 pages, 1619 KiB  
Article
A Structured Method to Generate Self-Test Libraries for Tensor Cores
by Robert Limas Sierra, Juan David Guerrero Balaguera, Josie E. Rodriguez Condia and Matteo Sonza Reorda
Electronics 2025, 14(11), 2148; https://doi.org/10.3390/electronics14112148 - 25 May 2025
Abstract
Modern computing systems increasingly rely on specialized hardware accelerators, such as Graphics Processing Units (GPUs), to meet growing computational demands. GPUs are essential for accelerating a wide range of applications, from machine learning and scientific computing to safety-critical domains like autonomous systems and aerospace. To enhance performance, modern GPUs integrate dedicated in-chip units, such as Tensor Cores (TCs), which are designed for efficient mixed-precision matrix operations. However, as semiconductor technologies scale down, reliability challenges emerge. Permanent hardware faults caused by aging, process variations, or environmental stress can lead to Silent Data Corruptions, which silently compromise computation results. To detect such faults, self-test libraries (STLs) are widely used: suitably crafted pieces of code able to activate faults, propagate their effects to visible points (e.g., the memory), and possibly signal their occurrence. This work introduces a structured method for generating STLs to detect permanent hardware faults that may arise in TCs. By leveraging the parallelism and regular structure of TCs, the method facilitates the creation of effective STLs for in-field fault detection without hardware modifications and with minimal requirements in terms of test time and memory. The proposed approach was validated on an NVIDIA GeForce RTX 3060 Ti GPU, installed in a Hewlett-Packard Z2 G5 workstation with an Intel Core i9-10800 CPU and 32 GB RAM, available at the Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, Turin, Italy. This setup was used to address stuck-at faults in the arithmetic units of TCs. The results demonstrate that the methodology offers a practical, scalable, and non-intrusive solution for enhancing GPU reliability, applicable in both high-performance and safety-critical environments. Full article
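As a toy illustration of the self-test idea (not the structured STL generation method of the paper), the sketch below runs a deterministic low-precision matrix product on the device under test and compares a compacted signature against a double-precision host reference; the size and tolerance are arbitrary.

```python
import torch

def tensor_core_self_test(n=256, tol=5e-2, seed=0):
    """Toy self-test: low-precision matmul on the device under test vs. an
    FP64 host reference. A mismatch beyond the expected rounding error is
    flagged as a possible permanent fault."""
    g = torch.Generator().manual_seed(seed)       # deterministic stimuli
    a = torch.rand((n, n), generator=g)
    b = torch.rand((n, n), generator=g)

    # Golden signature computed once in double precision on the host.
    golden = (a.double() @ b.double()).sum().item()

    # Device under test: FP16 on CUDA (routed to Tensor Cores when present),
    # falling back to FP32 on the CPU so the sketch still runs without a GPU.
    dev = "cuda" if torch.cuda.is_available() else "cpu"
    lp = torch.float16 if dev == "cuda" else torch.float32
    observed = (a.to(dev, lp) @ b.to(dev, lp)).double().sum().item()

    rel_err = abs(observed - golden) / abs(golden)
    return rel_err < tol, rel_err

ok, err = tensor_core_self_test()
print("PASS" if ok else "POSSIBLE FAULT", f"(relative error {err:.2e})")
```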

25 pages, 8307 KiB  
Article
Time-Shifted Maps for Industrial Data Analysis: Monitoring Production Processes and Predicting Undesirable Situations
by Tomasz Blachowicz, Sara Bysko, Szymon Bysko, Alina Domanowska, Jacek Wylezek and Zbigniew Sokol
Sensors 2025, 25(11), 3311; https://doi.org/10.3390/s25113311 - 24 May 2025
Abstract
The rapid advancement of computing power, combined with the ability to collect vast amounts of data, has unlocked new possibilities for industrial applications. While traditional time-domain industrial signals generally do not allow for direct stability assessment or the detection of abnormal situations, alternative representations can reveal hidden patterns. This paper introduces time-shifted maps (TSMs) as a novel method for analyzing industrial data, an approach that is not yet widely adopted in the field. Unlike contemporary machine learning techniques, TSM relies on a simple and interpretable algorithm designed to process data from standard industrial automation systems. By creating clear, visual representations, TSM facilitates the monitoring and control of production processes. Specifically, TSMs are constructed from time series data collected by an acceleration sensor mounted on a robot base. To evaluate the effectiveness of TSM, its results are compared with those obtained using classical signal processing methods, such as the fast Fourier transform (FFT) and wavelet transform. Additionally, TSMs are classified using computed correlation dimensions and entropy measures. To further validate the method, we numerically simulate three distinct anomalous scenarios and present their corresponding TSM-based graphical representations. Full article
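A time-shifted map in the sense used above pairs each sample of a signal with a delayed copy of itself; the following sketch builds one for a synthetic acceleration-like signal. The lag and the signal are illustrative, not the paper's data.

```python
import numpy as np
import matplotlib.pyplot as plt

def time_shifted_map(x, lag):
    """Pairs (x[t], x[t + lag]) forming the time-shifted map of a signal."""
    return x[:-lag], x[lag:]

# Placeholder for an acceleration signal from the robot-base sensor:
# a stable periodic component plus measurement noise.
fs = 1000.0
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * 7 * t) + 0.05 * np.random.randn(t.size)

u, v = time_shifted_map(x, lag=25)
plt.scatter(u, v, s=1)
plt.xlabel("x(t)")
plt.ylabel("x(t + lag)")
plt.title("Time-shifted map (lag = 25 samples)")
plt.show()
```

A stable process traces a compact, repeatable figure in this plane, while anomalies distort or scatter it, which is what the correlation-dimension and entropy measures then quantify.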

20 pages, 3349 KiB  
Article
Multi-Level Particle System Modeling Algorithm with WRF
by Julong Chen, Bin Wang, Rundong Gan, Xuepeng Mou, Shiping Yang and Ling Tan
Atmosphere 2025, 16(5), 571; https://doi.org/10.3390/atmos16050571 - 9 May 2025
Abstract
In the fields of meteorological simulation and computer graphics, precise simulation of clouds has been a recent research hotspot. Existing cloud modeling methods often ignore the differentiated characteristics of cloud layers at different heights and suffer from high computational costs under long-range conditions, making them unsuitable for large-scale scenes. Therefore, we propose a multi-level particle system 3D cloud modeling algorithm based on the Weather Research and Forecasting Model (WRF), which combines particle weight adjustment with a Proportional Integral Derivative (PID) feedback mechanism to represent cloud features of different heights and types. Based on the multi-scale mean-shift clustering algorithm, Adaptive Kernel Density Estimation (AKDE) is introduced to map density to bandwidth, achieving adaptive adjustment of clustering bandwidth while reducing computational resources and improving cloud hierarchy. Meanwhile, selecting the optimal control points based on the correlation between particle density in the edge region and the cloud contour ensures the integrity of the internal structure of the cloud and the clarity of the external contour. To improve modeling efficiency, cascade Bezier curves are designed for different lines of sight (LoSs), utilizing the weight information of boundary particles to optimize cloud contours. Experimental results show that, compared with similar algorithms, our algorithm reduces the average running time by 37.5%, indicating enhanced computational efficiency and real-time capability, and the average number of required particles by 30.1%, reducing the cost of long-range computing. Our algorithm can fully demonstrate cloud characteristics and interlayer differences, significantly improve modeling efficiency, and can be used for accurate modeling of large-scale cloud scenes, providing strong support for meteorological and climate prediction. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
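A loose sketch of the density-to-bandwidth idea behind the clustering step: estimate local particle density with a Gaussian KDE and derive a mean-shift bandwidth from it. The density-to-bandwidth mapping and all parameters below are invented for illustration and are not the paper's AKDE formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import MeanShift

rng = np.random.default_rng(1)
# Placeholder cloud particles (x, y, z), standing in for particles derived
# from WRF hydrometeor fields.
pts = np.vstack([rng.normal((0, 0, 2), 0.4, (300, 3)),
                 rng.normal((3, 1, 6), 0.8, (300, 3))])

# Local density via a Gaussian kernel density estimate.
density = gaussian_kde(pts.T)(pts.T)

# Toy mapping from density to a mean-shift bandwidth: denser fields get a
# smaller bandwidth (finer structure), sparse fields a larger one.
bandwidth = float(np.clip(1.0 / np.median(density) ** (1 / 3), 0.3, 2.0))

labels = MeanShift(bandwidth=bandwidth).fit_predict(pts)
print("clusters:", len(np.unique(labels)), "bandwidth:", round(bandwidth, 2))
```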

14 pages, 2851 KiB  
Article
Asynchronized Jacobi Solver on Heterogeneous Mobile Devices
by Ziqiang Liao, Xiayun Hong, Yao Cheng, Liyan Chen, Xuan Cheng and Juncong Lin
Electronics 2025, 14(9), 1768; https://doi.org/10.3390/electronics14091768 - 27 Apr 2025
Abstract
Many vision and graphics applications involve the efficient solving of various linear systems, which has been a popular topic for decades. As mobile devices have become widespread, designing a high-performance solver tailored for them, to ensure the smooth migration of various applications from PC to mobile devices, has become urgent. However, the unique features of mobile devices present new challenges. Mainstream mobile devices are equipped with so-called heterogeneous multiprocessor systems-on-chips (MPSoCs), which consist of processors with different architectures and performances. Designing algorithms that push the limits of MPSoCs is attractive yet difficult. Different cores are suitable for different tasks. Further, data sharing among different cores can easily neutralize performance gains. Fortunately, the comparable performance of CPUs and GPUs on MPSoCs makes these heterogeneous systems promising compared to their counterparts on PCs. This paper is devoted to a high-performance mobile linear solver for a sparse system with a tailored asynchronous algorithm, to fully exploit the computing power of heterogeneous processors on mobile devices while alleviating the data-sharing overhead. Comprehensive evaluations are performed, with in-depth discussion to shed light on the future design of other numerical solvers. Full article
(This article belongs to the Special Issue Ubiquitous Computing and Mobile Computing)
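For reference, the classical synchronous Jacobi iteration that an asynchronous heterogeneous solver of this kind generalizes: in the asynchronous setting each processor updates its own block of rows from possibly stale values of x instead of waiting for a full sweep, but the per-row update rule is the same. This is a generic sketch, not the paper's solver.

```python
import numpy as np

def jacobi(A, b, iters=200, tol=1e-8):
    """Plain synchronous Jacobi iteration for Ax = b (A diagonally dominant)."""
    d = np.diag(A)
    R = A - np.diagflat(d)              # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x_new = (b - R @ x) / d         # every row updated from the old x
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Small diagonally dominant test system.
A = np.array([[10.0, -1.0, 2.0],
              [-1.0, 11.0, -1.0],
              [2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
print(jacobi(A, b))                     # close to np.linalg.solve(A, b)
```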

13 pages, 1181 KiB  
Article
Design of an Emotional Facial Recognition Task in a 3D Environment
by Gemma Quirantes-Gutierrez, Ángeles F. Estévez, Gabriel Artés Ordoño and Ginesa López-Crespo
Computers 2025, 14(4), 153; https://doi.org/10.3390/computers14040153 - 18 Apr 2025
Abstract
The recognition of emotional facial expressions is a key skill for social adaptation. Previous studies have shown that clinical and subclinical populations, such as those diagnosed with schizophrenia or autism spectrum disorder, have a significant deficit in the recognition of emotional facial expressions. These studies suggest that this may be the cause of their social dysfunction. Given the importance of this type of recognition in social functioning, the present study aims to design a tool to measure the recognition of emotional facial expressions using Unreal Engine 4 software to develop computer graphics in a 3D environment. Additionally, we tested it in a small pilot study with a sample of 37 university students, aged between 18 and 40, to compare the results with a more classical emotional facial recognition task. We also administered the SEES Scale and a set of custom-formulated questions to both groups to assess potential differences in activation levels between the two modalities (3D environment vs. classical format). The results of this initial pilot study suggest that students who completed the task in the classical format exhibited a greater lack of activation compared to those who completed the task in the 3D environment. Regarding the recognition of emotional facial expressions, both tasks were similar in two of the seven emotions evaluated. We believe that this study represents the beginning of a new line of research that could have important clinical implications. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))

19 pages, 4646 KiB  
Article
Computational Tool for Curve Smoothing Methods Analysis and Surface Plasmon Resonance Biosensor Characterization
by Mariana Rodrigues Villarim, Andréa Willa Rodrigues Villarim, Mario Gazziro, Marco Roberto Cavallari, Diomadson Rodrigues Belfort and Oswaldo Hideo Ando Junior
Inventions 2025, 10(2), 31; https://doi.org/10.3390/inventions10020031 - 18 Apr 2025
Abstract
Biosensors based on the surface plasmon resonance (SPR) technique are widely used for analyte detection due to their high selectivity and real-time detection capabilities. However, conventional SPR spectrum analysis can be affected by experimental noise and environmental variations, reducing the accuracy of results. To address these limitations, this study presents the development of an open-source computational tool to optimize SPR biosensor characterization, implemented using MATLAB App Designer (Version R2024b). The tool enables the importation of experimental data, application of different smoothing methods, and integration of traditional and hybrid approaches to enhance accuracy in determining the resonance angle. The proposed tool offers several innovations, such as integration of both traditional and hybrid (angle vs wavelength) analysis modes, implementation of four advanced curve smoothing techniques, including Gaussian filter, Savitzky–Golay, smoothing splines, and EWMA, as well as a user-friendly graphical interface supporting real-time data visualization, experimental data import, and result export. Unlike conventional approaches, the hybrid framework enables multidimensional optimization of SPR parameters, resulting in greater accuracy and robustness in detecting resonance conditions. Experimental validation demonstrated a marked reduction in spectral noise and improved consistency in resonance angle detection across conditions. The results confirm the effectiveness and practical relevance of the tool, contributing to the advancement of SPR biosensor analysis. Full article
(This article belongs to the Section Inventions and Innovation in Biotechnology and Materials)
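Three of the smoothing families named above, applied to a synthetic SPR reflectance dip, as a sketch of the comparison the tool automates. The curve shape, noise level, and parameters are placeholders, and the actual tool is a MATLAB App Designer application, so this Python fragment is purely illustrative.

```python
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter
from scipy.ndimage import gaussian_filter1d

# Synthetic SPR reflectance curve: a dip near the resonance angle plus noise.
theta = np.linspace(40, 50, 1001)                      # incidence angle (deg)
r = 1 - 0.8 * np.exp(-((theta - 44.3) / 0.6) ** 2)     # placeholder dip shape
noisy = r + 0.02 * np.random.randn(theta.size)

# Savitzky-Golay, Gaussian filter, and EWMA smoothing.
sg   = savgol_filter(noisy, window_length=51, polyorder=3)
gau  = gaussian_filter1d(noisy, sigma=8)
ewma = pd.Series(noisy).ewm(span=25).mean().to_numpy()

# Resonance angle taken as the location of the reflectance minimum.
for name, curve in [("Savitzky-Golay", sg), ("Gaussian", gau), ("EWMA", ewma)]:
    print(name, "resonance angle ~", round(theta[np.argmin(curve)], 2), "deg")
```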

22 pages, 4223 KiB  
Article
Algorithmic Identification of Conflicting Traffic Lights: A Large-Scale Approach with a Network Conflict Matrix
by Sergio Rojas-Blanco, Alberto Cerezo-Narváez, Sol Sáez-Martínez and Manuel Otero-Mateo
Systems 2025, 13(4), 290; https://doi.org/10.3390/systems13040290 - 15 Apr 2025
Abstract
Efficient urban traffic management is crucial for mitigating congestion and enhancing road safety. This study introduces a novel algorithm, with code provided, to generate a traffic light conflict matrix, identifying potential signal conflicts solely based on road network topology. Unlike existing graphical approaches that are difficult to execute automatically, our method leverages readily available topological data and adjacency matrices, ensuring broad applicability and automation. While our approach deliberately focuses on topology as a stable foundation, it is designed to complement rather than replace dynamic traffic analysis, serving as an essential preprocessing layer for subsequent temporal optimization. Implemented in MATLAB, with specific functionality for Vissim users, the algorithm has been tested on various networks with up to 547 traffic lights, demonstrating high efficiency, even in complex scenarios. This tool enables focused allocation of computational resources for traffic light optimization and is particularly valuable for prioritizing emergency vehicles. Our findings make a significant contribution to traffic management strategies by offering a scalable and efficient tool that bridges critical gaps in current research. As urban areas continue to grow, this algorithm represents a step forward in developing sustainable solutions for modern transportation challenges. Full article
(This article belongs to the Special Issue Modelling and Simulation of Transportation Systems)
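The paper's algorithm is not reproduced here, but the following toy sketch conveys the general idea of a topology-only conflict matrix: represent each signalized movement as an (entry arm, exit arm) pair at a junction and mark two movements as conflicting when their paths cross or merge. The four-arm layout and the crossing rule are simplified assumptions, not the paper's network model.

```python
import numpy as np
from itertools import combinations

# Toy junction: four arms numbered clockwise 0..3; each signalized
# movement is an (entry_arm, exit_arm) pair.
movements = [(0, 2), (2, 0), (1, 3), (3, 1), (0, 1), (1, 2)]

def conflicts(m1, m2):
    """True if two movements merge into the same arm or cross inside
    the junction (chords of the arm circle intersect)."""
    a, b = m1
    c, d = m2
    if a == c:                      # same approach: movements diverge
        return False
    if b == d:                      # merging into the same exit arm
        return True
    def between(x, lo, hi):         # x strictly between lo and hi, clockwise
        return (x - lo) % 4 != 0 and (x - lo) % 4 < (hi - lo) % 4
    # Chords (a,b) and (c,d) cross iff exactly one of c, d lies
    # clockwise-between a and b.
    return between(c, a, b) != between(d, a, b)

n = len(movements)
conflict = np.zeros((n, n), dtype=int)
for i, j in combinations(range(n), 2):
    conflict[i, j] = conflict[j, i] = int(conflicts(movements[i], movements[j]))
print(conflict)
```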

28 pages, 8992 KiB  
Article
Synthesis of Four-Link Initial Kinematic Chains with Spherical Pairs for Spatial Mechanisms
by Samal Abdreshova, Algazy Zhauyt, Kuanysh Alipbayev, Serikbay Kosbolov, Alisher Aden and Aray Orazaliyeva
Appl. Sci. 2025, 15(7), 3602; https://doi.org/10.3390/app15073602 - 25 Mar 2025
Abstract
This research addresses the problem of the initial synthesis of kinematic chains with spherical kinematic pairs, which are essential in the design of spatial mechanisms used in robotics, aerospace, and mechanical systems. The goal is to establish the existence of solutions for defining the geometric and motion constraints of these kinematic chains, ensuring that the synthesized mechanism achieves the desired motion with precision. By formulating the synthesis problem in terms of nonlinear algebraic equations derived from the spatial positions and orientations of the links, we analyze the conditions under which a valid solution exists. We explore both analytical and numerical methods to solve these equations, highlighting the significance of parameter selection in determining feasible solutions. Specifically, our approach demonstrates the visualization of fixed points, such as A, B, and C, alongside their spatial differences with respect to reference points and transformation matrices. We detail methods for plotting transformation components, including rotation matrix elements (e, m, and n) and derived products from these matrices, as well as the representation of angular parameters (θi, ψi, and φi) in a three-dimensional context. The proposed techniques not only facilitate the debugging and analysis of complex kinematic behaviors but also provide a flexible tool for researchers in robotics, computer graphics, and mechanical design. By offering a clear and interactive visualization strategy, this framework enhances the understanding of spatial relationships and transformation dynamics inherent in multi-body systems. Full article
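A minimal sketch of the kind of transformation these synthesis equations manipulate: a rotation matrix built from the angular parameters (θ, ψ, φ) moving a link point attached to a spherical pair. The Z-Y-X Euler convention and the coordinates are assumptions for illustration; the abstract does not state the paper's exact parameterization.

```python
import numpy as np

def rotation_zyx(theta, psi, phi):
    """Rotation matrix from (theta, psi, phi), assumed here to be
    Z-Y-X Euler angles."""
    cz, sz = np.cos(theta), np.sin(theta)
    cy, sy = np.cos(psi), np.sin(psi)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

# Spherical pair centered at fixed point A; point B rides on the rotating link.
A = np.array([0.0, 0.0, 0.0])
B_local = np.array([0.2, 0.1, 0.05])     # link-frame coordinates (toy values)

for theta, psi, phi in [(0.1, 0.2, 0.3), (0.4, 0.1, 0.2)]:
    R = rotation_zyx(theta, psi, phi)
    B_world = A + R @ B_local
    # A spherical joint preserves the distance |AB| for every orientation.
    assert np.isclose(np.linalg.norm(B_world - A), np.linalg.norm(B_local))
```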
