Search Results (1,103)

Search Parameters:
Keywords = intuitive processes

24 pages, 6313 KB  
Article
Research on the Internal Force Solution for Statically Indeterminate Structures Under a Local Trapezoidal Load
by Pengyun Wei, Shunjun Hong, Lin Li, Junhong Hu and Haizhong Man
Computation 2025, 13(10), 229; https://doi.org/10.3390/computation13100229 - 1 Oct 2025
Abstract
The calculation of internal forces is a critical aspect in the design of statically indeterminate structures. Because local trapezoidal loads are a common loading configuration in practical engineering (e.g., earth pressure, uneven surcharge), it is essential to investigate how to compute the internal forces of statically indeterminate structures under such loads by using the displacement method. The key to displacement-based analysis lies in deriving the fixed-end moment formulas for local trapezoidal loads. Traditional methods, such as the force method, virtual beam method, or integral method, often involve complex computations. Therefore, this study aims to derive a general formula for fixed-end moments in statically indeterminate beams subjected to local trapezoidal loads by using the integral method, providing a more efficient and clearer theoretical tool for engineering practice while addressing the limitations of existing educational and applied methodologies. The integral method is employed to derive fixed-end moment expressions for three types of statically indeterminate beams: (1) a beam fixed at both ends, (2) a beam fixed at one end and simply supported at the other, and (3) a beam fixed at one end and sliding at the other. This approach eliminates the redundant equations of the traditional force method and the indirect transformations of the virtual beam method, directly linking boundary conditions through integral operations on load distributions, thereby significantly simplifying the solving process. Three representative numerical examples validate the correctness and universality of the derived formulas. The results demonstrate that the solutions obtained via the integral method align with software-calculated results, while the proposed method additionally yields analytical expressions for structural internal forces. Comparative analysis shows that the integral method surpasses traditional approaches (e.g., force method, virtual beam method) in terms of conceptual clarity and computational efficiency, making it particularly suitable for instructional demonstrations and rapid engineering calculations. The proposed integral method provides a systematic analytical framework for the internal force analysis of statically indeterminate structures under local trapezoidal loads, combining mathematical rigor with engineering practicality. The derived formulas can be directly applied to real-world designs, substantially reducing computational complexity. Moreover, this method offers a more intuitive theoretical case for structural mechanics education, enhancing students’ understanding of the mathematical–mechanical relationship between loads and internal forces. The research outcomes hold both theoretical significance and practical engineering value, establishing a solving paradigm for the displacement-based analysis of statically indeterminate structures under complex local trapezoidal loading conditions. Full article
(This article belongs to the Section Computational Engineering)
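
As a quick numerical cross-check of the idea summarized above, the fixed-end moments of a beam fixed at both ends can be recovered by superposing the classical point-load results M_A = P·a·b²/L² and M_B = P·a²·b/L² over the loaded region. The sketch below only illustrates that superposition, not the paper's closed-form derivation; the span, load bounds, and intensities are assumed values.

```python
# Minimal numerical sketch (not the paper's derivation): fixed-end moments of a
# prismatic beam fixed at both ends under a local trapezoidal load, obtained by
# superposing the classical point-load fixed-end moments M_A = P*a*b^2/L^2 and
# M_B = P*a^2*b/L^2 over the loaded region.  Span L, load rising linearly from
# q1 at x = c to q2 at x = d; all numeric values below are illustrative assumptions.
from scipy.integrate import quad

def trapezoidal_load(x, c, d, q1, q2):
    """Load intensity w(x): linear between (c, q1) and (d, q2), zero elsewhere."""
    if c <= x <= d:
        return q1 + (q2 - q1) * (x - c) / (d - c)
    return 0.0

def fixed_end_moments(L, c, d, q1, q2):
    # M_A integrates w(x)*x*(L - x)^2 / L^2; M_B integrates w(x)*x^2*(L - x) / L^2
    MA, _ = quad(lambda x: trapezoidal_load(x, c, d, q1, q2) * x * (L - x) ** 2 / L ** 2, c, d)
    MB, _ = quad(lambda x: trapezoidal_load(x, c, d, q1, q2) * x ** 2 * (L - x) / L ** 2, c, d)
    return MA, MB

# Example: 6 m span, load rising from 10 kN/m at x = 1 m to 20 kN/m at x = 4 m
print(fixed_end_moments(L=6.0, c=1.0, d=4.0, q1=10.0, q2=20.0))
```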

27 pages, 9151 KB  
Article
A Dynamic Digital Twin Framework for Sustainable Facility Management in a Smart Campus: A Case Study of Chiang Mai University
by Sattaya Manokeaw, Pattaraporn Khuwuthyakorn, Ying-Chieh Chan, Naruephorn Tengtrairat, Manissaward Jintapitak, Orawit Thinnukool, Chinnapat Buachart, Thepparit Sinthamrongruk, Thidarat Kridakorn Na Ayutthaya, Natee Suriyanon, Somjintana Kanangkaew and Damrongsak Rinchumphu
Technologies 2025, 13(10), 439; https://doi.org/10.3390/technologies13100439 - 30 Sep 2025
Abstract
This study presents the development and deployment of a modular digital twin system designed to enhance sustainable facility management within a smart campus context. The system was implemented at the Faculty of Engineering, Chiang Mai University, and integrates 3D spatial modeling, real-time environmental and energy sensor data, and multiscale dashboard visualization. Grounded in stakeholder-driven requirements, the platform emphasizes energy management, which is the top priority among campus administrators and technicians. The development process followed a four-phase methodology: (1) stakeholder consultation and requirement analysis; (2) physical data acquisition and 3D model generation; (3) sensor deployment using IoT technologies with NB-IoT and LoRaWAN protocols; and (4) real-time data integration via Firebase and standardized APIs. A suite of dashboards was developed to support interactive monitoring across faculty, building, floor, and room levels. System testing with campus users demonstrated high usability, intuitive spatial navigation, and actionable insights for energy consumption analysis. Feedback indicated strong interest in features supporting data export and predictive analytics. The platform’s modular and hardware-agnostic architecture enables future extensions, including occupancy tracking, water monitoring, and automated control systems. Overall, the digital twin system offers a replicable and scalable model for data-driven facility management aligned with sustainability goals. Its real-time, multiscale capabilities contribute to operational transparency, resource optimization, and climate-responsive campus governance, setting the foundation for broader applications in smart cities and built environment innovation. Full article
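
For readers curious about the real-time data integration step mentioned above, the sketch below shows one conventional way a sensor reading can be pushed into a Firebase Realtime Database over its REST interface. The project URL, node layout, and field names are assumptions for illustration, not the deployed system's actual schema or security setup.

```python
# Illustrative sketch only: publish one environmental/energy reading to a
# Firebase Realtime Database node over its public REST interface.  The project
# URL, the "campus/<building>/<floor>/<room>" node layout, and the field names
# are assumptions, not the deployed system's schema; authentication is omitted.
import time
import requests

FIREBASE_URL = "https://example-smart-campus.firebaseio.com"  # hypothetical project URL

def push_reading(building: str, floor: str, room: str,
                 temperature_c: float, power_w: float) -> str:
    payload = {
        "timestamp": int(time.time()),
        "temperature_c": temperature_c,
        "power_w": power_w,
    }
    url = f"{FIREBASE_URL}/campus/{building}/{floor}/{room}/readings.json"
    resp = requests.post(url, json=payload, timeout=10)  # POST appends under an auto-generated key
    resp.raise_for_status()
    return resp.json()["name"]  # Firebase returns the generated key as "name"

# Example (requires a real project and permissive rules):
# push_reading("engineering", "2", "201", temperature_c=27.4, power_w=850.0)
```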

24 pages, 1641 KB  
Article
Intellectual Property Protection Through Blockchain: Introducing the Novel SmartRegistry-IP for Secure Digital Ownership
by Abeer S. Al-Humaimeedy
Future Internet 2025, 17(10), 444; https://doi.org/10.3390/fi17100444 - 29 Sep 2025
Abstract
The rise of digital content has made the need for reliable and practical intellectual property (IP) management systems more critical than ever. Most traditional IP systems are prone to issues such as delays, inefficiency, and data security breaches. This paper introduces SmartRegistry-IP, a system developed to simplify the registration, licensing, and transfer of intellectual property assets in a secure and scalable decentralized environment. By utilizing the InterPlanetary File System (IPFS) for decentralized storage, SmartRegistry-IP achieves a low storage latency of 300 milliseconds, outperforming both cloud storage (500 ms) and local storage (700 ms). The system also supports a high transaction throughput of 120 transactions per second. Through the use of smart contracts, licensing agreements are automatically and securely enforced, reducing the need for intermediaries and lowering operational costs. Additionally, the proof-of-work process verifies all transactions, ensuring higher security and maintaining data consistency. The platform integrates an intuitive graphical user interface that enables seamless asset uploads, license management, and analytics visualization in real time. SmartRegistry-IP demonstrates superior efficiency compared to traditional systems, achieving a blockchain delay of 300 ms, which is half the latency of standard systems, averaging 600 ms. According to this study, adopting SmartRegistry-IP provides IP organizations with enhanced security and transparent management, ensuring they can overcome operational challenges regardless of their size. As a result, the use of blockchain for intellectual property management is expected to increase, helping maintain precise records and reducing time spent on online copyright registration. Full article
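
To make the registration flow described above concrete, the sketch below shows a generic content-addressed registration record sealed with a toy proof-of-work. It is not SmartRegistry-IP's code: the SHA-256 digest merely stands in for an IPFS CID, and the record structure, owner field, and difficulty value are illustrative assumptions.

```python
# Conceptual sketch only (not SmartRegistry-IP): register a digital asset by
# (1) content-addressing it with a hash, standing in for an IPFS CID, and
# (2) sealing the registration record with a toy proof-of-work so it cannot be
# altered without redoing the work.  All names and values are illustrative.
import hashlib
import json
import time

def content_address(data: bytes) -> str:
    """Stand-in for an IPFS CID: a SHA-256 digest of the asset bytes."""
    return hashlib.sha256(data).hexdigest()

def register_asset(owner: str, data: bytes, difficulty: int = 4) -> dict:
    record = {
        "owner": owner,
        "cid": content_address(data),
        "timestamp": int(time.time()),
    }
    nonce = 0
    while True:
        digest = hashlib.sha256(
            (json.dumps(record, sort_keys=True) + str(nonce)).encode()
        ).hexdigest()
        if digest.startswith("0" * difficulty):   # toy proof-of-work condition
            record.update(nonce=nonce, proof=digest)
            return record
        nonce += 1

print(register_asset("alice", b"illustrative asset bytes"))
```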

35 pages, 3558 KB  
Article
Realistic Performance Assessment of Machine Learning Algorithms for 6G Network Slicing: A Dual-Methodology Approach with Explainable AI Integration
by Sümeye Nur Karahan, Merve Güllü, Deniz Karhan, Sedat Çimen, Mustafa Serdar Osmanca and Necaattin Barışçı
Electronics 2025, 14(19), 3841; https://doi.org/10.3390/electronics14193841 - 27 Sep 2025
Abstract
As 6G networks become increasingly complex and heterogeneous, effective classification of network slicing is essential for optimizing resources and managing quality of service. While recent advances demonstrate high accuracy under controlled laboratory conditions, a critical gap exists between algorithm performance evaluation under idealized conditions and their actual effectiveness in realistic deployment scenarios. This study presents a comprehensive comparative analysis of two distinct preprocessing methodologies for 6G network slicing classification: Pure Raw Data Analysis (PRDA) and Literature-Validated Realistic Transformations (LVRTs). We evaluate the impact of these strategies on algorithm performance, resilience characteristics, and practical deployment feasibility to bridge the laboratory–reality gap in 6G network optimization. Our experimental methodology involved testing eleven machine learning algorithms—including traditional ML, ensemble methods, and deep learning approaches—on a dataset comprising 10,000 network slicing samples (expanded to 21,033 through realistic transformations) across five network slice types. The LVRT methodology incorporates realistic operational impairments including market-driven class imbalance (9:1 ratio), multi-layer interference patterns, and systematic missing data reflecting authentic 6G deployment challenges. The experimental results revealed significant differences in algorithm behavior between the two preprocessing approaches. Under PRDA conditions, deep learning models achieved perfect accuracy (100% for CNN and FNN), while traditional algorithms ranged from 60.9% to 89.0%. However, LVRT results exposed dramatic performance variations, with accuracies spanning from 58.0% to 81.2%. Most significantly, we discovered that algorithms achieving excellent laboratory performance experience substantial degradation under realistic conditions, with CNNs showing an 18.8% accuracy loss (dropping from 100% to 81.2%), FNNs experiencing an 18.9% loss (declining from 100% to 81.1%), and Naive Bayes models suffering a 34.8% loss (falling from 89% to 58%). Conversely, SVM (RBF) and Logistic Regression demonstrated counter-intuitive resilience, improving by 14.1 and 10.3 percentage points, respectively, under operational stress, demonstrating superior adaptability to realistic network conditions. This study establishes a resilience-based classification framework enabling informed algorithm selection for diverse 6G deployment scenarios. Additionally, we introduce a comprehensive explainable artificial intelligence (XAI) framework using SHAP analysis to provide interpretable insights into algorithm decision-making processes. The XAI analysis reveals that Packet Loss Budget emerges as the dominant feature across all algorithms, while Slice Jitter and Slice Latency constitute secondary importance features. Cross-scenario interpretability consistency analysis demonstrates that CNN, LSTM, and Naive Bayes achieve perfect or near-perfect consistency scores (0.998–1.000), while SVM and Logistic Regression maintain high consistency (0.988–0.997), making them suitable for regulatory compliance scenarios. In contrast, XGBoost shows low consistency (0.106) despite high accuracy, requiring intensive monitoring for deployment. 
This research contributes essential insights for bridging the critical gap between algorithm development and deployment success in next-generation wireless networks, providing evidence-based guidelines for algorithm selection based on accuracy, resilience, and interpretability requirements. Our findings establish quantitative resilience boundaries: algorithms achieving >99% laboratory accuracy exhibit 58–81% performance under realistic conditions, with CNN and FNN maintaining the highest absolute accuracy (81.2% and 81.1%, respectively) despite experiencing significant degradation from laboratory conditions. Full article
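
The laboratory-versus-realistic gap quantified above can be illustrated on synthetic data: train once, then score the same model on a clean test set and on a copy degraded with feature noise and missing values. The sketch below uses scikit-learn on generated data and is not the authors' 6G slicing dataset, LVRT pipeline, or impairment model.

```python
# Illustrative sketch of a laboratory-vs-realistic evaluation gap on synthetic
# data (not the authors' dataset or LVRT transformations): one model, scored on
# a clean test set and on a degraded copy with noise and zero-imputed gaps.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# "Laboratory" conditions: clean test set
lab_acc = accuracy_score(y_te, model.predict(X_te))

# "Realistic" conditions: interference-like noise plus 10% missing values set to zero
X_real = X_te + rng.normal(scale=0.8, size=X_te.shape)
X_real[rng.random(X_real.shape) < 0.10] = 0.0
real_acc = accuracy_score(y_te, model.predict(X_real))

print(f"laboratory accuracy={lab_acc:.3f}, realistic accuracy={real_acc:.3f}")
```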

29 pages, 5349 KB  
Article
Novel Approach to Modeling Investor Decision-Making Using the Dual-Process Theory: Synthesizing Experimental Methods from Within-Subjects to Between-Subjects Designs
by Rachel Lipshits, Kelly Goldstein, Alon Goldstein, Ron Eichel and Ayelet Goldstein
Mathematics 2025, 13(19), 3090; https://doi.org/10.3390/math13193090 - 26 Sep 2025
Abstract
This paper addresses a central contradiction in dual-process theories of reasoning: identical tasks produce different outcomes under within-subjects and between-subjects experimental designs. Drawing on two prior studies that exemplify this divergence, we synthesize the empirical patterns into a unified theoretical account. We propose a conceptual framework in which the research design itself serves as a cognitive moderator, influencing the dominance of System 1 (intuitive) or System 2 (analytical) processing. To formalize this synthesis, we introduce a mathematical model that captures the functional relationship between methodological framing, cognitive system engagement, and decision accuracy. The model supports both forward prediction and Bayesian inference, offering a scalable foundation for future empirical calibration. This integration of experimental design and cognitive processing contributes to resolving theoretical ambiguity in dual-process research and opens avenues for predictive modeling of reasoning performance. By formalizing dual-process cognition through dynamic system analogies, this study contributes a continuous modeling approach to performance fluctuations under methodological asymmetry. Full article
(This article belongs to the Section E5: Financial Mathematics)
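
The abstract notes that the proposed model supports Bayesian inference over cognitive system engagement. As a purely illustrative toy (not the authors' formalism), the sketch below updates the posterior probability that analytical System 2 was engaged from a sequence of correct/incorrect responses; the prior and the per-system accuracy values are assumptions.

```python
# Toy two-hypothesis Bayesian update, for illustration only: infer whether
# analytical System 2 (vs. intuitive System 1) was engaged from observed
# correct/incorrect responses.  Prior and per-system accuracies are assumed.
def posterior_system2(responses, prior_s2=0.5, acc_s1=0.55, acc_s2=0.85):
    """responses: iterable of booleans (True = correct answer)."""
    p_s2, p_s1 = prior_s2, 1.0 - prior_s2
    for correct in responses:
        like_s2 = acc_s2 if correct else 1.0 - acc_s2
        like_s1 = acc_s1 if correct else 1.0 - acc_s1
        p_s2, p_s1 = p_s2 * like_s2, p_s1 * like_s1
        total = p_s2 + p_s1
        p_s2, p_s1 = p_s2 / total, p_s1 / total  # renormalize after each observation
    return p_s2

# A framing thought to favor analytical processing might justify a higher prior:
print(posterior_system2([True, True, False, True], prior_s2=0.7))
```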

19 pages, 4409 KB  
Article
Numerical and Experimental Research on the Effects of Hydrogen Injection Timing on the Performance of Hydrogen/N-Butanol Dual-Fuel Engine with Hydrogen Direct Injection
by Weiwei Shang, Xintong Shi, Zezhou Guo and Xiaoxue Xing
Energies 2025, 18(18), 4987; https://doi.org/10.3390/en18184987 - 19 Sep 2025
Viewed by 183
Abstract
Hydrogen injection timing (HIT) plays a crucial role in the combustion and emission characteristics of a hydrogen/n-butanol dual-fuel engine with hydrogen direct injection. This study employed an integrated approach combining three-dimensional simulation modeling and engine test bench experiments to investigate the effects of HIT on engine performance. To gain a more intuitive understanding of the physical and chemical processes involved, such as the stratification state and combustion status of hydrogen in the cylinder, and to explore the underlying mechanism by which hydrogen addition improves the performance of n-butanol engines, a numerical study was conducted to examine the effects of HIT on hydrogen stratification and combustion behavior. The simulation results demonstrated that, within the investigated range, a hydrogen injection time of 100 °CA BTDC produces an ideal hydrogen stratification state in the cylinder: a locally enriched hydrogen zone forms near the spark plug while a certain amount of hydrogen remains distributed throughout the cylinder. The combustion state also reaches its optimal level at this injection timing. Consequently, 100 °CA BTDC is identified as the optimal HIT for a hydrogen/n-butanol dual-fuel engine. In parallel, an experimental study was performed to capture the actual complex processes and comprehensively evaluate combustion and emission characteristics. The experimental results indicate that both dynamic performance (torque) and combustion characteristics (cylinder pressure, flame development period, etc.) achieve optimal values at the HIT of 100 °CA BTDC. Notably, under lean-burn conditions, the combustion parameters exhibit greater sensitivity to HIT. Regarding emissions, CO and HC emissions initially decreased slightly and then gradually increased as the injection timing was advanced. The 100 °CA BTDC timing effectively reduced CO emissions at λ = 0.9 and 1.0, while CO emissions at λ = 1.2 showed minimal sensitivity to injection timing variations. Therefore, optimized HIT facilitates enhanced combustion efficiency and emission performance in hydrogen-direct-injection n-butanol engines. Full article

16 pages, 2069 KB  
Article
“Can I Use My Leg Too?” Dancing with Uncertainty: Exploring Probabilistic Thinking Through Embodied Learning in a Jerusalem Art High School Classroom
by Dafna Efron and Alik Palatnik
Educ. Sci. 2025, 15(9), 1248; https://doi.org/10.3390/educsci15091248 - 18 Sep 2025
Viewed by 179
Abstract
Despite increased interest in embodied learning, the role of sensorimotor activity in shaping students’ probabilistic reasoning remains underexplored. This design-based study examines how high school students develop key probabilistic concepts, including sample space, certainty, and event probability, through whole-body movement activities situated in an authentic classroom setting. Grounded in embodied cognition theory, we introduce a two-axis interpretive framework. One axis spans sensorimotor exploration and formal reasoning, drawing from established continuums in the literature. The second axis, derived inductively from our analysis, contrasts engagement with distraction, foregrounding the affective and attentional dimensions of embodied participation. Students engaged in structured yet open-ended movement sequences that elicited intuitive insights. This approach, epitomized by one student’s spontaneous question, “Can I use my leg too?”, captures the agentive and improvisational character of the embodied learning environment. Through five analyzed classroom episodes, we trace how students shifted between bodily exploration and formalization, often through nonlinear trajectories shaped by play, uncertainty, and emotionally driven reflection. While moments of insight emerged organically, they were also fragile, as they were affected by ambiguity and the difficulty in translating physical actions into mathematical language. Our findings underscore the pedagogical potential of embodied design for probabilistic learning while also highlighting the need for responsive teaching that balances structure with improvisation and supports affective integration throughout the learning process. Full article

31 pages, 19437 KB  
Interesting Images
Fringes, Flows, and Fractures—A Schlieren Study of Fluid and Optical Discontinuities
by Emilia Georgiana Prisăcariu, Raluca Andreea Roșu and Valeriu Drăgan
Fluids 2025, 10(9), 243; https://doi.org/10.3390/fluids10090243 - 16 Sep 2025
Viewed by 305
Abstract
This article presents a collection of schlieren visualizations captured using a custom-built, laboratory-based imaging system, designed to explore a wide range of flow and refractive phenomena. The experiments were conducted as a series of observational case studies, serving as educational bloc notes for students and researchers working in fluid mechanics, optics, and high-speed imaging. High-resolution images illustrate various phenomena including shockwave propagation from bursting balloons, vapor plume formation from volatile liquids, optical surface imperfections in transparent materials, and the dynamic collapse of soap bubbles. Each image is accompanied by brief experimental context and interpretation, highlighting the physical principles revealed through the schlieren technique. The resulting collection emphasizes the accessibility of flow visualization in a teaching laboratory, and its value in making invisible physical processes intuitively understandable. Full article
(This article belongs to the Special Issue Physical and Chemical Phenomena in High-Speed Flows)

12 pages, 965 KB  
Article
SeismicNoiseAnalyzer: A Deep-Learning Tool for Automatic Quality Control of Seismic Stations
by Alessandro Pignatelli, Paolo Casale, Veronica Vignoli and Flavia Tavani
Computers 2025, 14(9), 392; https://doi.org/10.3390/computers14090392 - 16 Sep 2025
Viewed by 288
Abstract
SeismicNoiseAnalyzer 1.0 is a software tool designed to automatically assess the quality of seismic stations through the classification of spectral diagrams. By leveraging convolutional neural networks trained on expert-labeled data, the software emulates human visual inspection of probability density function (PDF) plots. It supports both individual image analysis and batch processing from compressed archives, providing detailed reports that summarize station health. Two classification networks are available: a binary model that distinguishes between working and malfunctioning stations and a ternary model that introduces an intermediate “doubtful” category to capture ambiguous cases. The system demonstrates high agreement with expert evaluations and enables efficient instrumentation control across large seismic networks. Its intuitive graphical interface and automated workflow make it a valuable tool for routine monitoring and data validation. Full article
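
As a rough sketch of the kind of classifier described above (not the SeismicNoiseAnalyzer architecture), the snippet below defines a small convolutional network whose output size switches between the binary (working/malfunctioning) and ternary (adding a "doubtful" class) settings. The input resolution and layer widths are assumptions.

```python
# Minimal sketch (not the SeismicNoiseAnalyzer model): a small CNN that maps a
# PDF-plot image to n_classes outputs, with n_classes = 2 for the binary model
# and n_classes = 3 when the "doubtful" category is added.  Image size and
# layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class PdfPlotClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.head(self.features(x))

binary_model = PdfPlotClassifier(n_classes=2)
ternary_model = PdfPlotClassifier(n_classes=3)
logits = ternary_model(torch.randn(1, 3, 224, 224))  # one RGB PDF plot, assumed 224x224
print(logits.shape)  # torch.Size([1, 3])
```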

15 pages, 3354 KB  
Article
CAFM-Enhanced YOLOv8: A Two-Stage Optimization for Precise Strawberry Disease Detection in Complex Field Conditions
by Hua Li, Jixing Liu, Ke Han and Xiaobo Cai
Appl. Sci. 2025, 15(18), 10025; https://doi.org/10.3390/app151810025 - 13 Sep 2025
Viewed by 210
Abstract
Strawberry is an important global economic crop, and its disease prevention and control directly affect yield and quality. Traditional detection approaches rely on manual observation or classical machine learning algorithms, which suffer from low efficiency, high false detection rates, and insufficient adaptability to tiny disease spots and complex environments. To solve these problems, this study proposes a strawberry disease recognition method based on improved YOLOv8. By systematically acquiring 3146 images covering seven types of typical diseases, such as gray mold and powdery mildew, a high-quality dataset containing different disease stages and complex backgrounds was constructed. To address the difficulties in disease detection, the YOLOv8 model is optimized in two stages: first, an ultra-small-scale detection head (32 × 32) is introduced to enhance the model’s ability to capture early tiny spots; second, a convolution and attention fusion module (CAFM) is incorporated to enhance feature robustness in complex field scenes through the synergy of local feature extraction and global information focusing. Experiments show that the mAP50 of the improved model reaches 0.96, outperforming mainstream algorithms such as YOLOv5 and Faster R-CNN in both recall and F1 score. In addition, the interactive system developed with the PyQT5 framework can process images, videos, and camera inputs in real time, presenting disease areas intuitively through visualized bounding boxes and category labels and providing farmers with a lightweight, low-threshold field management tool. This study not only verifies the effectiveness of the improved algorithm but also provides a practical reference for the engineering application of deep learning in agricultural scenarios, and it is expected to promote the further implementation of precision agriculture technology. Full article
(This article belongs to the Section Agricultural Science and Technology)
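
For orientation, the snippet below runs inference with the stock Ultralytics YOLOv8 API on which the improved detector is built. The CAFM-enhanced weights are not reproduced here, so a standard pretrained checkpoint and a hypothetical field image stand in purely for illustration.

```python
# Minimal inference sketch with the stock Ultralytics YOLOv8 API (not the
# paper's CAFM-enhanced model).  The weight file and image path are assumed
# stand-ins for illustration only.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")               # standard pretrained checkpoint, not the improved model
results = model("strawberry_leaf.jpg")   # hypothetical field image

for box in results[0].boxes:
    cls_id = int(box.cls[0])
    conf = float(box.conf[0])
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{model.names[cls_id]}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```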

8 pages, 1158 KB  
Proceeding Paper
Recoloring Cartoon Images Based on Palette Mapping Using K-Means Clustering and Gradient Analysis
by Alun Sujjada, Mochamad Rizky Fauzi, Abrar Ramadava Algadri Suriawan and Dilfa Mahmood Suhaimi
Eng. Proc. 2025, 107(1), 82; https://doi.org/10.3390/engproc2025107082 - 9 Sep 2025
Viewed by 165
Abstract
This study introduces a palette-based method for the recolorization of cartoon images by combining the k-means clustering algorithm and gradient analysis. The method aims to preserve the visual identity of the original image while allowing flexibility in color manipulation. By segmenting colors using k-means clustering, the approach produces a representative color palette reflecting the dominant colors in the image. Gradient analysis is then applied to maintain smooth color transitions and aesthetic consistency. This palette serves as the basis for the recolorization process, enabling intuitive color adjustments without disrupting the visual structure of the image. Experimental results demonstrate that this method can produce recolorized images with high visual quality, preserve original image details, and provide users with greater control over the resulting colors. Full article
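
The palette-extraction step described above follows a standard pattern: cluster the image's pixels with k-means, treat the cluster centers as the palette, and recolor by editing palette entries. The sketch below shows only that baseline step (the paper's gradient analysis for smooth transitions is not reproduced); the file names and cluster count are assumptions.

```python
# Baseline sketch of k-means palette extraction and palette-swap recoloring
# (not the authors' full pipeline, which adds gradient analysis).  File names,
# cluster count, and the target color are illustrative assumptions.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = np.asarray(Image.open("cartoon.png").convert("RGB"), dtype=np.float32)
pixels = img.reshape(-1, 3)

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(pixels)
palette = kmeans.cluster_centers_        # six dominant colors
labels = kmeans.labels_                  # palette index of every pixel

# Recolor: replace one palette entry with a new target color, keep the rest.
new_palette = palette.copy()
new_palette[0] = np.array([200.0, 60.0, 60.0])   # assumed target color (RGB)
recolored = new_palette[labels].reshape(img.shape).astype(np.uint8)
Image.fromarray(recolored).save("cartoon_recolored.png")
```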

20 pages, 4920 KB  
Article
A Complete Neural Network-Based Representation of High-Dimension Convolutional Neural Networks
by Ray-Ming Chen
Mathematics 2025, 13(17), 2903; https://doi.org/10.3390/math13172903 - 8 Sep 2025
Viewed by 296
Abstract
Convolutional Neural Networks (CNNs) are a widely used machine learning architecture across many fields. Typical descriptions of CNNs are based on low-dimensional, tensor representations of the feature extraction part. In this article, we extend the setting of CNNs to arbitrary dimension and linearize the whole setting via typical layers of neurons. In essence, a partial and a full network together construct the entire process of a standard CNN, with the partial network used to linearize the feature extraction. By doing so, we link the tensor-style representation of CNNs with the pure network representation. The outcomes serve two main purposes: to relate CNNs to other machine learning frameworks and to facilitate intuitive representations. Full article
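
The linearization idea above can already be seen in the simplest one-dimensional case: a convolution layer is an ordinary layer of neurons whose weight matrix is the kernel unrolled into shifted rows. The sketch below verifies this equivalence numerically with assumed sizes and values; the paper's construction for arbitrary dimensions is more general.

```python
# Sketch of the linearization idea in the 1-D case (illustrative sizes/values):
# a convolution layer can be written as a fully connected layer whose weight
# matrix is the unrolled (Toeplitz-style) form of the kernel.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # input "feature map"
k = np.array([0.5, -1.0, 0.25])           # convolution kernel, valid padding

# Tensor-style convolution (correlation form, as used by CNN frameworks)
conv = np.array([x[i:i + k.size] @ k for i in range(x.size - k.size + 1)])

# Equivalent dense-layer weight matrix: each row is the kernel shifted by one step
W = np.zeros((x.size - k.size + 1, x.size))
for i in range(W.shape[0]):
    W[i, i:i + k.size] = k
dense = W @ x

print(np.allclose(conv, dense))  # True: the conv layer is a linear (neuron) layer
```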

16 pages, 2107 KB  
Article
SMS and Telephone Communication as Tools to Reduce Missed Medical Appointments
by Michał Brancewicz, Marlena Robakowska, Marcin Śliwiński and Dariusz Rystwej
Appl. Sci. 2025, 15(17), 9773; https://doi.org/10.3390/app15179773 - 5 Sep 2025
Cited by 1 | Viewed by 771
Abstract
The aim of this study was to analyze the effectiveness of implementing an automated appointment confirmation system in a mental health clinic and to assess its impact on patient attendance, which may indirectly support the patient recovery process. The study was conducted at a mental health outpatient clinic in Gdańsk, Poland, and focused on medical appointments across three affiliated outpatient units. Data from 2019 and 2023 were compared, focusing particularly on the rate of missed appointments (relationship between number of visits that did not take place and total number of visits that were scheduled in the software), form return rates (the relationship between the number of forms returned by patients and the total number sent), and patient opinions regarding the usability of the new system. The results showed a significant reduction in no-show rates—from 18.55% to 7.01%—confirming the high effectiveness of the automated system. The form return rate reached 55.41%, with the highest engagement observed among individuals aged 35–44. Patient evaluation of the system was highly positive—over 93% found it intuitive and meeting their expectations. A proprietary software solution developed in Python, alongside databases and Microsoft Office Access/Excel tools, was used for data collection and analysis. The study demonstrated that a comprehensive approach, combining automated reminders with the ability for quick patient response and telephone support, is an effective tool for improving the accessibility and quality of healthcare services. The analysis also considered limitations related to digital barriers and identified directions for further research, including studies on how patient abstention from appointments affects their recovery process. Full article
(This article belongs to the Section Biomedical Engineering)

27 pages, 30539 KB  
Article
Priori Knowledge Makes Low-Light Image Enhancement More Reasonable
by Zefei Chen, Yongjie Lin, Jianmin Xu, Kai Lu and Zihao Huang
Sensors 2025, 25(17), 5521; https://doi.org/10.3390/s25175521 - 4 Sep 2025
Viewed by 989
Abstract
This paper presents a priori knowledge-based low-light image enhancement framework, termed Priori DCE (Priori Deep Curve Estimation). The priori knowledge consists of two key aspects: (1) enhancing a low-light image is an ill-posed task, as the brightness of the enhanced image corresponding to a low-light image is uncertain. To resolve this issue, we incorporate priori channels into the model to guide the brightness of the enhanced image; (2) during the enhancement of a low-light image, the brightness of pixels may increase or decrease. This paper explores the probability of a pixel’s brightness increasing/decreasing as its prior enhancement/suppression probability. Intuitively, pixels with higher brightness should have a higher priori suppression probability, while pixels with lower brightness should have a higher priori enhancement probability. Inspired by this, we propose an enhancement function that adaptively adjusts the priori enhancement probability based on variations in pixel brightness. In addition, we propose the Global-Attention Block (GA Block). The GA Block ensures that, during the low-light image enhancement process, each pixel in the enhanced image is computed based on all the pixels in the low-light image. This approach facilitates interactions between all pixels in the enhanced image, thereby achieving visual balance. The experimental results on the LOLv2-Synthetic dataset demonstrate that Priori DCE has a significant advantage. Specifically, compared to the SOTA Retinexformer, the Priori DCE improves the PSNR index and SSIM index from 25.67 and 92.82 to 29.49 and 93.6, respectively, while the NIQE index decreases from 3.94 to 3.91. Full article
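
The "Deep Curve Estimation" family that this framework's name echoes (e.g., Zero-DCE) enhances images by applying a pixel-wise quadratic curve LE(I) = I + α·I·(1 − I). The sketch below applies that generic curve with a scalar α as a point of reference only; it is not Priori DCE, which instead learns curve parameters guided by the priori channels and the GA Block.

```python
# Illustrative sketch of the generic DCE-family quadratic enhancement curve
# LE(I) = I + alpha * I * (1 - I).  The scalar alpha and iteration count are
# assumptions; the paper learns curve parameters per pixel instead.
import numpy as np

def enhance(image: np.ndarray, alpha: float = 0.6, iterations: int = 4) -> np.ndarray:
    """image: float array in [0, 1]; positive alpha brightens, negative suppresses."""
    out = image.astype(np.float64)
    for _ in range(iterations):
        out = out + alpha * out * (1.0 - out)   # stays in [0, 1] for alpha in [-1, 1]
    return out

low_light = np.random.default_rng(0).uniform(0.0, 0.3, size=(4, 4))  # toy dark patch
print(enhance(low_light).round(3))
```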

22 pages, 9741 KB  
Article
Augminded: Ambient Mirror Display Notifications
by Timo Götzelmann, Pascal Karg and Mareike Müller
Multimodal Technol. Interact. 2025, 9(9), 93; https://doi.org/10.3390/mti9090093 - 4 Sep 2025
Viewed by 406
Abstract
This paper presents a new approach for providing contextual information in real-world environments. Our approach is deliberately designed to be low-threshold: by using mirrors as augmented reality surfaces, the user does not have to wear or hold devices such as AR glasses or smartphones. It enables technical and non-technical objects in the environment to be visually highlighted, subtly drawing the attention of people passing by. The presented technology provides information that users can view in more detail, if they wish, by slowing down their movement; they can decide whether it is relevant to them or not. A prototype system was implemented and evaluated through a user study. The results show a high level of acceptance and intuitive usability of the system, with participants able to reliably perceive and process the information displayed. The technology thus offers promising potential for the unobtrusive and context-sensitive provision of information in various application areas. The paper discusses limitations of the system and outlines future research directions to further optimize the technology and extend its applicability. Full article
