Search Results (4)

Search Parameters:
Keywords = sticker pattern

15 pages, 6874 KiB  
Article
Automated Image-Based Wound Area Assessment in Outpatient Clinics Using Computer-Aided Methods: A Development and Validation Study
by Kuan-Chen Li, Ying-Han Lee and Yu-Hsien Lin
Medicina 2025, 61(6), 1099; https://doi.org/10.3390/medicina61061099 - 17 Jun 2025
Viewed by 573
Abstract
Background and Objectives: Traditionally, wound size is evaluated by placing Opsite Flexigrid transparent film dressing over the wound, tracing its edges, and calculating the area. This method is time-consuming and subjective, often yielding different results depending on who performs the assessment. This study aims to develop an objective method for calculating wound size, addressing variations in photo-taking distance across different medical personnel and time points, factors that can compromise measurement accuracy. By improving consistency and reducing manual workload, the approach also seeks to enhance the efficiency of healthcare providers. Materials and Methods: We applied K-means clustering for wound segmentation and used a QR code as a spatial reference to analyze wound photos taken by different medical practitioners in the outpatient consulting room. K-means clustering is a machine learning algorithm that groups pixels in an image according to their color similarity, allowing us to identify the wound region and determine its pixel area. The QR code printed on the patient's identification sticker served as a length reference: by calculating the ratio of the pixel count within the QR code's square area to its known physical area, and applying this ratio to the detected wound pixel area, we obtained the wound's actual size. Because the printed identification stickers are uniform in size and format, the method applies consistently to every patient. Images of the same wounds captured at varying distances were analyzed, and the areas of 40 wounds were objectively quantified. Results: The accuracy of the algorithm was supported by tests on a standard one-cent coin. Paired t-tests comparing the three photographs of each wound yielded p-values of 0.370 (first vs. second), 0.179 (first vs. third), and 0.547 (second vs. third). Since all p-values exceed 0.05, none of the pairs differ significantly, suggesting that the three randomly taken photographs produce consistent results and can be considered equivalent. Conclusions: Our algorithm for wound area assessment is highly reliable, interchangeable across photographs, and consistently produces accurate results. This objective and practical method can aid clinical decision-making by tracking wound progression over time.
(This article belongs to the Section Surgery)
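To make the measurement pipeline concrete, here is a minimal sketch in Python, assuming OpenCV and scikit-learn. The QR code side length, the cluster count, and the redness heuristic for selecting the wound cluster are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

QR_SIDE_CM = 2.0  # assumed physical side of the printed QR code (illustrative)

def wound_area_cm2(image_path, n_clusters=3):
    """Estimate wound area from a photo that also shows the patient's QR sticker."""
    img = cv2.imread(image_path)

    # Locate the QR code and derive a pixel-to-area scale from its known size.
    ok, corners = cv2.QRCodeDetector().detect(img)
    if not ok:
        raise ValueError("QR code not found in image")
    qr_area_px = cv2.contourArea(corners.reshape(-1, 2).astype(np.float32))
    cm2_per_px = QR_SIDE_CM ** 2 / qr_area_px

    # Group pixels by color similarity; treat the reddest cluster as the wound.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(lab)
    wound_cluster = int(np.argmax(km.cluster_centers_[:, 1]))  # a* channel: redness
    wound_px = int(np.sum(km.labels_ == wound_cluster))

    # Scale the wound's pixel count to physical units.
    return wound_px * cm2_per_px
```

Because the scale is recovered from the sticker in the same frame, the estimate is in principle invariant to the distance at which the photo was taken, which is the property the authors verify with repeated shots.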

20 pages, 4300 KiB  
Article
AdvRain: Adversarial Raindrops to Attack Camera-Based Smart Vision Systems
by Amira Guesmi, Muhammad Abdullah Hanif and Muhammad Shafique
Information 2023, 14(12), 634; https://doi.org/10.3390/info14120634 - 28 Nov 2023
Cited by 7 | Viewed by 3258
Abstract
Vision-based perception modules are increasingly deployed in many applications, especially autonomous vehicles and intelligent robots. These modules are used to acquire information about the surroundings and identify obstacles, so accurate detection and classification are essential for reaching appropriate decisions and taking appropriate, safe actions at all times. Current studies have demonstrated that "printed adversarial attacks", known as physical adversarial attacks, can successfully mislead perception models such as object detectors and image classifiers. However, most of these physical attacks rely on noticeable, eye-catching perturbation patterns, making them identifiable by the human eye, in field tests, or in test drives. In this paper, we propose a camera-based inconspicuous adversarial attack (AdvRain) capable of fooling camera-based perception systems over all objects of the same class. Unlike mask-based FakeWeather attacks that require access to the underlying computing hardware or image memory, our attack emulates the effect of a natural weather condition (i.e., raindrops) printed on a translucent sticker, which is placed externally over the camera lens whenever an adversary plans to trigger an attack. Such perturbations remain inconspicuous in real-world deployments, and their presence goes unnoticed due to their association with a natural phenomenon. To accomplish this, we develop an iterative random-search process that identifies critical raindrop positions, ensuring that the applied transformation is adversarial for a target classifier. The transformation blurs predefined parts of the captured image corresponding to the areas covered by the raindrops. Using only 20 raindrops, we achieve a drop in average model accuracy of more than 45% on VGG19 for the ImageNet dataset and more than 40% on ResNet34 for the Caltech-101 dataset.
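The attack loop can be sketched as follows in Python with OpenCV. Here `predict_proba` stands in for any target classifier that returns class probabilities, and the drop radius, blur kernel, and iteration budget are illustrative choices, not the paper's exact settings.

```python
import cv2
import numpy as np

def apply_raindrops(img, centers, radius=18, ksize=31):
    """Blur circular patches of the image to emulate raindrops on the lens."""
    blurred = cv2.GaussianBlur(img, (ksize, ksize), 0)
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    for x, y in centers:
        cv2.circle(mask, (int(x), int(y)), radius, 255, -1)
    out = img.copy()
    out[mask > 0] = blurred[mask > 0]  # copy blurred pixels inside each drop
    return out

def random_search_attack(img, true_label, predict_proba, n_drops=20, iters=200, seed=None):
    """Random search over raindrop positions that minimizes the true-class score."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    best_centers, best_score = None, 1.0
    for _ in range(iters):
        centers = rng.uniform([0, 0], [w, h], size=(n_drops, 2))
        score = predict_proba(apply_raindrops(img, centers))[true_label]
        if score < best_score:  # keep the most adversarial configuration so far
            best_centers, best_score = centers, score
    return apply_raindrops(img, best_centers), best_score
```

The winning raindrop layout would then be printed on the translucent sticker; the search itself only needs query access to the classifier's output scores.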

16 pages, 766 KiB  
Article
A Comprehensive Framework for Industrial Sticker Information Recognition Using Advanced OCR and Object Detection Techniques
by Gabriella Monteiro, Leonardo Camelo, Gustavo Aquino, Rubens de A. Fernandes, Raimundo Gomes, André Printes, Israel Torné, Heitor Silva, Jozias Oliveira and Carlos Figueiredo
Appl. Sci. 2023, 13(12), 7320; https://doi.org/10.3390/app13127320 - 20 Jun 2023
Cited by 16 | Viewed by 4239
Abstract
Recent advancements in Artificial Intelligence (AI), deep learning (DL), and computer vision have revolutionized various industrial processes through image classification and object detection. State-of-the-art Optical Character Recognition (OCR) and object detection (OD) technologies, such as YOLO and PaddleOCR, have emerged as powerful solutions for recognizing textual and non-textual information on printed stickers. However, a well-established framework integrating these cutting-edge technologies for industrial applications is still lacking. In this paper, we propose an innovative framework that combines advanced OCR and OD techniques to automate visual inspection processes in an industrial context. Our primary contribution is a comprehensive framework adept at detecting and recognizing textual and non-textual information on printed stickers within a company, harnessing the latest AI tools and technologies for sticker information recognition. Our experiments reveal an overall macro accuracy of 0.88 for sticker OCR across three distinct patterns. Furthermore, the proposed system goes beyond traditional Printed Character Recognition (PCR) by extracting supplementary information, such as barcodes and QR codes present in the image, significantly streamlining industrial workflows and minimizing manual labor demands.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition Based on Deep Learning)
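A plausible skeleton for such a two-stage pipeline in Python, assuming an ultralytics YOLO model fine-tuned to localize sticker fields and PaddleOCR to read each crop; the weights file and field labels are hypothetical, and the exact integration in the paper may differ.

```python
from ultralytics import YOLO
from paddleocr import PaddleOCR

detector = YOLO("sticker_fields.pt")  # hypothetical weights fine-tuned on sticker layouts
reader = PaddleOCR(use_angle_cls=True, lang="en")

def read_sticker(image_path):
    """Detect labeled fields on a sticker, then OCR each cropped region."""
    result = detector(image_path)[0]
    fields = {}
    for box in result.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])   # detected field bounding box
        label = result.names[int(box.cls)]        # e.g. "serial", "lot" (illustrative)
        crop = result.orig_img[y1:y2, x1:x2]
        lines = reader.ocr(crop, cls=True)[0] or []
        fields[label] = " ".join(item[1][0] for item in lines)
    return fields

# e.g. read_sticker("sticker.jpg") -> {"serial": "A1B2C3", ...} (illustrative output)
```

Barcodes and QR codes detected by the OD stage would be routed to a dedicated decoder rather than the OCR reader, which is what lets the system extract more than printed characters alone.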

30 pages, 2827 KiB  
Article
Influence of User Preferences on the Revealed Utility Factor of Plug-In Hybrid Electric Vehicles
by Seshadri Srinivasa Raghavan and Gil Tal
World Electr. Veh. J. 2020, 11(1), 6; https://doi.org/10.3390/wevj11010006 - 22 Dec 2019
Cited by 18 | Viewed by 5196
Abstract
Plug-in hybrid electric vehicles (PHEVs) are an effective intermediate vehicle technology option in the long-term transition pathway towards light-duty vehicle electrification. Their net environmental impact is evaluated using the Utility Factor (UF), a performance metric that quantifies the fraction of vehicle miles traveled (VMT) on electricity. There are concerns about the gap between the Environmental Protection Agency (EPA) sticker-label UF and the real-world UF, because test cycles cannot fully represent actual driving conditions and the assumed driving and charging behavior differs from actual usage patterns. Using multi-year longitudinal data from 153 PHEVs (11–53 miles all-electric range) in California, this paper systematically evaluates how observed driving and charging, energy consumption, and UF differ from sticker-label expectations. Principal Components Analysis and regression model results indicated that the UF of short-range PHEVs (less than 20-mile range) was lower than label expectations, mainly due to higher annual VMT and high-speed driving. Long-distance travel and high-speed driving were the major reasons for the lower UF of longer-range PHEVs (at least 35-mile range) compared to label values. Enhancing charging infrastructure access at both home and away locations and increasing the frequency of home charging improve the UF of short-range and longer-range PHEVs, respectively.
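For reference, the revealed UF is simply electric miles over total miles. A minimal sketch in Python, with a trip-log structure that is assumed for illustration rather than taken from the paper's dataset:

```python
def revealed_utility_factor(trips):
    """Fraction of total vehicle miles traveled that were driven on electricity.

    `trips` is an iterable of (electric_miles, total_miles) pairs, one per trip;
    this structure is illustrative, not the study's actual data format.
    """
    e_vmt = sum(e for e, _ in trips)
    vmt = sum(t for _, t in trips)
    return e_vmt / vmt if vmt else 0.0

# Example: 60 of 100 total miles on electricity -> UF = 0.6
print(revealed_utility_factor([(25, 30), (35, 70)]))
```

The label UF assumes standardized daily driving distances, so any shift toward longer or faster trips than the test cycle assumes pushes the revealed UF below the sticker value.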
