1. Introduction
With the growing global attention to product quality and safety, transparent packaging containers (such as ampoules, reagent bottles, and plastic bottles) are being used increasingly in key industries including pharmaceuticals, food, and cosmetics. The specific demands of these industries require transparent containers not only to provide good sealing and safety but also to ensure product traceability and market acceptance. In the medical field, transparent packaging containers are essential components of drugs and medical devices, and their quality directly affects treatment outcomes and medication safety. The World Health Organization (WHO), in its Guidelines on Packaging for Pharmaceutical Products, clearly states that transparent containers help ensure the stability and safety of medicines while allowing healthcare professionals and patients to observe the condition of the drug [1]. Studies have shown that glass ampoules offer excellent chemical stability and interact minimally with drugs, making them widely used in pharmaceutical packaging [2]. In the food industry, transparent packaging has become an important design trend owing to its advantages in product display and information delivery, which help enhance consumer trust and sense of safety [3,4]. In the cosmetics industry, transparent containers are often made from chemically inert glass, which helps maintain the stability and effectiveness of the product while enhancing its visual quality [5]. These containers not only protect the formulation but also improve the product’s display appeal, making it more attractive and competitive in the market. In summary, transparent packaging containers play a vital role in the pharmaceutical, food, and cosmetics industries: whether protecting the safety of drugs and food or enhancing the market competitiveness of cosmetics, their quality directly affects overall product performance and consumer acceptance.
In modern industrial manufacturing of transparent packaging containers, achieving high-precision and high-efficiency quality inspection has become a key direction in the development of intelligent manufacturing. This is especially important in industries such as pharmaceuticals, food, and cosmetics, where packaging quality is critical. Transparent containers not only serve the basic function of protecting the contents, but their structural reliability and safety also directly affect product stability, market acceptance, and user safety. For example, transparent containers like ampoules, reagent bottles, and plastic bottles may experience breakage during use due to manufacturing defects, dimensional deviations, or abnormal residual stress, which can lead to quality issues such as drug leakage, contamination risks, or filling failures. Therefore, building a systematic and integrated inspection and analysis mechanism is essential for improving the overall quality and reliability of these containers. As highlighted in the Fall 2024 cover story of Vision Spectra, “Vision makes quick work of bottle inspection,” industrial vision is becoming a core enabling technology for bottle inspection [6].
It is important to note that stress detection, dimensional measurement, and defect recognition are not independent tasks in practical applications, but closely related physical processes. Traditional inspection procedures often rely on separate systems to perform each task individually, which leads to high costs, complex workflows, and fragmented information, ultimately limiting overall inspection efficiency and accuracy. Recent studies have shown significant interconnections and coupling among these three tasks. For example, reference [7] used finite element analysis to reveal that thin-walled glass containers like ampoules tend to develop stress concentrations in structural transition zones, areas that are not only potential starting points for microcrack propagation but may also affect local dimensional stability. Similarly, studies on polymer bottles [8,9] have shown that environmental stress cracking (ESC) often initiates from small defect areas and evolves along stress gradients, indicating a strong coupling between defects and internal stress. Dimensional measurement is also affected by defects: surface damage may alter the physical contour, interfering with the extraction of measurement boundaries, while image noise and grayscale variation can degrade the stability and accuracy of edge detection algorithms. However, most existing studies focus on optimizing individual detection modules and lack system-level exploration of coordinated inspection of stress, dimension, and defects within a unified platform. Especially under resource-constrained conditions, achieving coordinated inspection with balanced accuracy, efficiency, and structural compactness remains a key challenge.
Therefore, future inspection systems should gradually shift from traditional functionally separated architectures to an integrated and multi-task coordinated inspection framework that combines defect, stress, and dimensional inspection. By fusing perception mechanisms with information processing chains, such systems can not only improve detection efficiency and stability, but also enable global modeling of container health conditions, thereby promoting the implementation of predictive maintenance and intelligent quality control.
In the field of stress detection, photoelasticity is one of the mainstream techniques, enabling visualization and quantitative analysis of residual stress based on the birefringence of materials. Errapart et al. achieved accurate measurement of residual stress distribution in non-axisymmetric glass containers using the principles of photoelasticity, demonstrating the feasibility and effectiveness of this method for inspecting glass products with complex shapes [10]. To enhance sensitivity, several emerging technologies have been developed in recent years, such as the scattered-light polariscope (SCALP) technique [11], quantum polarization imaging systems [12], and dual photoelastic modulator-based frequency-difference modulation methods [13]. However, these advanced methods often involve complex systems and high equipment costs, limiting their practical deployment on production lines. Efferz et al. [14], in their study on edge stress measurement of architectural tempered glass, also pointed out that current systems still face adaptability bottlenecks in industrial environments, indicating the need for efficient and low-cost inspection technologies. Against this background, the Senarmont compensation method [15], known for its simple structure, ease of operation, and low cost, remains widely used for stress detection in transparent containers and is especially suitable for frequent sampling inspections under resource-constrained conditions. We therefore selected the Senarmont compensation method as the most appropriate approach for stress measurement in this work.
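For concreteness, the Senarmont relation between analyzer rotation and retardation can be sketched as follows. The wavelength, stress-optic coefficient, and wall thickness below are illustrative assumptions (typical soda-lime-glass values), not the parameters of our instrument.

```python
# Senarmont compensation, hedged sketch: the analyzer rotation angle theta
# (degrees) maps to optical path difference via delta = theta * lambda / 180,
# and retardation maps to stress via the stress-optic law sigma = delta / (C*t).

def optical_path_difference(theta_deg: float, wavelength_nm: float = 546.1) -> float:
    """Retardation (nm) from the analyzer rotation angle (Senarmont relation)."""
    return theta_deg * wavelength_nm / 180.0

def residual_stress(delta_nm: float, stress_optic_coeff: float = 2.6e-12,
                    thickness_mm: float = 1.0) -> float:
    """Stress (MPa) from retardation; C in Pa^-1 (~2.6 Brewster, an assumed
    typical value for soda-lime glass), t is the optical path length."""
    delta_m = delta_nm * 1e-9
    t_m = thickness_mm * 1e-3
    return delta_m / (stress_optic_coeff * t_m) / 1e6  # Pa -> MPa

delta = optical_path_difference(30.0)  # 30 degrees of analyzer rotation
sigma = residual_stress(delta)
print(round(delta, 2), round(sigma, 2))  # prints: 91.02 35.01
```

The linearity of this relation is what makes the method attractive for fast sampling: one angle reading converts directly to retardation without iterative compensation.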
In the area of dimensional measurement, machine vision offers a non-contact and automated solution with high precision. Li [16] developed a system for measuring the dimensions of shaft parts based on an improved single-pixel edge detection method. Miao [17] achieved online dimensional measurement of disk-shaped parts through camera calibration and geometric fitting. Zhou and Hartman [18] proposed a cost-effective vision inspection system for measuring key dimensions of plastic bottles, using intelligent image processing techniques to achieve high measurement accuracy. Eshkevari et al. [19] designed a glass bottle inspection approach for pharmaceutical use, which improved the accuracy of complex boundary detection through image segmentation. These studies demonstrate that machine vision has great potential in the dimensional inspection of transparent containers, especially in achieving high precision and automation.
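The core arithmetic shared by such systems is a calibrated pixel-to-millimeter conversion applied to extracted edge positions. The sketch below illustrates this under assumed numbers; the function names and values are ours, not taken from the cited systems.

```python
# Hedged sketch of vision-based dimensional measurement: calibrate a mm-per-
# pixel scale from a reference target of known size, then convert an
# edge-to-edge pixel span (possibly subpixel) into a physical dimension.

def calibrate_scale(ref_size_mm: float, ref_size_px: float) -> float:
    """mm-per-pixel scale factor from a calibration target of known size."""
    return ref_size_mm / ref_size_px

def measure_mm(left_edge_px: float, right_edge_px: float, scale: float) -> float:
    """Physical span between two detected edge locations."""
    return (right_edge_px - left_edge_px) * scale

scale = calibrate_scale(10.0, 500.0)       # a 10 mm target spans 500 px -> 0.02 mm/px
diameter = measure_mm(312.4, 1147.9, scale)  # subpixel edge columns (illustrative)
print(round(diameter, 3))  # prints: 16.71
```

With a scale of 0.02 mm/px, a half-pixel edge-localization error corresponds to 0.01 mm, which is why subpixel edge detection matters for the ±0.2 mm accuracy class targeted here.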
In the field of defect detection, traditional methods such as rule-based template matching, edge detection, and statistical thresholding were widely used in early applications. For example, Zhou et al. [20] proposed an automated glass bottle bottom inspection system based on machine vision, using saliency detection and template matching to identify bottom defects. Yang et al. [21] applied Halcon software with threshold segmentation and edge detection techniques to achieve high-precision, high-speed, and stable detection of ampoule bottle mouth defects. However, due to the high reflectivity, complex lighting conditions, and curved surfaces of transparent materials, these rule-based methods face challenges and have gradually shown limitations such as poor robustness and weak generalization. To overcome these issues, deep learning methods have rapidly emerged in industrial vision. Models based on convolutional neural networks (CNNs) can automatically extract image features, significantly improving the ability to recognize diverse types of defects. For instance, Kazmi [22] proposed a deep CNN-based framework for visual inspection of plastic bottles, demonstrating high accuracy and low resource consumption. Claypo [23] combined CNN-LSTM with instance-based classification to further improve the accuracy and efficiency of surface defect detection in glass bottles. These studies highlight the strong potential of deep learning in complex industrial environments. It is worth noting that in real industrial production, defect detection systems must not only achieve high accuracy but also meet requirements for real-time performance and high throughput. To address this, the YOLO (You Only Look Once) family of algorithms [24], as a typical single-stage object detection approach, has gained significant attention for its end-to-end structure, high parallelism, and millisecond-level detection speed. By balancing detection accuracy and real-time performance, YOLO has become an increasingly popular solution in industrial defect detection scenarios.
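A defining post-processing step of such single-stage detectors is IoU-based non-maximum suppression, which collapses overlapping candidate boxes into one detection per defect. The pure-Python sketch below illustrates this step generically; it is not the platform's actual pipeline.

```python
# Illustrative sketch of the IoU + greedy NMS step used by YOLO-style
# single-stage detectors (generic, not the paper's implementation).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop overlapping duplicates."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # prints: [0, 2] -- the duplicate box is suppressed
```

The same IoU threshold (0.5) also defines the matching criterion behind the mAP@0.5 metric reported later for the defect module.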
Although significant progress has been made in the individual detection of residual stress, structural dimensions, and surface defects, most industrial inspection systems still adopt a separated design, where the three types of tasks are handled by different modules or standalone devices. This architecture not only increases deployment costs and maintenance complexity, but also leads to fragmented data flows and limited information sharing, making it difficult to meet the actual demands for multi-parameter coordinated inspection and high-throughput sampling on production lines. Especially in resource-constrained environments, achieving compact structure, functional integration, and efficient multi-target detection remains a major challenge.
On the other hand, most existing studies focus on algorithm optimization for individual tasks, with limited attention to the design of mechanisms that enable coordinated execution of multiple inspection tasks within a unified system, as well as the unified scheduling of software and hardware resources. The lack of system-level integration and coupling capabilities has become a key bottleneck restricting the development of industrial vision systems toward high reliability and real-time performance.
To address the above challenges, this paper proposes a multifunctional vision inspection platform for transparent containers, using pharmaceutical ampoules as a representative example. For the first time, the platform integrates residual stress detection, key dimensional measurement, and surface defect recognition within a unified system architecture. The platform improves system integration and task coordination by designing a visual resource sharing mechanism and a module-level scheduling process, enabling dimensional and defect inspection to share a common vision subsystem while maintaining high-precision stress analysis capabilities. In the defect recognition module, the YOLOv8 deep learning model is introduced, combined with optimized image acquisition and processing scheduling, to enable efficient detection of various defect types without increasing hardware load. This ensures the system’s overall real-time performance and adaptability to industrial environments. The integrated architecture not only enhances multi-parameter inspection efficiency but also provides a data foundation for modeling the relationships among defects, stress, and structural features, showing strong potential for industrial application and future research development.
To address the fragmented nature of existing inspection systems, we propose a unified vision-based platform and introduce innovations across three technical dimensions:
(1) System Architecture: We propose a unified visual inspection platform for ampoule quality control, integrating stress evaluation, dimensional measurement, and defect detection into a single imaging system. Dimensional measurement and defect detection share the same planar backlight and camera architecture, enabling synergistic multi-parameter inspection. To the best of our knowledge, this is the first multi-functional integrated platform applied to ampoule quality control, filling the gap in integrated ampoule inspection.
(2) Task Scheduling Mechanism: We design an industrial real-time multi-task scheduling strategy for shared imaging modules, coordinating dimensional measurement and defect detection via dynamic regulation of acquisition timing, exposure switching, and processing sequences. This strategy achieves efficient resource utilization and real-time responsiveness, addressing the conflict and latency issues in multi-task industrial inspection.
(3) Deployment Strategy: We develop a module-level deployment strategy with hardware-software co-optimization to address key challenges of inference latency, illumination stability, and industrial environment adaptability. This strategy ensures robust and efficient system operation under near-production conditions, enhancing the practical applicability of the proposed platform.
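The shared-camera scheduling idea in contribution (2) can be sketched as a round-robin frame plan in which each acquisition slot carries its own task and exposure setting. The frame period and exposure values below are illustrative assumptions, not the platform's configured parameters.

```python
# Hedged sketch of multi-task scheduling on one shared camera: acquisition
# slots alternate between dimensional and defect tasks, switching exposure
# per slot. All timing/exposure values are assumptions for illustration.

FRAME_PERIOD_MS = 44.1  # one frame at ~22.7 fps

TASKS = [
    {"name": "dimension", "exposure_us": 200},  # short exposure: crisp edges
    {"name": "defect",    "exposure_us": 800},  # longer exposure: defect contrast
]

def schedule(n_frames: int):
    """Round-robin frame plan: (start_ms, task, exposure_us) per acquisition."""
    plan = []
    for k in range(n_frames):
        task = TASKS[k % len(TASKS)]
        plan.append((round(k * FRAME_PERIOD_MS, 1), task["name"], task["exposure_us"]))
    return plan

for slot in schedule(4):
    print(slot)
```

Because both tasks draw frames from one deterministic timeline, the light-source and exposure switches can be pre-armed for each slot, avoiding the synchronization conflicts that arise when two independent acquisition loops contend for the line.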
In addition, we built a simulated production-line testing setup to evaluate the overall performance of the integrated platform. The evaluation shows that the system simplifies hardware requirements while preserving the precision needed for stress, dimensional, and defect inspection. In our tests, the platform achieved an optical path difference error of about ±3 nm for stress measurement, a dimensional accuracy of ±0.2 mm, and an mAP@0.5 of 90.3% for defect detection, indicating its suitability for industrial deployment and scalable integration.
6. Ablation Study
To evaluate the advantages of the proposed unified visual inspection platform in terms of overall performance and system design, design-level ablation experiments were conducted. Since it is challenging to construct two complete systems under practical conditions, a semi-quantitative comparison based on system parameters and camera specifications was employed. The integrated deployment scheme and the separate deployment scheme were analyzed and estimated in terms of hardware configuration, acquisition efficiency, and system synchronization complexity.
In conventional schemes, stress measurement, dimensional inspection, and defect detection are typically performed using separate optical paths and camera modules. In contrast, the proposed unified visual platform integrates the dimensional and defect detection modules into the same optical path and camera, achieving multi-task inspection solely through light source switching and algorithmic branching. To evaluate the engineering advantages of this integrated design, this paper selects five indicators for comparison: the number of cameras, light sources and imaging channels, image acquisition time (estimated based on camera frame rate), system synchronization difficulty, and calibration workload.
The camera used is the Basler acA2440-20gmLET (Sony IMX264 sensor, global shutter; Basler AG, Ahrensburg, Germany), a customized version of the Basler acA2440-20gm. Except for the resolution, it is identical to the standard model, with a typical frame rate of 22.7 fps (corresponding to a single-frame period of approximately 44.1 ms). In this study, both dimensional and defect inspections were performed using images acquired at the native resolution; any subsequent downsampling or ROI extraction was conducted only during algorithm processing and did not affect the acquisition rate. Therefore, the image acquisition time was estimated based on this frame rate.
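The frame-period figure quoted above follows directly from the frame rate; the short sketch below only reproduces that arithmetic, plus the back-to-back two-frame time for two tasks sharing the one camera.

```python
# Single-frame-period arithmetic from the quoted camera frame rate.
fps = 22.7
frame_period_ms = 1000.0 / fps               # ~44.1 ms per frame
sequential_two_tasks_ms = 2 * frame_period_ms  # two back-to-back task frames
print(round(frame_period_ms, 1), round(sequential_two_tasks_ms, 1))  # prints: 44.1 88.1
```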
As shown in Table 12, for the same inspection tasks, the integrated visual system outperforms the conventional separate scheme in terms of hardware count, optical structure, synchronization complexity, and cost. In particular, regarding image acquisition efficiency, the integrated system can reduce time delay by approximately 50%, providing a clear advantage for production line inspection or online quality control scenarios. It should be noted that this section presents a design-level ablation study, and the data are estimated based on camera specifications and engineering experience rather than measured from fully built systems. In future work, prototypes of both systems will be constructed to conduct experimental verification of acquisition delay, alignment errors, and algorithm fusion performance, providing quantitative evidence of the integrated system’s performance improvements in real production environments.
7. Discussion
Conventional inspection systems for transparent containers typically treat stress analysis, dimensional measurement, and defect detection as independent tasks, each requiring separate hardware modules and data processing pipelines. This separation not only increases system complexity and cost, but also limits opportunities for cross-task optimization and joint decision-making.
The platform proposed in this work represents a shift toward an integrated framework that combines stress, dimensional, and defect inspection into a coordinated system. By sharing imaging hardware and synchronizing data flows, the system achieves higher operational efficiency and facilitates multi-task reasoning. For example, critical defects near the bottle neck can be cross-referenced with local stress concentrations and dimensional deviations, enabling a composite quality judgment rather than relying on isolated criteria.
This fusion of sensory data streams lays the foundation for global modeling of container health status. In addition to identifying defective items, the system can generate structured data suitable for predictive quality assessment and risk evaluation. Such capabilities align with the vision of intelligent manufacturing, where inspection systems not only detect problems, but also anticipate failures and guide process optimization.
Future research will focus on enhancing this integration by refining the task scheduling logic, improving real-time throughput, and establishing correlation models between defect patterns and residual stress distributions. Ultimately, we aim to evolve the platform from a multi-parameter inspection tool into an intelligent system for failure prediction and quality forecasting.
Such a defect-stress correlation model also opens up new possibilities for intelligent inspection. Instead of performing stress analysis uniformly across the entire sample, the system can dynamically focus stress evaluation only on regions exhibiting critical defects, as detected by the vision module. This targeted inspection strategy can significantly reduce measurement time and computational load while maintaining high diagnostic confidence.
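A minimal sketch of this defect-gated strategy follows, with a placeholder standing in for the actual photoelastic computation; the threshold, labels, and data layout are illustrative assumptions.

```python
# Sketch of defect-gated stress evaluation: run the (expensive) stress
# analysis only inside regions the defect detector flagged as critical.
# `analyze_stress` is a stand-in stub, not the real photoelastic routine.

def analyze_stress(roi):
    """Placeholder: pretend to compute peak retardation (nm) inside an ROI."""
    return roi["expected_nm"]  # illustrative stub value

def gated_inspection(detections, criticality_thresh=0.8):
    """Evaluate stress only where the detector reports a critical defect."""
    results = []
    for det in detections:
        if det["score"] >= criticality_thresh:
            results.append((det["label"], analyze_stress(det["roi"])))
    return results

detections = [
    {"label": "neck_crack", "score": 0.93, "roi": {"expected_nm": 120.0}},
    {"label": "dust",       "score": 0.55, "roi": {"expected_nm": 5.0}},
]
print(gated_inspection(detections))  # prints: [('neck_crack', 120.0)]
```

The gating keeps throughput bounded by the defect detector's speed: stress analysis cost scales with the number of critical detections rather than with full-sample coverage.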
By leveraging such cross-modal information coupling, the inspection process can shift from passive detection to active fault prediction, wherein localized stress concentrations around defects signal potential risks of fracture or leakage. This transition not only enhances the platform’s value in predictive maintenance scenarios but also contributes to improving overall system throughput and scalability in industrial deployments.