Search Results (25)

Search Parameters:
Keywords = foreground contour

21 pages, 3230 KiB  
Article
Active Contours Connected Component Analysis Segmentation Method of Cancerous Lesions in Unsupervised Breast Histology Images
by Vincent Majanga, Ernest Mnkandla, Zenghui Wang and Donatien Koulla Moulla
Bioengineering 2025, 12(6), 642; https://doi.org/10.3390/bioengineering12060642 - 12 Jun 2025
Viewed by 461
Abstract
Automatic segmentation of nuclei in breast cancer histology images is a fundamental step in computer-aided diagnosis and helps pathologists detect cancer early. Nuclei segmentation remains a challenging problem due to cancer biology and the variability of tissue characteristics; thus, detecting nuclei in an image is a tedious and time-consuming task. Overlapping nuclei are difficult to separate with conventional segmentation methods, so active contours can be employed for image segmentation. A major limitation of the active contours method, however, is that it cannot resolve the image boundaries/edges of intersecting objects and segments multiple overlapping objects as a single object. Therefore, we present a hybrid active contour (connected components + active contours) method to segment cancerous lesions in unsupervised human breast histology images. Initially, this approach prepares and pre-processes the data through various augmentation methods to increase the dataset size. A stain normalization technique is then applied to the augmented images to isolate nuclei features from tissue structures. Next, morphological operations, namely erosion, dilation, opening, and the distance transform, are used to highlight foreground and background pixels while removing overlapping regions from the highlighted nuclei objects in the image. The connected components method then groups the highlighted pixel components with similar intensity values and assigns them to their relevant labeled component to form a binary mask. Once all binary-masked groups have been determined, a deep-learning recurrent neural network (RNN) model implemented in Keras uses this information to automatically segment nuclei containing cancerous lesions via the active contours method. This approach therefore uses the capabilities of connected components analysis to overcome the limitations of the active contours method. The segmentation method is evaluated on an unsupervised, augmented human breast cancer histology dataset of 15,179 images and achieves an accuracy of 98.71%. Full article
(This article belongs to the Section Biosignal Processing)
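
The pipeline outlined in this abstract (morphological pre-processing, distance transform, connected components, active-contour refinement) can be illustrated with standard image-processing libraries. The following is a minimal sketch, not the authors' implementation; the file name, kernel size, thresholds, and iteration counts are assumptions.

```python
import cv2
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

# Stain-normalized histology patch (file name is illustrative).
img = cv2.imread("breast_patch.png", cv2.IMREAD_GRAYSCALE)

# Morphological pre-processing: Otsu binarization, opening to remove specks.
kernel = np.ones((3, 3), np.uint8)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)

# Distance transform separates touching nuclei into distinct foreground peaks.
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)

# Connected components: one label (binary mask) per nucleus candidate.
num_labels, labels = cv2.connectedComponents(sure_fg.astype(np.uint8))

# Refine each component with a morphological (geodesic) active contour,
# using the component mask as the initial level set.
gimg = inverse_gaussian_gradient(img.astype(float) / 255.0)
for lbl in range(1, num_labels):
    init = (labels == lbl).astype(np.uint8)
    refined = morphological_geodesic_active_contour(gimg, 30,
                                                    init_level_set=init,
                                                    balloon=1)
```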

24 pages, 2991 KiB  
Article
Automatic Blob Detection Method for Cancerous Lesions in Unsupervised Breast Histology Images
by Vincent Majanga, Ernest Mnkandla, Zenghui Wang and Donatien Koulla Moulla
Bioengineering 2025, 12(4), 364; https://doi.org/10.3390/bioengineering12040364 - 31 Mar 2025
Viewed by 633
Abstract
The early detection of cancerous lesions is a challenging task given the cancer biology and the variability in tissue characteristics, thus rendering medical image analysis tedious and time-consuming. In the past, conventional computer-aided diagnosis (CAD) and detection methods have relied heavily on the visual inspection of medical images, which is ineffective, particularly for large and visible cancerous lesions in such images. Additionally, conventional methods face challenges in analyzing objects in large images due to overlapping/intersecting objects and the inability to resolve their image boundaries/edges. Nevertheless, the early detection of breast cancer lesions is a key determinant for diagnosis and treatment. In this study, we present a deep learning-based technique for breast cancer lesion detection, namely blob detection, which automatically detects hidden and inaccessible cancerous lesions in unsupervised human breast histology images. Initially, this approach prepares and pre-processes data through various augmentation methods to increase the dataset size. Secondly, a stain normalization technique is applied to the augmented images to separate nucleus features from tissue structures. Thirdly, morphological operations, namely erosion, dilation, opening, and a distance transform, are used to enhance the images by highlighting foreground and background pixels while removing overlapping regions from the highlighted nucleus objects in the image. Subsequently, image segmentation is handled via the connected components method, which groups highlighted pixel components with similar intensity values and assigns them to their relevant labeled components (binary masks). These binary masks are then used in the active contours method for further segmentation by highlighting the boundaries/edges of ROIs. Finally, a deep learning recurrent neural network (RNN) model automatically detects and extracts cancerous lesions and their edges from the histology images via the blob detection method. This proposed approach utilizes the capabilities of both the connected components method and the active contours method to resolve the limitations of blob detection. The detection method is evaluated on an unsupervised, augmented human breast cancer histology dataset of 27,249 images and achieves an F1 score of 98.82%. Full article
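
As a rough illustration of a blob-detection stage applied to an already-segmented binary mask, the sketch below uses OpenCV's SimpleBlobDetector. It is not the paper's detector; the file name and all filter parameters are assumptions.

```python
import cv2

# Binary mask produced by earlier connected-components / active-contours
# stages (file name and parameter values are illustrative).
mask = cv2.imread("nuclei_mask.png", cv2.IMREAD_GRAYSCALE)

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 30          # ignore specks smaller than ~30 px
params.filterByCircularity = True
params.minCircularity = 0.4  # keep roughly round, nucleus-like blobs
params.filterByColor = True
params.blobColor = 255       # detect bright (foreground) blobs

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(mask)

# Each keypoint gives a blob centre and size; draw them for inspection.
vis = cv2.drawKeypoints(mask, keypoints, None, (0, 0, 255),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
print(f"{len(keypoints)} candidate lesions detected")
```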

21 pages, 13186 KiB  
Article
Ship Contour Extraction from Polarimetric SAR Images Based on Polarization Modulation
by Guoqing Wu, Shengbin Luo Wang, Yibin Liu, Ping Wang and Yongzhen Li
Remote Sens. 2024, 16(19), 3669; https://doi.org/10.3390/rs16193669 - 1 Oct 2024
Viewed by 1222
Abstract
Ship contour extraction is vital for extracting the geometric features of ships, providing comprehensive information essential for ship recognition. The main factors affecting contour extraction performance are speckle noise and amplitude inhomogeneity, which can lead to over-segmentation and missed detection of ship edges. Polarimetric synthetic aperture radar (PolSAR) images contain rich target scattering information. Under different transmitting and receiving polarizations, the amplitude and phase of pixels can differ, which provides the potential to meet the uniformity requirement. This paper proposes a novel ship contour extraction framework from PolSAR images based on polarization modulation. Firstly, the image is partitioned into foreground and background using a super-pixel unsupervised clustering approach. Subsequently, an optimization criterion for target amplitude modulation to achieve uniformity is designed. Finally, the ship's contour is extracted from the optimized image using an edge-detection operator and an adaptive edge extraction algorithm. Based on the contour, the geometric features of the ships are extracted. Moreover, a PolSAR ship contour extraction dataset is established using Gaofen-3 PolSAR images, combined with expert knowledge and automatic identification system (AIS) data. With this dataset, we compare the accuracy of contour extraction and geometric features with state-of-the-art methods. The average errors of the extracted length and width are reduced to 20.09 m and 8.96 m, respectively. The results demonstrate that the proposed method performs well in both accuracy and precision. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis (2nd Edition))
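
A toy version of the super-pixel foreground/background partition and the edge-operator contour step can be sketched on a single amplitude channel. This is not the polarization-modulation optimization itself; the file name, super-pixel count, and clustering choice (SLIC + k-means) are assumptions.

```python
import numpy as np
from skimage import io, feature
from skimage.segmentation import slic
from sklearn.cluster import KMeans

# Single-channel SAR amplitude image (file name is illustrative).
amp = io.imread("polsar_amplitude.png", as_gray=True)

# Partition the scene into super-pixels, then cluster their mean amplitudes
# into two groups: bright ship foreground vs. darker sea background.
segments = slic(amp, n_segments=400, compactness=0.1, channel_axis=None)
seg_ids = np.unique(segments)
means = np.array([amp[segments == s].mean() for s in seg_ids])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(means.reshape(-1, 1))
ship_cluster = labels[np.argmax(means)]           # brighter cluster = ships
foreground = np.isin(segments, seg_ids[labels == ship_cluster])

# Extract the ship contour from the foreground mask with an edge operator.
contour = feature.canny(foreground.astype(float), sigma=1.0)
```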

27 pages, 3278 KiB  
Article
Deep Learning Approach for Human Action Recognition Using a Time Saliency Map Based on Motion Features Considering Camera Movement and Shot in Video Image Sequences
by Abdorreza Alavigharahbagh, Vahid Hajihashemi, José J. M. Machado and João Manuel R. S. Tavares
Information 2023, 14(11), 616; https://doi.org/10.3390/info14110616 - 15 Nov 2023
Cited by 7 | Viewed by 3383
Abstract
In this article, a hierarchical method for action recognition based on temporal and spatial features is proposed. In current human action recognition (HAR) methods, camera movement, sensor movement, sudden scene changes, and scene movement can increase motion feature errors and decrease accuracy. Another important aspect to take into account in an HAR method is the required computational cost. The proposed method provides a preprocessing step to address these challenges. As a preprocessing step, the method uses optical flow to detect camera movements and shots in input video image sequences. In the temporal processing block, the optical flow technique is combined with the absolute value of frame differences to obtain a time saliency map. The detection of shots, cancellation of camera movement, and the building of a time saliency map minimise movement detection errors. The time saliency map is then passed to the spatial processing block to segment the moving persons and/or objects in the scene. Because the search region for spatial processing is limited based on the temporal processing results, the computations in the spatial domain are drastically reduced. In the spatial processing block, the scene foreground is extracted in three steps: silhouette extraction, active contour segmentation, and colour segmentation. Key points are selected at the borders of the segmented foreground. The final features are the intensity and angle of the optical flow at the detected key points. Using key point features for action detection reduces the computational cost of the classification step and the required training time. Finally, the features are submitted to a Recurrent Neural Network (RNN) to recognise the involved action. The proposed method was tested on four well-known action datasets: KTH, Weizmann, HMDB51, and UCF101, and its efficiency was evaluated. Since the proposed approach segments salient objects based on motion, edges, and colour features, it can be added as a preprocessing step to most current HAR systems to improve performance. Full article
(This article belongs to the Special Issue Computer Vision for Security Applications)
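
A bare-bones version of the time saliency map (dense optical-flow magnitude combined with absolute frame differences, plus a crude camera-motion check) might look like the sketch below; the clip name and the motion threshold are assumptions, not values from the paper.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("action_clip.avi")     # input clip (name illustrative)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow magnitude highlights moving pixels.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Combine with the absolute frame difference to form a time saliency map.
    diff = cv2.absdiff(gray, prev_gray).astype(np.float32)
    saliency = cv2.normalize(mag * diff, None, 0, 1, cv2.NORM_MINMAX)

    # A large median flow suggests camera motion or a shot change, in which
    # case the saliency map for this frame would be discarded.
    if np.median(mag) > 2.0:                  # threshold is an assumption
        saliency[:] = 0

    prev_gray = gray
cap.release()
```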

17 pages, 17336 KiB  
Article
Fully Automatic Approach for Smoke Tracking Based on Deep Image Quality Enhancement and Adaptive Level Set Model
by Rimeh Daoudi, Aymen Mouelhi, Moez Bouchouicha, Eric Moreau and Mounir Sayadi
Electronics 2023, 12(18), 3888; https://doi.org/10.3390/electronics12183888 - 14 Sep 2023
Cited by 1 | Viewed by 1655
Abstract
In recent decades, the need for advanced systems with good precision, low cost, and fast response times for wildfire and smoke detection and monitoring has become an absolute necessity. In this paper, we propose a novel, fast, and autonomous approach for denoising and tracking smoke in video sequences captured from a camera in motion. The proposed method is based mainly on two stages: the first is a reconstruction and denoising path with a novel lightweight convolutional autoencoder architecture. The second stage is a scheme designed for smoke tracking, and it consists of the following: first, the foreground frames are extracted with the HSV color model and textural features of smoke; second, possible false detections of smoke regions are eliminated with image processing techniques; and finally, smoke contour detection is performed with an adaptive nonlinear level set. The experimental results presented in this paper show the potential of the proposed approach and prove its efficiency in smoke video denoising and tracking, with a minimized number of false negative regions and good detection rates. Full article
(This article belongs to the Special Issue Deep Learning in Image Processing and Pattern Recognition)
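
The HSV-based foreground extraction step can be illustrated in a few lines of OpenCV; the colour bounds, kernel size, and area filter below are assumptions chosen only to make the sketch run, not the paper's values.

```python
import cv2
import numpy as np

frame = cv2.imread("smoke_frame.png")     # denoised frame (name illustrative)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Smoke tends to be low-saturation and mid-to-high brightness; the bounds
# below are illustrative only.
lower = np.array([0, 0, 120], dtype=np.uint8)
upper = np.array([180, 60, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

# Remove small false detections before contour extraction.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
smoke_regions = [c for c in contours if cv2.contourArea(c) > 500]
```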

16 pages, 5097 KiB  
Article
Anchor-Free Smoke and Flame Recognition Algorithm with Multi-Loss
by Gang Li, Peng Chen, Chuanyun Xu, Chengjie Sun and Yingli Ma
Fire 2023, 6(6), 225; https://doi.org/10.3390/fire6060225 - 4 Jun 2023
Cited by 3 | Viewed by 2108
Abstract
Fire perception based on machine vision is essential for improving social safety. Object recognition based on deep learning has become the mainstream smoke and flame recognition method. However, the existing anchor-based smoke and flame recognition algorithms are not accurate enough for localization due to the irregular shapes, unclear contours, and large-scale changes in smoke and flames. For this problem, we propose a new anchor-free smoke and flame recognition algorithm, which improves the object detection network in two dimensions. First, we propose a channel attention path aggregation network (CAPAN), which forces the network to focus on the channel features with foreground information. Second, we propose a multi-loss function. The classification loss, the regression loss, the distribution focal loss (DFL), and the loss for the centerness branch are fused to enable the network to learn a more accurate distribution for the locations of the bounding boxes. Our method attains a promising performance compared with the state-of-the-art object detectors; the recognition accuracy improves by 5% for the mAP, 8.3% for the flame AP50, and 2.1% for the smoke AP50 compared with the baseline model. Overall, the algorithm proposed in this paper significantly improves the accuracy of the object detection network in the smoke and flame recognition scenario and can provide real-time fire recognition. Full article
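
The abstract does not spell out the CAPAN internals, but the idea of forcing the network to focus on channels carrying foreground information can be illustrated with a generic squeeze-and-excitation style channel attention block in PyTorch; the class name, reduction ratio, and tensor sizes are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel reweighting (illustrative)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-channel global statistics -> per-channel gate in [0, 1].
        w = self.fc(self.pool(x))
        return x * w

# Example: reweight a feature map from the detector's neck.
feat = torch.randn(1, 256, 40, 40)
out = ChannelAttention(256)(feat)
```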

7 pages, 3017 KiB  
Proceeding Paper
Adaptive Gaussian and Double Thresholding for Contour Detection and Character Recognition of Two-Dimensional Area Using Computer Vision
by Nehal Abdul Rehman and Farah Haroon
Eng. Proc. 2023, 32(1), 23; https://doi.org/10.3390/engproc2023032023 - 22 May 2023
Cited by 7 | Viewed by 3229
Abstract
Contour detection with good accuracy is challenging in various computer-aided measurement applications. This paper evaluates the performance and comparison of thresholding and edge detection techniques for contour measurement along with character detection and recognition between images of high and low quality. Thresholding is one of the key techniques for pre-processing in computer vision. Adaptive Gaussian Thresholding (AGT) is applied to distinguish the foreground and background of an image, and Canny edge detection (CED) is used for spotting a wide range of edges. Adaptive Gaussian Thresholding works on a small set of neighboring pixels, while Canny Edge Detection takes high- and low-intensity pixels in the form of thresholds that are tested to find accurate contour measurements while retaining the maximum data contained within them. The results show that Adaptive Gaussian Thresholding outperforms Canny edge detection for both brightened sharp and blurry dull images. Full article
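
Both techniques compared in the paper are available directly in OpenCV. A minimal sketch, with the file name, block size, constant C, and Canny thresholds as illustrative assumptions:

```python
import cv2

img = cv2.imread("part_scan.png", cv2.IMREAD_GRAYSCALE)   # name illustrative

# Adaptive Gaussian thresholding: each pixel is compared against a Gaussian-
# weighted mean of its 11x11 neighbourhood (block size and C are assumptions).
agt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                            cv2.THRESH_BINARY_INV, 11, 2)

# Canny edge detection: the two thresholds form the double-thresholding step.
edges = cv2.Canny(img, 50, 150)

# Contours can be measured from either binary result and compared.
contours_agt, _ = cv2.findContours(agt, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
contours_ced, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
perimeters = [cv2.arcLength(c, True) for c in contours_agt]
```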

12 pages, 3600 KiB  
Article
Bust Portraits Matting Based on Improved U-Net
by Honggang Xie, Kaiyuan Hou, Di Jiang and Wanjie Ma
Electronics 2023, 12(6), 1378; https://doi.org/10.3390/electronics12061378 - 14 Mar 2023
Cited by 4 | Viewed by 2617
Abstract
Extracting complete portrait foregrounds from natural images is widely used in image editing and high-definition map generation. When making high-definition maps, it is often necessary to matte passers-by to guarantee their privacy. Current matting methods that do not require additional trimap inputs often suffer from inaccurate global predictions or blurred local details. Portrait matting, as a soft segmentation method, allows the creation of excess areas during segmentation, which inevitably leads to noise in the resulting alpha image as well as excess foreground information, so not all of the excess areas need to be kept. To overcome these problems, this paper designs a contour sharpness refining network (CSRN) that modifies the weight of the alpha values of uncertain regions in the prediction map, and combines it with an end-to-end bust matting network based on a U-Net network containing Residual U-blocks. The network can effectively reduce the image noise without affecting the complete foreground information obtained by the deeper network, thus obtaining a more detailed foreground image with fine edge details. The network structure has been tested on the PPM-100, the RealWorldPortrait-636, and a self-built dataset, showing excellent performance in both edge refinement and global prediction for half-figure portraits. Full article
(This article belongs to the Special Issue Computer Vision for Modern Vehicles)
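
The role the CSRN plays (re-weighting alpha values in uncertain regions to sharpen contours) can be mimicked crudely without a network, purely for intuition. The sketch below is not the CSRN; the file names, the uncertainty band, and the gain curve are all assumptions.

```python
import numpy as np
import cv2

# Predicted alpha matte in [0, 1] from a matting network (name illustrative).
alpha = cv2.imread("alpha_pred.png",
                   cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# "Uncertain" pixels are those far from both 0 and 1; the band limits are
# assumptions, not a learned weighting.
uncertain = (alpha > 0.1) & (alpha < 0.9)

# Simple contour sharpening: push uncertain alphas toward 0/1 with a gain
# curve; a learned CSRN would predict this reweighting instead.
sharpened = alpha.copy()
sharpened[uncertain] = np.clip(0.5 + (alpha[uncertain] - 0.5) * 2.0, 0.0, 1.0)

# Composite the extracted portrait over a plain background for inspection.
fg = cv2.imread("portrait.png").astype(np.float32) / 255.0
bg = np.zeros_like(fg)
comp = sharpened[..., None] * fg + (1.0 - sharpened[..., None]) * bg
```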

16 pages, 5822 KiB  
Article
Overlapping Pellet Size Detection Method Based on Marker Watershed and GMM Image Segmentation
by Weining Ma, Lijing Wang, Tianyu Jiang, Aimin Yang and Yuzhu Zhang
Metals 2023, 13(2), 327; https://doi.org/10.3390/met13020327 - 6 Feb 2023
Cited by 3 | Viewed by 2268
Abstract
The particle size of pellets is an important parameter in steel big data, and the high density and high overlap rate of pellets pose a great challenge to particle size detection. To address this problem, an intelligent particle size detection algorithm based on an improved watershed and a Gaussian mixture model (GMM) is proposed. First, the initial segmentation of the pellets and background is achieved using adaptive binary segmentation, and the secondary fine segmentation of the pellets and background is then achieved by combining morphological operations such as skeleton extraction and marker-based watershed segmentation. Next, the contours of the connected domains of pellets are calculated, and the non-overlapping pellets in the foreground and the overlapping pellets are separated according to the roundness of their contours. Finally, the number and granularity of the overlapping pellets are predicted by Gaussian reconstruction of their grayscale image. The experimental results showed that the algorithm achieved a 91.98% segmentation accuracy on the experimental images. Compared with other algorithms, it can also effectively suppress over-segmentation and under-segmentation, and it can effectively realize pellet size detection for dense, overlapping pellets such as those on a pelletizing disk, providing an effective technical means for the metallurgical performance analysis of pellet ore and intelligent pellet-making driven by big data. Full article
(This article belongs to the Special Issue Big Data of Steel and Low Carbon Intelligent Smelting)
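
The marker-based watershed and the roundness screen can be sketched with OpenCV, with a GMM standing in loosely for the Gaussian reconstruction of overlapping clumps. File names, thresholds, and the number of mixture components are assumptions.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

img = cv2.imread("pellets.png", cv2.IMREAD_GRAYSCALE)     # name illustrative
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Standard marker-based watershed: sure background by dilation, sure
# foreground from the distance transform, unknown band left for flooding.
kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)
_, markers = cv2.connectedComponents(sure_fg)
markers += 1
markers[cv2.subtract(sure_bg, sure_fg) == 255] = 0
markers = cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), markers)

# Roundness = 4*pi*area / perimeter^2; clumps of overlapping pellets score low.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
    if perim == 0:
        continue
    if 4 * np.pi * area / perim ** 2 < 0.8:    # threshold is an assumption
        # Rough stand-in for the Gaussian reconstruction step: fit a GMM to
        # the clump's pixel coordinates to guess how many pellets it holds.
        mask = np.zeros_like(img)
        cv2.drawContours(mask, [c], -1, 255, -1)
        pts = np.column_stack(np.nonzero(mask)).astype(float)
        gmm = GaussianMixture(n_components=2).fit(pts)
```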

15 pages, 3331 KiB  
Article
A Method of Improving the Length Measurement Accuracy of Metal Parts Using Polarization Vision
by Zhiying Tan, Yan Ji, Wenbo Fan, Weifeng Kong, Xu Tao, Xiaobin Xu and Minzhou Luo
Machines 2023, 11(2), 145; https://doi.org/10.3390/machines11020145 - 20 Jan 2023
Cited by 1 | Viewed by 2115
Abstract
Measurement technology based on machine vision has been widely used in various industries. The development of vision measurement technology depends mainly on the manufacturing process of photosensitive components and the algorithms used to process the target image. In the high-precision dimension measurement of machined metal parts, a high-resolution imaging device usually exposes the cutting texture of the metal surface, which affects the accuracy of the measurement algorithm. At the same time, the edges of machined metal parts are often chamfered, which makes the edges of objects in the picture overexposed in the lighting measurement environment. These factors reduce the accuracy of dimensioning metal parts using visual measurements. Traditional vision measurement methods based on color/gray images make it difficult to analyze physical quantities of the light field other than the light intensity, which limits the measurement accuracy. Polarization information can depict edge contour information in the scene in more detail and increase the contrast between the foreground and the background. This paper presents a method to improve the measurement accuracy of machined metal parts by using polarization vision. The incident angle of the light source is optimized according to the complex refractive index of the metal material, and a degree-of-polarization image with enhanced edge contour features of the ROI (region of interest) is obtained. The high-precision measurement of cylindrical brass motor components is realized by using reprojection transformation correction and normalized cross-correlation (NCC) template matching for rough positioning, as well as edge extraction and optimal fitting. The experimental results show that for copper parts with a tolerance range of ±0.1 mm, the average measurement error and maximum measurement error are within 0.01 mm, outperforming existing color/gray image measurement methods. Full article
(This article belongs to the Section Machines Testing and Maintenance)
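
The degree-of-polarization image at the heart of this approach can be computed from four intensity images taken behind a linear polarizer at 0°, 45°, 90°, and 135°, using the standard Stokes-parameter formulas. A minimal sketch (file names and Canny thresholds are assumptions):

```python
import cv2
import numpy as np

# Four intensity images captured behind a polarizer at 0/45/90/135 degrees
# (file names are illustrative).
i0, i45, i90, i135 = [
    cv2.imread(f"part_{a}.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    for a in (0, 45, 90, 135)
]

# Stokes parameters and degree of linear polarization (DoLP).
s0 = i0 + i90
s1 = i0 - i90
s2 = i45 - i135
dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-6)

# The DoLP image typically shows stronger foreground/background contrast at
# machined edges than raw intensity, which is what the measurement exploits.
dolp_u8 = cv2.normalize(dolp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
edges = cv2.Canny(dolp_u8, 50, 150)
```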

16 pages, 25269 KiB  
Article
Improved Method Based on Retinex and Gabor for the Surface Defect Enhancement of Aluminum Strips
by Qi Zhang, Hongqun Tang, Yong Li, Bing Han and Jiadong Li
Metals 2023, 13(1), 118; https://doi.org/10.3390/met13010118 - 6 Jan 2023
Cited by 3 | Viewed by 1973
Abstract
To address the problems of blurred defect contours and the aluminum strip's surface texture suppressing defect feature extraction when images are collected online on an air cushion furnace production line, we propose an algorithm for the surface defect enhancement and detection of aluminum strips based on Retinex theory and the Gabor filter. The Retinex algorithm can enhance the information and detail in the image, while the Gabor algorithm maintains the integrity of the defect edges well. The method first improves the high-frequency information of the image using a multi-scale Retinex based on a Laplacian filter, scales the original image and the enhanced image, and enhances the contrast of the image by adaptive histogram equalization. Then, the image is denoised and its texture suppressed using median filtering and morphological operations. Finally, Gabor edge detection is performed on the obtained sample images by convolving the sinusoidal plane wave and the Gaussian kernel function in the spatial domain and performing double-threshold segmentation to extract and refine the edges. The algorithm in this paper is compared with histogram equalization and the Gaussian filter-based MSR algorithm, and the surface defects of aluminum strips are significantly enhanced against the background. The experimental results show that the information entropy of the aluminum strip defect image is improved from 5.03 in the original image to 7.85, the average gradient is improved from 3.51 to 9.51, the contrast between the foreground and background is improved from 16.66 to 117.53, the peak signal-to-noise ratio is improved to 24.50 dB, and the integrity of the edges is well maintained while denoising. This paper's algorithm effectively enhances and detects the surface defects of aluminum strips, and the edges of the defect contours are clearer and more complete. Full article
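
A compact stand-in for the multi-scale Retinex enhancement and the Gabor-based edge step is shown below; the scales, Gabor parameters, and thresholds are assumptions, and Canny is used here as a generic double-thresholding stage rather than the paper's exact procedure.

```python
import cv2
import numpy as np

img = cv2.imread("strip_surface.png",
                 cv2.IMREAD_GRAYSCALE).astype(np.float32) + 1.0

# Multi-scale Retinex: log(image) minus log of Gaussian-smoothed illumination,
# averaged over several scales (sigmas are assumptions).
msr = np.zeros_like(img)
for sigma in (15, 80, 250):
    blur = cv2.GaussianBlur(img, (0, 0), sigma)
    msr += np.log(img) - np.log(blur)
msr = cv2.normalize(msr / 3.0, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
msr = cv2.medianBlur(msr, 3)                      # texture suppression

# Gabor filtering followed by double-threshold edge extraction.
kernel = cv2.getGaborKernel((21, 21), 4.0, 0, 10.0, 0.5, 0, ktype=cv2.CV_32F)
gabor = cv2.filter2D(msr, -1, kernel)
edges = cv2.Canny(gabor, 40, 120)                 # stand-in double thresholding
```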

16 pages, 4850 KiB  
Article
TSSTNet: A Two-Stream Swin Transformer Network for Salient Object Detection of No-Service Rail Surface Defects
by Chi Wan, Shuai Ma and Kechen Song
Coatings 2022, 12(11), 1730; https://doi.org/10.3390/coatings12111730 - 12 Nov 2022
Cited by 14 | Viewed by 2560
Abstract
The detection of no-service rail surface defects is important in the rail manufacturing process. Detection of defects can prevent significant financial losses. However, the texture and form of the defects are often very similar to the background, which makes them difficult for the human eye to distinguish. How to accurately identify rail surface defects thus poses a challenge. We introduce salient object detection through machine vision to deal with this challenge. Salient object detection locates the most “significant” areas of an image using algorithms, which constitute an integral part of machine vision inspection. However, existing saliency detection networks suffer from inaccurate positioning, poor contouring, and incomplete detection. Therefore, we propose an innovative deep learning network named Two-Stream Swin Transformer Network (TSSTNet) for salient detection of no-service rail surface defects. Specifically, we propose a two-stream encoder—one stream for feature extraction and the other for edge extraction. TSSTNet also includes a three-stream decoder, consisting of a saliency stream, edge stream, and fusion stream. For the problem of incomplete detection, we innovatively introduce the Swin Transformer to model global information. For the problem of unclear contours, we expect to deepen the understanding of the difference in depth between the foreground and background through the learning of contour maps, so the contour alignment module (CAM) is created to deal with this problem. Moreover, to make the most of multimodal information, we suggest a multi-feature fusion module (MFFM). Finally, we conducted comparative experiments with 10 state-of-the-art (SOTA) approaches on the NRSD-MN datasets, and our model performed more competitively than others on five metrics. Full article
(This article belongs to the Special Issue Solid Surfaces, Defects and Detection)
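
The abstract names a two-stream encoder and fusion modules without giving their structure; a generic PyTorch sketch of fusing a feature stream with an edge stream into separate saliency and contour heads conveys the pattern. The class name, channel counts, and layer choices are assumptions, not TSSTNet's.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Fuse a feature stream and an edge stream (illustrative, not TSSTNet)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.saliency_head = nn.Conv2d(channels, 1, 1)
        self.edge_head = nn.Conv2d(channels, 1, 1)

    def forward(self, feat: torch.Tensor, edge: torch.Tensor):
        x = self.fuse(torch.cat([feat, edge], dim=1))
        # Separate heads supervise the saliency map and the contour map.
        return self.saliency_head(x), self.edge_head(x)

feat = torch.randn(1, 64, 88, 88)    # encoder feature stream
edge = torch.randn(1, 64, 88, 88)    # encoder edge stream
saliency, contour = TwoStreamFusion()(feat, edge)
```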

20 pages, 4418 KiB  
Article
Improved YOLOv3 Model for Workpiece Stud Leakage Detection
by Peichao Cong, Kunfeng Lv, Hao Feng and Jiachao Zhou
Electronics 2022, 11(21), 3430; https://doi.org/10.3390/electronics11213430 - 23 Oct 2022
Cited by 11 | Viewed by 2776
Abstract
In this study, a deep convolutional neural network based on an improved You only look once version 3 (YOLOv3) is proposed to improve the accuracy and real-time detection of small targets in complex backgrounds when detecting leaky weld studs on an automotive workpiece. To predict stud locations, the prediction layer of the model increases from three layers to four layers. An image pyramid structure obtains stud feature maps at different scales, and shallow feature fusion at multiple scales obtains stud contour details. Focal loss is added to the loss function to solve the imbalanced sample problem. The reduced weight of simple background classes allows the algorithm to focus on foreground classes, reducing the number of missed weld studs. Moreover, K-medians algorithm replaces the original K-means clustering to improve model robustness. Finally, an image dataset of car body workpiece studs is built for model training and testing. The results reveal that the average detection accuracy of the improved YOLOv3 model is 80.42%, which is higher than the results of Faster R-CNN, single-shot multi-box detector (SSD), and YOLOv3. The detection time per image is just 0.32 s (62.8% and 23.8% faster than SSD and Faster R-CNN, respectively), fulfilling the requirement for stud leakage detection in real-world working environments. Full article
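
Focal loss, which the abstract adds to down-weight the abundant background class, has a standard form; a minimal PyTorch sketch follows (the alpha and gamma values are common defaults, not the paper's settings).

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Binary focal loss (illustrative defaults)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = targets * p + (1 - targets) * (1 - p)           # prob. of true class
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    # Easy, well-classified examples (p_t near 1) are down-weighted, so the
    # abundant background class contributes less than the rare stud class.
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.randn(8, 1)                    # raw objectness scores
targets = torch.randint(0, 2, (8, 1)).float()
loss = focal_loss(logits, targets)
```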

15 pages, 278 KiB  
Article
New Routes to Mixed “Roots”
by Kimberly DaCosta
Genealogy 2022, 6(3), 60; https://doi.org/10.3390/genealogy6030060 - 1 Jul 2022
Cited by 2 | Viewed by 2662
Abstract
Developments in reproductive (e.g., assisted reproduction, surrogacy) and genetic technologies (commercial DNA ancestry testing) have opened new routes to mixedness that disrupt the relationship between multiracialism and family. Discussions of racial mixedness, both academic and lay, tend to refer to persons born to parents of different racialized ancestry. Multiracialism is also understood as an outcome of extended generational descent—a family lineage comprised of ancestors of varied “races”. Both modes of mixed subjectivity rely on a notion of race as transmitted through sexual reproduction, and our study of them has often focused on the implications of this boundary crossing for families. These routes to mixedness imply a degree of intimacy and “knownness” between partners, with implications for the broader web of relationships into which one is born or marries. Assisted reproduction allows for the intentional creation of mixed-race babies outside of sexual reproduction and relationship. These technologies make possible mixed race by design, in which one can choose an egg or sperm donor on the basis of their racial difference, without knowing the donor beyond a set of descriptive characteristics. Commercial DNA testing produces another route to mixedness—mixed by revelation—in which previously unknown mixed ancestry is revealed through genetic testing. Ancestry tests, however, deal in estimations of biogenetic markers, rather than specific persons. To varying degrees, these newer routes to mixedness reconfigure the nexus of biogenetic substance and kinship long foregrounded in American notions of mixedness, expand the contours of mixed-race subjectivity, and reshape notions of interracial relatedness. Full article
14 pages, 3771 KiB  
Article
Image Segmentation via Multiscale Perceptual Grouping
by Ben Feng and Kun He
Symmetry 2022, 14(6), 1076; https://doi.org/10.3390/sym14061076 - 24 May 2022
Cited by 1 | Viewed by 1763
Abstract
The human eyes observe an image through perceptual units surrounded by symmetrical or asymmetrical object contours at a proper scale, which enables them to quickly extract the foreground of the image. Inspired by this characteristic, a model combined with multiscale perceptual grouping and unit-based segmentation is proposed in this paper. In the multiscale perceptual grouping part, a novel total variation regularization is proposed to smooth the image into different scales, which removes the inhomogeneity and preserves the edges. To simulate perceptual units surrounded by contours, the watershed method is utilized to cluster pixels into groups. The scale of smoothness is determined by the number of perceptual units. In the segmentation part, perceptual units are regarded as the basic element instead of discrete pixels in the graph cut. The appearance models of the foreground and background are constructed by combining the perceptual units. According to the relationship between perceptual units and the appearance model, the foreground can be segmented through a minimum-cut/maximum-flow algorithm. The experiment conducted on the CMU-Cornell iCoseg database shows that the proposed model has a promising performance. Full article
(This article belongs to the Special Issue Symmetry in Image Processing and Visualization)
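
The scale-selection idea (smooth with total variation until the number of watershed "perceptual units" is acceptable) can be approximated with off-the-shelf TV denoising and a gradient watershed; the standard Chambolle TV used here is not the paper's novel regularization, and the file name, weights, and seed threshold are assumptions.

```python
from scipy import ndimage as ndi
from skimage import io, filters
from skimage.restoration import denoise_tv_chambolle
from skimage.segmentation import watershed

img = io.imread("scene.png", as_gray=True)        # name illustrative

# Smooth the image at increasing strengths; stronger smoothing merges
# perceptual units, so the unit count selects the scale.
for weight in (0.05, 0.1, 0.2):
    smooth = denoise_tv_chambolle(img, weight=weight)
    # Perceptual units: watershed basins of the gradient magnitude, each
    # bounded by object contours.
    gradient = filters.sobel(smooth)
    markers, _ = ndi.label(gradient < 0.01)       # flat regions as seeds
    units = watershed(gradient, markers)
    print(f"weight={weight}: {units.max()} perceptual units")
```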
