Search Results (9)

Search Parameters:
Keywords = polygon clipping

12 pages, 2468 KB  
Article
A Real-World Underwater Video Dataset with Labeled Frames and Water-Quality Metadata for Aquaculture Monitoring
by Osbaldo Aragón-Banderas, Leonardo Trujillo, Yolocuauhtli Salazar, Guillaume J. V. E. Baguette and Jesús L. Arce-Valdez
Data 2025, 10(12), 211; https://doi.org/10.3390/data10120211 - 18 Dec 2025
Viewed by 877
Abstract
Aquaculture monitoring increasingly relies on computer vision to evaluate fish behavior and welfare under farming conditions. This dataset was collected in a commercial recirculating aquaculture system (RAS) integrated with hydroponics in Queretaro, Mexico, to support the development of robust visual models for Nile tilapia (Oreochromis niloticus). More than ten hours of underwater recordings were curated into 31 clips of 30 s each, a duration selected to balance representativeness of fish activity with a manageable size for annotation and training. Videos were captured using commercial action cameras at multiple resolutions (1920 × 1080 to 5312 × 4648 px), frame rates (24–60 fps), depths, and lighting configurations, reproducing real-world challenges such as turbidity, suspended solids, and variable illumination. For each recording, physicochemical parameters, including temperature, pH, dissolved oxygen, and turbidity, were measured and are provided in a structured CSV file. In addition to the raw videos, the dataset includes 3520 extracted frames annotated using a polygon-based JSON format, enabling direct use for training object detection and behavior recognition models. This dual resource of unprocessed clips and annotated images enhances reproducibility, benchmarking, and comparative studies. By combining synchronized environmental data with annotated underwater imagery, the dataset contributes a non-invasive and versatile resource for advancing aquaculture monitoring through computer vision.
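
The abstract describes frames annotated in a polygon-based JSON format. A minimal sketch of how such annotations might be turned into detector-ready bounding boxes, assuming a LabelMe-style layout with a "shapes" list of "label"/"points" entries (the dataset's actual schema may differ):

```python
# Minimal sketch: converting polygon annotations to bounding boxes.
# The JSON layout (a "shapes" list with "label" and "points" keys) is an
# assumption, not the dataset's documented schema.
import json

def polygons_to_boxes(annotation_path):
    """Return (label, (xmin, ymin, xmax, ymax)) pairs for one annotated frame."""
    with open(annotation_path) as f:
        ann = json.load(f)
    boxes = []
    for shape in ann.get("shapes", []):          # one entry per annotated fish
        xs = [x for x, _ in shape["points"]]
        ys = [y for _, y in shape["points"]]
        boxes.append((shape["label"], (min(xs), min(ys), max(xs), max(ys))))
    return boxes
```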

15 pages, 4037 KB  
Article
Methods of Work Area Division Under a Human–Machine Cooperative Mode of Intelligent Agricultural Machinery Equipment
by Jing He, Jiarui Zou, Zhun Cheng, Jiatao Huang, Runmao Zhao, Guoqing Wang and Jie He
Agriculture 2025, 15(18), 1919; https://doi.org/10.3390/agriculture15181919 - 10 Sep 2025
Viewed by 635
Abstract
To address the problems of incomplete coverage of complex plots and low efficiency in unmanned agricultural machinery operations, the study proposes a Human–Machine Collaboration (HMC) mode. Targeting different types of plots, the study designed an area-division method based on the Bresenham algorithm and a polygon-clipping algorithm. In addition, the study proposed a secondary area-division method based on alternating-point judgment and a risk-area evaluation function to ensure the safety of the HMC. The experimental results show that the coverage rate of the HMC is 100% and the field operation efficiency is higher than 86% for plots of different shapes (rectangle, right trapezoid, and ordinary quadrilateral). In the three plot shapes, the operation scores of the HMC in the open edge area are 96.08, 163.39, and 137.4, respectively; the operation scores in other areas are 104.73, 89.88, and 97.77, respectively; and the comprehensive scores are 162.36, 204.33, and 189.85, respectively, which are higher than those of unmanned operation and manned operation, showing comparatively better performance. The area division under the HMC meets the operational requirements, and the research provides technical support for unmanned farm development.
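
The area-division method builds on the Bresenham algorithm; a generic sketch of Bresenham rasterization of a plot-boundary segment onto grid cells (not the authors' implementation) is:

```python
def bresenham(x0, y0, x1, y1):
    """Integer grid cells traversed by the segment (x0, y0)-(x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:            # step in x
            err += dy
            x0 += sx
        if e2 <= dx:            # step in y
            err += dx
            y0 += sy
    return cells
```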

14 pages, 6384 KB  
Article
Parallel CUDA-Based Optimization of the Intersection Calculation Process in the Greiner–Hormann Algorithm
by Jiwei Zuo, Junfu Fan, Kuan Li, Qingyun Liu, Yuke Zhou and Yi Zhang
Algorithms 2025, 18(3), 147; https://doi.org/10.3390/a18030147 - 5 Mar 2025
Viewed by 1627
Abstract
The Greiner–Hormann algorithm is a commonly used polygon overlay analysis algorithm. It uses a doubly linked list structure to store vertex data, and its intersection calculation step has a significant effect on the overall operating efficiency of the algorithm. To address the time-consuming intersection calculation process in the Greiner–Hormann algorithm, this paper presents two kernel functions that implement a GPU parallel improvement of the algorithm based on CUDA multi-threading. The method allocates a thread to each edge of the subject polygon, determines in parallel whether it intersects each edge of the clipping polygon, transfers the per-edge intersection counts back to the CPU, allocates storage on the GPU according to the total number of intersection points, and then calculates information such as the intersection coordinates in parallel. In addition, experiments are conducted on data from eight polygons of different complexities, and the optimal thread configuration, running time, and speedup ratio of the parallel algorithm are statistically analyzed. The experimental results show that when a single CUDA thread block contains 64 or 128 threads, the parallelized intersection calculation step of the Greiner–Hormann algorithm has the highest computational efficiency. When the complexity of the subject polygon exceeds 53,000, the parallel improvement achieves a speedup of approximately three times over the serial algorithm. This shows that the design method in this paper can effectively improve the operating efficiency of polygon overlay analysis in the current large-scale data context.
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
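
A plain-Python sketch of the per-thread work the abstract describes: one subject edge tested against every clip edge with an orientation predicate, counting intersections before their coordinates are computed in a second pass (degenerate cases ignored; the actual CUDA kernels are not reproduced here):

```python
def orient(a, b, c):
    """Sign of the signed area of triangle abc: >0 left turn, <0 right turn, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """Proper intersection test for segments p1p2 and q1q2 (touching/collinear cases ignored)."""
    return (orient(p1, p2, q1) * orient(p1, p2, q2) < 0 and
            orient(q1, q2, p1) * orient(q1, q2, p2) < 0)

def count_intersections_for_edge(edge, clip_polygon):
    """Work assigned to a single 'thread': one subject edge vs. all clip edges."""
    p1, p2 = edge
    n = len(clip_polygon)
    return sum(segments_intersect(p1, p2, clip_polygon[i], clip_polygon[(i + 1) % n])
               for i in range(n))
```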

28 pages, 36222 KB  
Review
Technical Review of Solar Distribution Calculation Methods: Enhancing Simulation Accuracy for High-Performance and Sustainable Buildings
by Ana Paula de Almeida Rocha, Ricardo C. L. F. Oliveira and Nathan Mendes
Buildings 2025, 15(4), 578; https://doi.org/10.3390/buildings15040578 - 13 Feb 2025
Cited by 2 | Viewed by 2063
Abstract
Solar energy utilization in buildings can significantly contribute to energy savings and enhance on-site energy production. However, excessive solar gains may lead to overheating, thereby increasing cooling demands. Accurate calculation of sunlit and shaded areas is essential for optimizing solar technologies and improving the precision of building energy simulations. This paper provides a review of the solar shading calculation methods used in building performance simulation (BPS) tools, focusing on the progression from basic trigonometric models to advanced techniques such as projection and clipping (PgC) and pixel counting (PxC). These advancements have improved the accuracy and efficiency of solar shading simulations, enhancing energy performance and occupant comfort. As building designs evolve and adaptive shading systems become more common, challenges remain in ensuring that these methods can handle complex geometries and dynamic solar exposure. The PxC method, leveraging modern GPUs and parallel computing, offers a solution by providing real-time high-resolution simulations, even for irregular, non-convex surfaces. This ability to handle continuous updates positions PxC as a key tool for next-generation building energy simulations, ensuring that shading systems can adjust to changing solar conditions. Future research could focus on integrating appropriate modeling approaches with AI technologies to enhance accuracy, reliability, and computational efficiency.
(This article belongs to the Special Issue Research on Sustainable Energy Performance of Green Buildings)
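
As a rough illustration of the pixel-counting (PxC) idea mentioned in the abstract, the sketch below rasterizes a receiving surface and a projected shadow outline onto the same grid and takes the ratio of unshaded to total surface pixels. Real PxC implementations run on the GPU; the grid resolution, unit-square domain, and polygons here are purely illustrative.

```python
import numpy as np
from matplotlib.path import Path

def polygon_mask(vertices, nx, ny):
    """Boolean mask of grid cells whose centers fall inside the polygon (unit-square domain)."""
    xs, ys = np.meshgrid((np.arange(nx) + 0.5) / nx, (np.arange(ny) + 0.5) / ny)
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    return Path(vertices).contains_points(pts).reshape(ny, nx)

def sunlit_fraction(surface_poly, shadow_poly, nx=512, ny=512):
    """Fraction of the surface polygon's pixels not covered by the shadow polygon."""
    surface = polygon_mask(surface_poly, nx, ny)
    shadow = polygon_mask(shadow_poly, nx, ny)
    return (surface & ~shadow).sum() / max(surface.sum(), 1)
```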

13 pages, 419 KB  
Article
GPU-Accelerated Algorithm for Polygon Reconstruction
by Ruian Ji, Zhirui Niu and Lan Chen
Appl. Sci. 2025, 15(3), 1111; https://doi.org/10.3390/app15031111 - 23 Jan 2025
Viewed by 2197
Abstract
Polygon reconstruction is widely used across various fields. Although current polygon reconstruction algorithms achieve near-linear time complexity, they still fail to meet the speed demands imposed by the exponential growth in polygon numbers. The development of GPU technology provides a promising solution to this issue. This paper proposes a GPU-based algorithm that leverages hash tables and memory pools to transform the polygon reconstruction problem into an efficiently parallelizable task. Experimental results on an NVIDIA RTX 2080 Ti demonstrate that the new algorithm achieves 17× and 46× speedups on Manhattan and non-Manhattan polygon test sets, respectively. Compared to traditional CPU algorithms, the new algorithm significantly improves processing speeds, especially when handling layouts with complex polygons. It demonstrates strong scalability and performance advantages, providing crucial support for enhancing the overall efficiency of CAD tools.
(This article belongs to the Section Computing and Artificial Intelligence)
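
A serial Python sketch of the hash-table idea behind edge-to-polygon reconstruction: index directed edges by their start vertex and walk successors until each loop closes. This only illustrates the data structure; the input assumptions (vertex-disjoint closed loops) and the absence of the paper's GPU memory pools are simplifications.

```python
def reconstruct_polygons(edges):
    """edges: directed segments ((x0, y0), (x1, y1)) assumed to form vertex-disjoint closed loops."""
    next_vertex = {start: end for start, end in edges}   # hash table: vertex -> successor vertex
    polygons, visited = [], set()
    for start in next_vertex:
        if start in visited:
            continue
        loop, v = [], start
        while v is not None and v not in visited:
            visited.add(v)
            loop.append(v)
            v = next_vertex.get(v)
        if v == start:            # the walk closed back on its seed vertex: one polygon
            polygons.append(loop)
    return polygons
```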

19 pages, 36978 KB  
Article
Algorithms for the Recognition of the Hull Structures’ Elementary Plate Panels and the Determination of Their Parameters in a Ship CAD System
by Sergey Ryumin, Vladimir Tryaskin and Kirill Plotnikov
J. Mar. Sci. Eng. 2023, 11(1), 189; https://doi.org/10.3390/jmse11010189 - 11 Jan 2023
Cited by 1 | Viewed by 3156
Abstract
The article deals with some issues of geometric modeling of ship hull structures in a specialized CAD system. Stiffened shells and platings should be idealized as a set of elementary plate panels for the purpose of structural design using local strength and buckling requirements. In the process of geometric modeling and creating the database for calculation, a special search algorithm for the closed loop of every panel is required. This algorithm must have good performance and versatility. In this paper, the authors suggest an original algorithm used in the CADS-Hull software developed at SMTU. It is based on generating a regular field of points within the large contour of the considered structure. A series of rays is built from every point to find intersections. It is shown that this algorithm works well for structures (expansions, decks, bulkheads, etc.) with non-orthogonal boundaries. Some tasks involving logical operations on the found panels are also discussed. One of them is the clipping of a panel or plate polygon by the boundaries of the considered structure (expansion contour, hull lines). The authors developed a generic method of polygon clipping. It is based on rotating the convex clipping polygon together with the clipped polygon; all faces of the latter that fall in the negative half-plane are removed. Some problems of collecting data for every found panel are discussed. An original algorithm for defining the smaller and larger sizes of irregular and triangular panels is also given in this paper.
(This article belongs to the Special Issue Strength of Ship Structures)
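
A generic sketch of successive half-plane clipping of a panel polygon against a convex contour (Sutherland–Hodgman style). It illustrates the kind of operation the abstract describes, not the authors' rotation-based formulation.

```python
def clip_to_halfplane(polygon, a, b):
    """Keep the part of `polygon` on the left side of the directed line a->b."""
    def side(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    def intersect(p, q):
        t = side(p) / (side(p) - side(q))
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    out = []
    for i, p in enumerate(polygon):
        q = polygon[(i + 1) % len(polygon)]
        if side(p) >= 0:                 # p is inside the half-plane
            out.append(p)
            if side(q) < 0:              # edge leaves the half-plane
                out.append(intersect(p, q))
        elif side(q) >= 0:               # edge enters the half-plane
            out.append(intersect(p, q))
    return out

def clip_polygon(subject, convex_window):
    """Clip `subject` by every edge (half-plane) of a counter-clockwise convex window."""
    result = subject
    for i in range(len(convex_window)):
        if not result:
            break
        result = clip_to_halfplane(result, convex_window[i],
                                   convex_window[(i + 1) % len(convex_window)])
    return result
```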

32 pages, 1807 KB  
Article
Line Clipping in 2D: Overview, Techniques and Algorithms
by Dimitrios Matthes and Vasileios Drakopoulos
J. Imaging 2022, 8(10), 286; https://doi.org/10.3390/jimaging8100286 - 17 Oct 2022
Cited by 5 | Viewed by 7188
Abstract
Clipping, a fundamental process in computer graphics, displays only the part of a scene that needs to be displayed and rejects all others. In two dimensions, the clipping process can be applied to a variety of geometric primitives such as points, lines, polygons, or curves. A line-clipping algorithm processes each line in a scene through a series of tests and intersection calculations to determine whether the entire line, or any part of it, is to be kept. It also calculates the intersection position of a line with the window edges, so its major goal is to minimize these calculations. This article surveys important techniques and algorithms for line clipping in 2D and also includes some of the authors' latest research. The survey criteria include the evaluation of line-clipping algorithms against a rectangular window, line clipping versus polygon clipping, and line-clipping algorithms against a convex polygon, including the authors' own.
(This article belongs to the Special Issue Geometry Reconstruction from Images)
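
As an example of the rectangular-window techniques such a survey typically covers, here is a minimal sketch of the classic Cohen–Sutherland outcode test, which accepts or rejects most segments with bitwise checks before any intersection is computed (the intersection step itself is omitted):

```python
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Bit flags marking which window half-planes the point lies outside of."""
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def trivial_decision(p, q, window):
    """Return 'accept', 'reject', or 'clip' for segment pq against window (xmin, ymin, xmax, ymax)."""
    c1, c2 = outcode(*p, *window), outcode(*q, *window)
    if c1 == 0 and c2 == 0:
        return "accept"          # both endpoints inside the window
    if c1 & c2:
        return "reject"          # both endpoints outside the same window edge
    return "clip"                # intersection calculations are needed
```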

24 pages, 6679 KB  
Article
Using High-Performance Computing to Address the Challenge of Land Use/Land Cover Change Analysis on Spatial Big Data
by Xiaochen Kang, Jiping Liu, Chun Dong and Shenghua Xu
ISPRS Int. J. Geo-Inf. 2018, 7(7), 273; https://doi.org/10.3390/ijgi7070273 - 11 Jul 2018
Cited by 10 | Viewed by 5925
Abstract
Land use/land cover change (LUCC) analysis is a fundamental issue in regional and global geography that can accurately reflect the diversity of landscapes and detect differences or changes on the Earth's surface. However, a very heavy computational load is often unavoidable, especially when multi-temporal land cover data with fine spatial resolution are processed with complex procedures, so LUCC analysis over large areas can take a long time. This paper employs a graph-based spatial decomposition that represents the computational loads as graph vertices and edges and then uses balanced graph partitioning to decompose the LUCC analysis on spatial big data. For the decomposed tasks, a stream scheduling method is developed to exploit the parallelism in data moving, clipping, overlay analysis, area calculation, and transition matrix building. Finally, a change analysis is performed on the land cover data of China from 2015 to 2016, with each temporal dataset containing approximately 260 million complex polygons. The analysis took less than 6 h on a cluster of 15 workstations, a task that would otherwise take more than two weeks without any optimization.
(This article belongs to the Special Issue Geospatial Big Data and Urban Studies)
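
A small numpy sketch of the transition-matrix bookkeeping mentioned in the abstract, applied to two co-registered categorical land-cover rasters; the paper performs this on overlaid polygon data across a cluster, so this only illustrates the counting step (classes are assumed to be integer-coded 0..n_classes-1):

```python
import numpy as np

def transition_matrix(lc_t0, lc_t1, n_classes):
    """Cell counts of each (class at t0, class at t1) pair for two integer rasters of equal shape."""
    pairs = lc_t0.ravel().astype(np.int64) * n_classes + lc_t1.ravel().astype(np.int64)
    counts = np.bincount(pairs, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)   # rows: t0 class, columns: t1 class
```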

15 pages, 4116 KB  
Article
An Efficient Vector-Raster Overlay Algorithm for High-Accuracy and High-Efficiency Surface Area Calculations of Irregularly Shaped Land Use Patches
by Peng Xie, Yaolin Liu, Qingsong He, Xiang Zhao and Jun Yang
ISPRS Int. J. Geo-Inf. 2017, 6(6), 156; https://doi.org/10.3390/ijgi6060156 - 27 May 2017
Cited by 6 | Viewed by 6371
Abstract
The Earth's surface is uneven, and conventional area calculation methods are based on the assumption that the projection plane area can be obtained without considering the actual undulation of the Earth's surface and by simplifying the Earth's shape to a standard ellipsoid. However, the true surface area is important for investigating and evaluating land resources. In this study, the authors propose a new method based on an efficient vector-raster overlay algorithm (VROA-based method) to calculate the surface areas of irregularly shaped land use patches. In this method, a surface area raster file is first generated based on the raster-based digital elevation model (raster-based DEM). Then, a vector-raster overlay algorithm (VROA) is used that considers the precise clipping of raster cells using the vector polygon boundary. Xiantao City, Luotian County, and the Shennongjia Forestry District, which are representative of a plain landform, a hilly topography, and a mountain landscape, respectively, are selected to calculate the surface area. Compared with a traditional method based on triangulated irregular networks (TIN-based method), our method significantly reduces the processing time. In addition, our method effectively improves the accuracy compared with another traditional method based on raster-based DEM (raster-based method). Therefore, the method satisfies the requirements of large-scale engineering applications.
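
A rough sketch of the vector-raster overlay idea: each DEM cell's planar area is scaled by 1/cos(slope) to approximate its true surface area and weighted by the fraction of the cell covered by the land-use polygon, computed here with shapely clipping. The grid origin, cell size, and the precomputed per-cell slope array are assumptions, and this is not the authors' VROA implementation.

```python
import math
from shapely.geometry import Polygon, box

def patch_surface_area(polygon_coords, slope_rad, cell_size, x0=0.0, y0=0.0):
    """slope_rad[i][j]: slope (radians) of the DEM cell at row i, column j."""
    patch = Polygon(polygon_coords)
    total = 0.0
    for i, row in enumerate(slope_rad):
        for j, slope in enumerate(row):
            cell = box(x0 + j * cell_size, y0 + i * cell_size,
                       x0 + (j + 1) * cell_size, y0 + (i + 1) * cell_size)
            covered = patch.intersection(cell).area      # precise clipping of the raster cell
            if covered:
                total += covered / math.cos(slope)       # planar area -> surface area
    return total
```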