Article

A Unified Theoretical Analysis of Geometric Representation Forms in Descriptive Geometry and Sparse Representation Theory

by
Shuli Mei
College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
Mathematics 2025, 13(17), 2737; https://doi.org/10.3390/math13172737
Submission received: 22 June 2025 / Revised: 6 August 2025 / Accepted: 19 August 2025 / Published: 26 August 2025

Abstract

The primary distinction between technical design and engineering design lies in the role of analysis and optimization. From its inception, descriptive geometry has supported military and engineering applications, and its graphical rules inherently reflect principles of optimization—similar to the core ideas of sparse representation and compressed sensing. This paper explores the geometric and mathematical significance of the center line in symmetrical objects and the axis of rotation in solids of revolution, framing these elements within the theory of sparse representation. It further establishes rigorous correspondences between geometric primitives—points, lines, planes, and symmetric solids—and their sparse representations in descriptive geometry. By re-examining traditional engineering drawing techniques from the perspective of optimization analysis, this study reveals the hidden mathematical logic embedded in geometric constructions. The findings not only support the deeper integration of mathematical reasoning in engineering education but also provide an intuitive framework for teaching abstract concepts such as sparsity and signal reconstruction. This work contributes to interdisciplinary understanding between descriptive geometry, mathematical modeling, and engineering pedagogy.

1. Introduction

Descriptive geometry is widely recognized as the cornerstone of engineering drawing—a fundamental tool for translating engineering designs into tangible expressions across fields like mechanical engineering and architecture [1]. At its core, Gaspard Monge’s descriptive geometry relies on optimization principles to achieve precise projection-based representation. By maximizing the efficiency of capturing spatial relationships through geometric projection, his approach inherently aligns with the essence of optimal transport: it seeks the most effective way to map three-dimensional geometric features onto two-dimensional planes, a principle that still resonates in modern optimization methods. This connection is vividly reflected in applications such as the use of optimal transport theory in Generative Adversarial Networks (GANs), where the goal of optimally matching generated data distributions with real ones directly echoes Monge’s early explorations in geometric optimization.
Notably, the concept of sparsity—a linchpin of modern signal processing and compressed sensing—also traces its conceptual roots to Monge’s geometric framework. In his projection theory, redundancy is deliberately minimized by discarding non-essential information, leaving only the most meaningful features to form an efficient representation of spatial objects. This logic mirrors the core of sparsity theory: in signal processing, sparsity achieves efficient data expression by retaining critical information and eliminating redundancy; in dimensionality reduction, it streamlines complex data by focusing on key features. Just as optimal transport pursues the most efficient mapping between distributions, Monge’s projections optimize geometric expression by prioritizing essential information, establishing a profound conceptual continuity with sparsity.
Exploring the consistency between descriptive geometry and sparsity representation theory holds significant academic and practical value. On one hand, it reveals an enduring logical thread in optimization thinking—from Monge’s geometric projections to modern sparsity methods, the pursuit of “efficient expression through redundancy reduction” has been a cross-disciplinary core goal. This continuity can provide a theoretical foundation for integrating classical geometric methods with modern signal processing techniques, offering new perspectives for solving complex problems such as high-dimensional data visualization, engineering design optimization, and efficient signal reconstruction. On the other hand, clarifying their intrinsic connection may inspire innovations in interdisciplinary fields: for example, applying the geometric intuition of descriptive geometry to improve sparsity-based signal processing algorithms or leveraging sparsity principles to enhance the efficiency of engineering drawing and 3D reconstruction. In essence, this exploration is not merely a retrospective of theoretical origins but a bridge linking classical geometry with modern optimization, paving the way for more integrated and efficient solutions across engineering and information science.
In the context of parallel projection, affine transformation refers to a linear transformation that preserves the parallelism of lines, the incidence relationship between points and lines, and the ratio of line segments, which is the mathematical basis for mapping 3D figure features to 2D projections.
In descriptive geometry, the theory of affine transformation within parallel projection (a projection method in which the projection lines are parallel to one another, further divided into oblique projection, where the projection lines are not perpendicular to the projection plane, and orthographic projection, where they are perpendicular to it [2]) provides the basis for solving geometric problems by the method of changing planes. The aim of the plane-change method is to position the figure in a special orientation relative to the projection plane, allowing the true shape characteristics of the object to be captured. When a figure is in such a special orientation, the corresponding vector expression becomes a sparse representation of the figure's data. In engineering drawing, forms of representation such as sectional views, oblique views, partial views, and oblique sectional views employ far more projection planes than the number of basis vectors required in three-dimensional space. This constitutes a redundant representation. Such redundancy is precisely the necessary condition that enables sparse representation and compressed sensing.
The National Center for Engineering and Technology Education (NCETE) has distilled the core concepts of engineering design into constraints, optimization, and predictive analysis [3] (COPA). Engineering concepts involve numerous factors, including function, structure, materials, aesthetics, cost, process, ethics, mathematics, physics, chemistry, and information. Sparse and optimization theories are vital means of considering all these factors comprehensively. As a fundamental course in engineering, the theory of descriptive geometry and engineering drawing consistently reflects this engineering mindset and optimization theory [4].
The foundation of descriptive geometry lies in orthographical theory, which describes the geometric elements of spatial objects and their projection on the plane as affine transformations. The broader theory of projective transformations encompasses both parallel and central projection theories, and when extended to n-dimensional space, it forms the basis of big data analysis [5]. For instance, a recent doctoral thesis published in 2024 [6] by EPFL (École polytechnique fédérale de Lausanne) directly integrates projective geometry with deep learning. Recent research hotspots such as compressed sensing and sparse representation theory can also be understood as applications of projective theory. Since affine transformation is a special case of projective transformation, orthographical theory is a specific instance of projective theory. Compressed sensing and deep learning theories can be derived from the foundation of projective theory [7], and under certain conditions, compressed sensing theory can be deduced from orthographical theory.
It is well known that the mathematical foundations of compressed sensing and deep learning include optimization theory, matrix analysis, algebraic geometry, and so on. Matrix analysis and algebraic geometry often analyze artificial intelligence theory from the perspective of algebraic structures (transformation groups), which can be obscure and difficult to understand for engineering students. Most engineering students need to study “Descriptive Geometry and Engineering Drawing”. Descriptive geometry studies projection theories, including parallel projection and central projection theories, where parallel projection further includes oblique projection and orthographic projection. Orthographic projection, a foundational concept in descriptive geometry, refers to the parallel projection where the projection direction is perpendicular to the projection plane, ensuring that the projection retains the true shape of a figure when the figure is parallel to the projection plane. For multi-parallel projection theory, the mapping relationship between projections of 3D objects in space on projection planes is expressed as an affine transformation [8]. As previously mentioned, an affine transformation is a special case of a projective transformation. In the context of parallel projection theory, its projection characteristics include the invariance of parallelism in line projections, the invariance of the incidence between points and lines, and the invariance of the cross ratio. Under specific conditions, the projection of a line can exhibit properties of convergence and the characteristic of reflecting the true length of the line. These properties provide us with very intuitive tools for understanding compressed sensing and deep learning, which is also the initial motivation for our research into the compressed sensing theory behind descriptive geometry and engineering drawing.
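The projection invariants named above (parallelism, incidence, and ratio preservation) can be checked numerically. The following minimal sketch (all coordinates are illustrative, not taken from the paper) projects two parallel 3D segments orthographically onto the horizontal plane:

```python
import numpy as np

# Orthographic projection onto the horizontal plane (drop the z-coordinate):
# a minimal numerical check of the parallel-projection invariants.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # projection direction = z-axis

# Two parallel segments in 3D (same direction vector, different offsets).
A, B = np.array([0., 0., 1.]), np.array([2., 1., 3.])
C, D = np.array([1., 5., 0.]), np.array([3., 6., 2.])
a, b, c, d = (P @ v for v in (A, B, C, D))

# Invariant 1: parallelism is preserved (2D cross product of directions is zero).
d1, d2 = b - a, d - c
assert np.isclose(d1[0] * d2[1] - d1[1] * d2[0], 0.0)

# Invariant 2: the ratio in which a point divides a segment is preserved.
M = A + 0.25 * (B - A)            # point dividing AB in the ratio 1:3
assert np.allclose(P @ M, a + 0.25 * (b - a))
```

Central projection, by contrast, preserves only the cross ratio, which is why the stronger invariants above characterize the parallel case.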

2. Sparse and Redundant Representations in Orthographical Theory

The theory of sparse representation [9] is developed based on the foundation of compressed sensing theory:
$$\min_{D,X} \; \|Y - DX\|_F^2 \quad \text{s.t.} \quad \forall i,\ \|x_i\|_0 \le T_0$$
where $Y \in \mathbb{R}^{M \times L}$ represents the original data, $D \in \mathbb{R}^{M \times N}$ is the dictionary, $X \in \mathbb{R}^{N \times L}$ denotes the codes, $M$ is the feature dimension of the data, $L$ represents the number of samples, and $N$ is the size of the dictionary. In optimization problems, $T_0$ is a regularization parameter that adjusts the trade-off between sparsity and signal reconstruction accuracy.
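The model can be made concrete with a small synthetic example. The sketch below (dictionary size, sample count, and $T_0$ are arbitrary illustrative choices, not from the paper) builds an overcomplete dictionary and codes that satisfy the constraint $\|x_i\|_0 \le T_0$ exactly:

```python
import numpy as np

# A tiny synthetic instance of min ||Y - DX||_F^2 s.t. ||x_i||_0 <= T0.
rng = np.random.default_rng(0)
M, N, L, T0 = 3, 6, 4, 1

D = rng.normal(size=(M, N))          # overcomplete dictionary: N > M atoms
D /= np.linalg.norm(D, axis=0)       # normalize atoms to unit length

X = np.zeros((N, L))                 # codes: at most T0 non-zeros per column
for i in range(L):
    X[rng.integers(N), i] = rng.normal()
Y = D @ X                            # data synthesized exactly within the model

assert all(np.count_nonzero(X[:, i]) <= T0 for i in range(L))
assert np.isclose(np.linalg.norm(Y - D @ X, "fro"), 0.0)
```

In practice $Y$ is given and $D$, $X$ are learned; here the data are synthesized inside the model so the objective is exactly zero.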
In the theory of sparse representation, the atoms (column vectors) in the dictionary D correspond to the three projection axis vectors in the theory of descriptive geometry's orthographic projection. X represents the projection values of geometric elements on the axis vectors, while Y corresponds to the vectors describing the graphical elements. If the dictionary contains only the three coordinate axis vectors, it is impossible to achieve a sparse representation of any figure other than points on the axes and lines parallel to them [10].
To further intuitively understand the consistency between the representation methods of geometric elements in descriptive geometry and sparse representation theory, let us consider the following issues in descriptive geometry and engineering drawing:
Case 1. The relationship between the positioning of three-dimensional objects in the three projection planes system and their sparse representation.
As shown in Figure 1, there are two ways of expressing the three views of the same rectangular cuboid. Both methods clearly convey the shape and true form of the rectangular cuboid. Textbooks usually do not provide rules for the placement of three-dimensional objects [11], yet almost all adopt the approach shown in Figure 1b, referred to as Scheme B. The reason is straightforward: it is simpler to draw and easier to measure because Scheme B directly reflects the graphical features. Suppose the length, width, and height of the rectangular cuboid in the figure are $a$, $b$, and $c$, respectively. If the line segments in the figure are represented by vectors, the line segment parallel to the X-axis can be expressed as $(a, 0, 0)$, the line segment parallel to the Y-axis as $(0, b, 0)$, and the line segment parallel to the Z-axis as $(0, 0, c)$. Sparsity refers to the number of non-zero elements in a vector; the fewer the non-zero elements, the higher the sparsity. Clearly, Scheme B in Figure 1 better meets the sparsity requirements compared to Scheme A.
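The sparsity comparison between the two placements can be illustrated numerically: an axis-aligned edge vector (Scheme B) has a single non-zero component, while the same edge after an arbitrary tilt (standing in for Scheme A; the edge length and the 30° angle are illustrative) has more:

```python
import numpy as np

# Sparsity (number of non-zero components) of one cuboid edge, two placements.
a = 40.0
edge_b = np.array([a, 0.0, 0.0])          # Scheme B: edge parallel to the X-axis

theta = np.deg2rad(30)                    # Scheme A stand-in: cuboid tilted about Z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
edge_a = R @ edge_b                       # the same edge, no longer axis-aligned

def sparsity(v):
    return int(np.count_nonzero(~np.isclose(v, 0.0)))

assert sparsity(edge_b) == 1              # one non-zero component
assert sparsity(edge_a) == 2              # two non-zero components after tilting
```

The rotation changes only the placement, not the edge itself, so the loss of sparsity is purely a consequence of misalignment with the projection axes.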
Case 2. Dimensioning and sparsity representation.
As shown in Figure 2, a rectangular notch is cut out along the hypotenuse of a triangular area. When marking the dimensions of this figure, the majority of people would opt for Scheme A, while some beginners might choose Scheme B. Typically, we consider Scheme A to embody an engineering mindset, which aligns well with the theory of sparse representation. Both dimensioning schemes can uniquely determine the shape and size of the figure, but the dimensions in Scheme A are oriented in four directions, whereas those in Scheme B are oriented in only two. From vector analysis, we know that two independent vectors in two-dimensional space can serve as basis vectors for the plane. Although Scheme B's dimensions are limited to two directions, they are sufficient to clearly express the size and shape of the figure. Scheme A uses four directional dimensions, since dimensions parallel to the same projection axis (e.g., 35 and 50, both horizontal) belong to one direction. The remaining three directions (vertical, keyway slope, boss depth) are non-parallel and independent. This follows descriptive geometry: parallel projections share a dimension, avoiding redundancy, whereas non-parallel ones may introduce redundancy but provide clarity. This clarity arises because the dimensions in Scheme A align with the characteristic directions of the figure, so the notation reflects its shape features. This is the fundamental reason for the existence of sparse representation theory.
This annotated figure directly shows how descriptive geometry’s “dimensioning by feature” practice inherently realizes sparse representation—consistent with the theoretical assertion that “structural characteristics determine sparsity”.
Case 3. Symmetry Centre Line, Axis of Rotation, and Sparse Representation.
Symmetrical objects are defined as figures that can be mapped to themselves through mirror transformation relative to a symmetry axis (or symmetry plane), where the symmetry axis (or plane) remains invariant under such transformation. In descriptive geometry and engineering drawing, it is necessary to draw the symmetry centre line of symmetrical figures and the axis of rotation corresponding to the projection of rotational bodies, as they represent the features of the figures. In symmetrical figures, the two parts symmetrical to each other can be mapped through mirror transformation relative to the axis of symmetry. According to the theory of affine transformation, the invariant in mirror transformation is the axis of symmetry. The mapping between different generatrices in a surface of revolution is a rotational transformation relative to the axis of rotation. From the geometric meaning of the eigenvectors of a matrix, it is known that the symmetry centre line corresponds to the eigenvector of the mirror transformation matrix, and the axis of rotation corresponds to the eigenvector of the rotational transformation matrix. In reverse engineering, how can one determine whether a geometric shape, represented by point cloud data, possesses symmetrical characteristics? According to the geometric meaning of principal component analysis, it is easy to understand that the eigenvectors of the covariance matrix, which is constructed from the point cloud data of the geometric body, correspond to the centre line or axis of rotation of the shape.
The centre line and axis of rotation correspond to the eigenvectors of symmetrical bodies and rotational bodies, respectively. By converting the centre line and axis of rotation into parallel lines of the projection axis, the shape can be represented in a sparse manner.
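The principal-component argument for recovering a symmetry axis from point-cloud data can be sketched as follows (the synthetic 2D cloud and its 45° axis are illustrative assumptions, not data from the paper):

```python
import numpy as np

# Detecting a symmetry axis from point-cloud data via PCA: the eigenvectors of
# the covariance matrix recover the figure's centre line / axis directions.
rng = np.random.default_rng(1)
n = 500
# A symmetric, elongated figure in its local frame (long axis along x).
local = np.column_stack([3.0 * rng.uniform(-1, 1, n),
                         0.3 * rng.uniform(-1, 1, n)])
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R = np.array([[c, -s], [s, c]])       # rotate the figure by 45 degrees
cloud = local @ R.T

cov = np.cov(cloud, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
axis = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue

# The recovered axis matches the known symmetry direction (up to sign).
expected = np.array([c, s])
assert abs(abs(axis @ expected) - 1.0) < 1e-2
```

For 3D point clouds the same construction applies with a 3×3 covariance matrix; for a solid of revolution, the rotation axis emerges as the distinguished eigenvector.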
Case 4. Engineering Drawing Representation and Sparse Representation.
The methods of drawing representation include the six basic views; oblique views; auxiliary views; partial views; full-sectional views, half-sectional views, partial sectional views, oblique sectional views, and sections; various standard drawing methods; and simplified drawing methods. Figure 3 demonstrates the "redundant representation" in engineering drawing—a core concept linking descriptive geometry to sparse representation theory. The reduction gearbox housing, as a typical complex part, is expressed through multiple views (including a front view, a top view, and a sectional view). Although the number of these views exceeds the three basis vectors of 3D space (forming redundancy), this redundancy is necessary: each view focuses on specific structural features (e.g., the sectional view reveals internal cavities, while the front view shows external contours). This aligns with the logic of sparse representation, where redundancy in the observation domain enables the efficient extraction of key features in the sparse domain. As shown in Figure 3, the number of views for the reduction gearbox casing far exceeds the number of basis vectors in three-dimensional space. Clearly, this is a redundant expression. However, redundancy is not superfluous; it is precisely this redundancy that makes the graphical structure of three-dimensional forms more clearly expressed.
In summary, whether in the field of artificial intelligence, deep learning theory, or sparse representation and compressed sensing theory, the essence is the same as the theory of engineering drawing representation. The goal is to achieve the most sparse representation of data by capturing the structural characteristics of the data.

3. The Compressed Sensing Theory of Geometric Elements in Descriptive Geometry

The fundamental concept of compressed sensing theory is the integration of signal acquisition and compression, thereby avoiding the conventional approach in signal processing where the signal is first sampled and then compressed for transmission. The mathematical model of compressed sensing theory can be expressed as follows [12]:
$$\min_{\alpha} \; \|\alpha\|_0 \quad \text{s.t.} \quad y_0 = P x_0 = P D \alpha$$
where $P$ is the $M \times N$ observation matrix with $M \ll N$, $D$ is the sparse dictionary (sparse basis), $x_0$ is the compressible original signal of length $N$, $y_0$ is the observed signal of $M$ dimensions, and $\alpha$ is the coefficient vector under the sparse basis. In reference [13], the sparse basis $D$ is assumed to be an orthogonal matrix, which has led many works on compressed sensing to define $D$ as orthogonal. In fact, in reference [14], M. Elad and D. L. Donoho, among others, extended the sparse basis to non-orthogonal cases, defining this matrix as a dictionary.
Compared to the Nyquist–Shannon sampling theorem, the sparse dictionary is learned from data and therefore provides a more accurate representation of the function being approximated than the Shannon (sinc) interpolation function. To understand the sparse dictionary, we can explain it through the projection of points, lines, and planes, for example, as follows:
In 3D Euclidean space, although a point has three coordinate values, we can establish a local coordinate system in such a way that the point lies on one of the axes of this local coordinate system. This local coordinate system corresponds to the geometric structure of the point in space (although, strictly speaking, a single point does not constitute a structure; it is merely a special case of a structure). In the global coordinate system, we do not need to measure its position using all three global coordinate axes. A straight line in three-dimensional Euclidean space has a clear and simple geometric structure [15]. A local coordinate system can be established such that one of its axes is parallel or coincides with the straight line, thereby reflecting the geometric structure of the line. For a rectangle, which is characterized by 4 vertices with a total of 12 coordinate values, two axes of the local coordinate system can be aligned or parallel with two adjacent edges of the rectangle. In the local coordinate system, the four points are simply four points on a plane, and the number of non-zero coordinate values is reduced to eight. If we use the relative coordinates between the four characteristic points, the non-zero elements are further reduced to four, or even two (if the origin of the local coordinate system is translated to one of the characteristic points).
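The coordinate-count argument for the rectangle can be verified directly. In the sketch below (pose angles, the translation, and the width/height are illustrative), the rectangle carries 12 non-zero coordinate values in a general position, but only 4 once the local frame with origin at a vertex and axes along the edges is restored:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Rectangle in its adapted local frame: origin at a vertex, axes along the edges.
w, h = 4.0, 2.0
local = np.array([[0, 0, 0], [w, 0, 0], [w, h, 0], [0, h, 0]], dtype=float)

# Place it in general position: rotate about two axes, then translate.
R = rot_z(0.5) @ rot_x(0.3)
t = np.array([1.0, 2.0, 3.0])
world = local @ R.T + t

nnz = lambda pts: np.count_nonzero(~np.isclose(pts, 0.0))
assert nnz(world) == 12          # general position: all 12 values non-zero

# Undoing the pose restores the sparse local description.
recovered = (world - t) @ R
assert np.allclose(recovered, local)
assert nnz(recovered) == 4       # only w and h survive, each appearing twice
```

The local frame itself (the rotation R and translation t) must of course be recorded, which is exactly the role played by the new projection plane system in the plane-change method.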
Next, we analyze the compressed sensing theory corresponding to geometric elements such as points, lines, planes, and volumes from the perspective of descriptive geometry, including observation matrices and sensing matrices [16].

3.1. The Sparse Representation and Reconstruction Model of Points

In the context of this manuscript, the “method of changing plane” refers to a core technique in descriptive geometry for simplifying the projection analysis of spatial geometric elements. Its fundamental principle is that when a spatial figure (such as a line, plane, or solid) is in a general position (i.e., not parallel, perpendicular, or coinciding with the original projection planes), a new projection plane is introduced to replace one of the original projection planes (e.g., replacing the horizontal projection plane H or the vertical projection plane V). This new projection plane is positioned to be parallel or perpendicular to the key features of the target figure, thereby converting the general position of the figure into a special position (e.g., a line perpendicular to the new projection plane or a plane parallel to the new projection plane). Such a transformation retains the essential geometric properties of the figure (e.g., true length of lines, true shape of planes) while simplifying its projection expression, which aligns with the sparse representation logic of “retaining key information and reducing redundancy” discussed in this paper.
This technique is a classic approach in descriptive geometry, and its specific operational rules and mathematical properties can be further referenced in relevant theoretical frameworks and application studies of projective geometry.
As shown in Figure 4, in 3D Euclidean space, let the coordinates of point A be $(x_a, y_a, z_a)$. During the first plane change, the new projection axis $O_1X_1$ passes through the horizontal projection $a$ of point A. In the new projection plane system $(O_1X_1,\ H/V_1)$, point A actually lies on the projection plane $V_1$. In the second plane change, the new projection axis $O_2X_2$ passes through $a_1$, and in the newly formed projection plane system $(O_2X_2,\ V_1/H_2)$, point A lies on the projection axis $O_2X_2$, with its coordinates expressed as $(x_{a_2}, 0, 0)$.
The new projection plane system $(O_2X_2,\ V_1/H_2)$ can have infinite variations. The simplest form is the projection plane system obtained by translation, where the axis $O_2X_2$ passes through point A, such that the three-projection plane system can still be represented in the form of a unit matrix $\psi$.
$$\psi = \begin{pmatrix} x_a & x_1 & x_2 \\ y_a & y_1 & y_2 \\ z_a & z_1 & z_2 \end{pmatrix}$$
In the theory of compressed sensing, this matrix is referred to as a sparse matrix or a sensing matrix. Professor David L. Donoho from Stanford University suggests that this matrix does not need to be an orthogonal matrix [14], which simplifies our subsequent analysis.
Theorem 1. 
A measurement matrix $\phi \in \mathbb{R}^{1 \times 3}$ corresponding to a point $A(x_a, y_a, z_a)$ in 3D Euclidean space allows for the precise reconstruction of the original signal from the measurement signal.
Proof. 
Without loss of generality, let the measurement matrix (projection matrix) be denoted as $\phi = \begin{pmatrix} m & n & p \end{pmatrix}$. According to the theory of compressed sensing, as long as the measurement matrix and the vector formed by the three coordinate values of the known point A are not orthogonal, we have the following:
(1) The measurement signal:
$$y = \phi x = \begin{pmatrix} m & n & p \end{pmatrix} \begin{pmatrix} x_a \\ y_a \\ z_a \end{pmatrix} = m x_a + n y_a + p z_a$$
(2) $\psi \alpha = x$, where $\psi$ is a sparse matrix; $x$ is the original signal, corresponding to a point in 3D Euclidean space; and $\alpha$ is the sparse coefficient. Therefore, we have
$$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad y = \phi \psi \alpha$$
because
$$y = \phi \psi \alpha = \begin{pmatrix} m & n & p \end{pmatrix} \begin{pmatrix} x_a & x_1 & x_2 \\ y_a & y_1 & y_2 \\ z_a & z_1 & z_2 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{pmatrix} = \begin{pmatrix} m x_a + n y_a + p z_a & m x_1 + n y_1 + p z_1 & m x_2 + n y_2 + p z_2 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{pmatrix} = \begin{pmatrix} y & m x_1 + n y_1 + p z_1 & m x_2 + n y_2 + p z_2 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{pmatrix}$$
It is evident that by setting $\alpha_1 = 1$ and $\alpha_2 = \alpha_3 = 0$, Equation (6) can be satisfied, and $\|\alpha\|_0 = 1$, thereby meeting the sparsity requirement. Next, we use the sparse coefficients $\alpha$ obtained in this way to reconstruct the original signal $x$, which represents the coordinates of point A.
According to the original signal reconstruction steps in compressed sensing theory [13], we have
$$x = \psi \alpha = \begin{pmatrix} x_a & x_1 & x_2 \\ y_a & y_1 & y_2 \\ z_a & z_1 & z_2 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{pmatrix} = \begin{pmatrix} x_a & x_1 & x_2 \\ y_a & y_1 & y_2 \\ z_a & z_1 & z_2 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} x_a \\ y_a \\ z_a \end{pmatrix}$$
It is evident that an exact reconstruction of the original signal x has been achieved. Hence, the proof is complete. □
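Theorem 1 can be checked numerically. In the sketch below, point A and the entries of the measurement matrix are illustrative values satisfying the stated non-orthogonality condition:

```python
import numpy as np

# Numerical check of Theorem 1: a 1x3 measurement of a point in 3D space.
A = np.array([2.0, -1.0, 3.0])                 # the original signal x
phi = np.array([[1.0, 2.0, -1.0]])             # measurement matrix (m, n, p)
assert not np.isclose(phi @ A, 0.0)            # phi and A are not orthogonal
y = phi @ A                                    # measured signal (one value)

# Sparse matrix psi: first column is A itself; the rest just keep psi invertible.
psi = np.column_stack([A, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
alpha = np.array([1.0, 0.0, 0.0])              # the 1-sparse coefficient vector

assert np.allclose(phi @ psi @ alpha, y)       # alpha is consistent with y
assert np.allclose(psi @ alpha, A)             # exact reconstruction of x
assert np.count_nonzero(alpha) == 1            # ||alpha||_0 = 1
```

The construction mirrors the proof: because A is itself a column of the sensing matrix, the 1-sparse code $\alpha = (1, 0, 0)^T$ reproduces the measurement and the signal exactly.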

3.2. The Sparsity and Reconstruction Representation of a Straight Line

According to the descriptive geometry theory, when the new projection axis is parallel to a straight line, in the new projection system, this straight line becomes a projection plane parallel line. This is because the parallelism between the projection axis and the straight line ensures that the straight line maintains a fixed positional relationship with the corresponding projection plane in the new system, thus showing the characteristics of a projection plane parallel line. Its projection onto the projection plane parallel to it will reflect the actual length of the straight line, which is a key feature of projection plane parallel lines in descriptive geometry. In addition, for the situation where the examined plane is first made perpendicular to the new projection plane and then parallel to the latest projection plane, it is essentially a step-by-step transformation using the plane-change method. Making the plane perpendicular to the new projection plane first simplifies the projection of the plane, and then further changing the projection plane to make it parallel to the latest projection plane allows us to obtain the true shape of the plane, which is crucial for accurately analyzing and solving geometric problems involving planes. As for why a straight line is used to position the plane into a special relation with a new projection plane, it is because a plane can be uniquely determined by a straight line and a point not on the line, by two intersecting straight lines, etc. By controlling the positional relationship between the straight line and the new projection plane, we can indirectly control the positional relationship between the plane and the new projection plane. This method simplifies the positioning process and is in line with the basic principles and research methods of descriptive geometry.
It should be noted that the signal being processed is a straight line, and a line segment can be determined by two points, which already confirms the sparsity of the signal. In descriptive geometry, the plane-change method is essentially a sequence of rotations around coordinate axes that adjusts the orientation of the projection planes relative to the spatial line. To further achieve the sparse representation of a straight line, one can use two plane transformations to convert a general straight line into a line perpendicular to the projection plane, and then, by translating the coordinate system so that one of its axes coincides with the line, the maximally sparse representation of the line can be realized. As shown in Figure 5, the straight line segment MN composed of points $M(x_m, y_m, z_m)$ and $N(x_n, y_n, z_n)$ is a general position straight line, which, after two plane transformations, becomes a line perpendicular to the projection plane in the new projection plane system.
For convenience of description, let $\Delta x = x_n - x_m$, $\Delta y = y_n - y_m$, and $\Delta z = z_n - z_m$. Assume that after two changes in the projection plane, in the new projection plane system $(O_2X_2,\ V_1/H_2)$, the projection axis $O_2X_2$ coincides with the line segment MN, which can be represented in vector form as $(x_{m_2}, 0, 0, x_{m_2}+L, 0, 0)$; in the original projection plane system $(OX,\ V/H)$, the corresponding line can be expressed in vector form as $(x_m, y_m, z_m, x_n, y_n, z_n)$. Therefore, the sparse matrix $\psi$ can be defined as
$$\psi = \begin{pmatrix} x_m & x_1 & x_2 & 0 & 0 & 0 \\ y_m & y_1 & y_2 & 0 & 0 & 0 \\ z_m & z_1 & z_2 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_n & x_3 & x_4 \\ 0 & 0 & 0 & y_n & y_3 & y_4 \\ 0 & 0 & 0 & z_n & z_3 & z_4 \end{pmatrix}$$
According to compressed sensing theory, the original line signal can be represented using a sparse basis as
$$x = \begin{pmatrix} x_m \\ y_m \\ z_m \\ x_n \\ y_n \\ z_n \end{pmatrix} = \psi \alpha = \begin{pmatrix} x_m & x_1 & x_2 & 0 & 0 & 0 \\ y_m & y_1 & y_2 & 0 & 0 & 0 \\ z_m & z_1 & z_2 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_n & x_3 & x_4 \\ 0 & 0 & 0 & y_n & y_3 & y_4 \\ 0 & 0 & 0 & z_n & z_3 & z_4 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \alpha_4 \\ \alpha_5 \\ \alpha_6 \end{pmatrix}$$
By direct observation, the solution for the sparse coefficient $\alpha$ can be obtained as $\alpha_1 = \alpha_4 = 1$, $\alpha_2 = \alpha_3 = \alpha_5 = \alpha_6 = 0$. We shall not delve into the methods for solving the sparse coefficients at this juncture, but it is important to note that this is the sparsest solution for $\alpha$.
Next, we present the compressed sensing theorem concerning a line segment in the form of a theorem [17].
Theorem 2. 
The measurement matrix $\phi \in \mathbb{R}^{2 \times 6}$ corresponding to the line segment MN formed by the two points $M(x_m, y_m, z_m)$ and $N(x_n, y_n, z_n)$ in three-dimensional Euclidean space allows for the accurate reconstruction of the original signal from the measured data.
Proof. 
Without loss of generality, let the measurement matrix (projection matrix) be
$$\phi = \begin{pmatrix} a_1 & a_2 & a_3 & a_4 & a_5 & a_6 \\ b_1 & b_2 & b_3 & b_4 & b_5 & b_6 \end{pmatrix}$$
According to the compressed sensing theory, we have the following:
(1) The measurement signal:
$$y = \phi x = \begin{pmatrix} a_1 & a_2 & a_3 & a_4 & a_5 & a_6 \\ b_1 & b_2 & b_3 & b_4 & b_5 & b_6 \end{pmatrix} \begin{pmatrix} x_m \\ y_m \\ z_m \\ x_n \\ y_n \\ z_n \end{pmatrix} = \begin{pmatrix} a_1 x_m + a_2 y_m + a_3 z_m + a_4 x_n + a_5 y_n + a_6 z_n \\ b_1 x_m + b_2 y_m + b_3 z_m + b_4 x_n + b_5 y_n + b_6 z_n \end{pmatrix}$$
(2) $\psi \alpha = x$, where $\psi$ is a sparse matrix; $x$ is the original signal, corresponding to two points in 3D Euclidean space; and $\alpha$ is the sparse coefficient. Therefore,
$$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad y = \phi \psi \alpha$$
because
$$y = \phi \psi \alpha = \begin{pmatrix} a_1 & a_2 & a_3 & a_4 & a_5 & a_6 \\ b_1 & b_2 & b_3 & b_4 & b_5 & b_6 \end{pmatrix} \begin{pmatrix} x_m & x_1 & x_2 & 0 & 0 & 0 \\ y_m & y_1 & y_2 & 0 & 0 & 0 \\ z_m & z_1 & z_2 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_n & x_3 & x_4 \\ 0 & 0 & 0 & y_n & y_3 & y_4 \\ 0 & 0 & 0 & z_n & z_3 & z_4 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \alpha_4 \\ \alpha_5 \\ \alpha_6 \end{pmatrix} = \begin{pmatrix} k_{11} & k_{12} & k_{13} & k_{14} & k_{15} & k_{16} \\ k_{21} & k_{22} & k_{23} & k_{24} & k_{25} & k_{26} \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \alpha_4 \\ \alpha_5 \\ \alpha_6 \end{pmatrix}$$
where
$$\begin{aligned} k_{11} &= a_1 x_m + a_2 y_m + a_3 z_m, & k_{12} &= a_1 x_1 + a_2 y_1 + a_3 z_1, & k_{13} &= a_1 x_2 + a_2 y_2 + a_3 z_2, \\ k_{14} &= a_4 x_n + a_5 y_n + a_6 z_n, & k_{15} &= a_4 x_3 + a_5 y_3 + a_6 z_3, & k_{16} &= a_4 x_4 + a_5 y_4 + a_6 z_4, \\ k_{21} &= b_1 x_m + b_2 y_m + b_3 z_m, & k_{22} &= b_1 x_1 + b_2 y_1 + b_3 z_1, & k_{23} &= b_1 x_2 + b_2 y_2 + b_3 z_2, \\ k_{24} &= b_4 x_n + b_5 y_n + b_6 z_n, & k_{25} &= b_4 x_3 + b_5 y_3 + b_6 z_3, & k_{26} &= b_4 x_4 + b_5 y_4 + b_6 z_4 \end{aligned}$$
It is not difficult to see from Equation (11) that by choosing $\alpha_1 = \alpha_4 = 1$, $\alpha_2 = \alpha_3 = \alpha_5 = \alpha_6 = 0$, Equation (9) can be satisfied, and $\|\alpha\|_0 = 2$, which meets the sparsification requirement. Using Equation (12), the sparse coefficient $\alpha$ can be obtained to reconstruct the original signal $x$. This completes the proof of the theorem. □
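Theorem 2 admits the same kind of numerical check as Theorem 1; the endpoints M, N and the entries of the 2×6 measurement matrix below are illustrative values:

```python
import numpy as np

# Numerical check of Theorem 2: a 2x6 measurement of a segment MN in 3D space.
M = np.array([1.0, 2.0, 3.0])
N = np.array([4.0, 0.0, -1.0])
x = np.concatenate([M, N])                     # original 6-dimensional signal

# Block-diagonal sparse matrix psi: M and N each lead their own 3x3 block.
blockM = np.column_stack([M, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
blockN = np.column_stack([N, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
psi = np.block([[blockM, np.zeros((3, 3))],
                [np.zeros((3, 3)), blockN]])

phi = np.array([[1.0, 0.0, 2.0, -1.0, 1.0, 0.0],
                [0.0, 1.0, -1.0, 2.0, 0.0, 1.0]])
y = phi @ x                                    # 2-dimensional measurement

alpha = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])
assert np.allclose(phi @ psi @ alpha, y)       # consistent with the measurement
assert np.allclose(psi @ alpha, x)             # exact reconstruction
assert np.count_nonzero(alpha) == 2            # ||alpha||_0 = 2
```

As in the proof, the first and fourth columns of $\psi$ carry the endpoints, so the 2-sparse code $\alpha_1 = \alpha_4 = 1$ suffices.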
It is easy to understand that in the sensing matrix ψ , the first and fourth column vectors are the most crucial, while the second, third, fifth, and sixth column vectors only need to be uncorrelated with the first and fourth column vectors. For this reason, the observation matrix only needs to retain the information from the first and fourth column vectors, so an observation matrix with two rows and six columns will suffice to meet the requirements.
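This observation can be checked numerically. The sketch below is a minimal demonstration with arbitrarily chosen endpoints and random matrix entries (none of the values come from the paper); it builds the 2×6 measurement matrix, forms the observation $y$, and recovers the two non-zero coefficients by solving the 2×2 system restricted to the first and fourth columns of $\phi\psi$, which is where the information resides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Endpoints M and N of the segment (arbitrary demo values).
M = np.array([1.0, 2.0, 3.0])
N = np.array([4.0, 0.0, -1.0])
x = np.concatenate([M, N])            # original 6-dimensional signal

# Dictionary psi: columns 1 and 4 hold M and N; the remaining columns
# only need to be uncorrelated with them, so random entries suffice.
psi = np.zeros((6, 6))
psi[:3, 0] = M
psi[3:, 3] = N
psi[:3, 1:3] = rng.normal(size=(3, 2))
psi[3:, 4:6] = rng.normal(size=(3, 2))

# 2x6 measurement matrix phi (random entries a_i, b_i).
phi = rng.normal(size=(2, 6))
y = phi @ x                           # measured signal

# Knowing that only columns 1 and 4 carry information, recover the
# two coefficients from a 2x2 linear system.
K = phi @ psi
a14 = np.linalg.solve(K[:, [0, 3]], y)
alpha = np.zeros(6)
alpha[[0, 3]] = a14

x_rec = psi @ alpha                   # reconstructed signal
assert np.allclose(x_rec, x)
print(np.round(alpha, 6))             # recovered 2-sparse coefficients
```

The recovery step assumes the support is known, which is exactly the point made above: the observation matrix only needs to capture the information in the first and fourth columns.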
For a line segment, another, more general vector expression transforms the line segment into a line parallel to a coordinate axis, using a linear combination of the vector formed by the endpoint coordinates and the coordinate axis vectors. As shown in Figure 6, the length of the line segment $AB$ is $L = \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2}$, where $\Delta x = x_b - x_a$, $\Delta y = y_b - y_a$, and $\Delta z = z_b - z_a$, and the red unit vector $X$ can be expressed as $X = \frac{1}{L}(\Delta x, \Delta y, \Delta z)^T$. Its direction is parallel to the line segment $AB$.
In this representation, the local coordinate axes that make up the sensing matrix are parallel to, but do not coincide with, the direction of the known line segment. The line segment can be expressed as the vector $x = (x_a, y_a, z_a, x_a + \Delta x, y_a + \Delta y, z_a + \Delta z)^T$, and the corresponding sensing matrix is
$$\Psi = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & \Delta x & 0 & 0 \\ 0 & 1 & 0 & \Delta y & 1 & 0 \\ 0 & 0 & 1 & \Delta z & 0 & 1 \end{pmatrix}$$
The sparse expression of the line segment $AB$ is given by $x = \Psi\alpha$, and its corresponding matrix expression is
$$\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & \Delta x & 0 & 0 \\ 0 & 1 & 0 & \Delta y & 1 & 0 \\ 0 & 0 & 1 & \Delta z & 0 & 1 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \alpha_4 \\ \alpha_5 \\ \alpha_6 \end{pmatrix} = \begin{pmatrix} x_a \\ y_a \\ z_a \\ x_a + \Delta x \\ y_a + \Delta y \\ z_a + \Delta z \end{pmatrix}$$
According to compressed sensing theory, its sparsest solution can be expressed by the following optimization formula:
$$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad x = \Psi\alpha$$
It can be directly observed that $\alpha_1 = x_a$, $\alpha_2 = y_a$, $\alpha_3 = z_a$, $\alpha_4 = 1$, and $\alpha_5 = \alpha_6 = 0$. At first glance, the sparse coefficient vector $\alpha$ does not appear sufficiently sparse. However, the matrix possesses a certain degree of generality, enabling it to represent all lines parallel to line $AB$, such as the magenta parallel lines shown in Figure 6. Furthermore, when additional points are added on line $AB$, the number of non-zero elements in the sparse coefficient vector $\alpha$ does not need to increase. For instance, when a point $C$ is added on the extension of line $AB$ such that points $A$, $B$, and $C$ are evenly distributed, the signal containing the three points is still
$$x = (x_a, y_a, z_a, x_a + \Delta x, y_a + \Delta y, z_a + \Delta z, x_a + 2\Delta x, y_a + 2\Delta y, z_a + 2\Delta z)^T$$
The sparse expression is x = Ψ α , and the corresponding matrix form is
$$\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & \Delta x & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & \Delta y & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & \Delta z & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 2\Delta x & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 2\Delta y & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2\Delta z & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \alpha_4 \\ \alpha_5 \\ \alpha_6 \\ \alpha_7 \\ \alpha_8 \\ \alpha_9 \end{pmatrix} = \begin{pmatrix} x_a \\ y_a \\ z_a \\ x_a + \Delta x \\ y_a + \Delta y \\ z_a + \Delta z \\ x_a + 2\Delta x \\ y_a + 2\Delta y \\ z_a + 2\Delta z \end{pmatrix}$$
Direct observation reveals that
$$\alpha_1 = x_a,\quad \alpha_2 = y_a,\quad \alpha_3 = z_a,\quad \alpha_4 = 1,\quad \alpha_5 = \alpha_6 = \alpha_7 = \alpha_8 = \alpha_9 = 0$$
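One can verify directly that the enlarged dictionary still yields a 4-sparse coefficient vector. The sketch below uses arbitrary demo values for the endpoint and direction increments (not values from the paper), builds the 9×9 dictionary row by row, and checks that one step along the direction column reproduces all three collinear points:

```python
import numpy as np

# Arbitrary demo values for endpoint A and the direction increments.
xa, ya, za = 1.0, 2.0, 3.0
dx, dy, dz = 0.5, -1.0, 2.0

# 9x9 dictionary: rows 1-3 reproduce A, rows 4-6 add (dx, dy, dz),
# rows 7-9 add 2*(dx, dy, dz); the trailing columns are spare basis
# vectors that stay unused for collinear points.
Psi = np.zeros((9, 9))
for k in range(3):                         # one block per point
    Psi[3*k:3*k+3, 0:3] = np.eye(3)        # columns for xa, ya, za
    Psi[3*k:3*k+3, 3] = k * np.array([dx, dy, dz])  # direction column
Psi[4, 4] = Psi[5, 5] = 1.0                # spare columns, second block
Psi[6, 6] = Psi[7, 7] = Psi[8, 8] = 1.0    # spare columns, third block

alpha = np.zeros(9)
alpha[:3] = [xa, ya, za]                   # position of A
alpha[3] = 1.0                             # one step along the direction

x = Psi @ alpha
expected = [xa, ya, za,
            xa + dx, ya + dy, za + dz,
            xa + 2*dx, ya + 2*dy, za + 2*dz]
assert np.allclose(x, expected)
print(np.count_nonzero(alpha))             # -> 4
```

Adding the third collinear point enlarged the signal from six to nine entries, yet the number of non-zero coefficients stayed at four.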
Similar to the analysis of Theorem 2, the key to the aforementioned sensing matrix lies in the first four column vectors. As long as the observation matrix can capture the first four column vectors of the sensing matrix, it guarantees the accurate reconstruction of the original signal x from the observed signals.
In descriptive geometry, the core purpose of the method of changing planes is to transform geometric elements in general position (such as straight lines and planes) into special position lines in a new coordinate system. This process is analogous to the dictionary learning process in sparse representation theory: both achieve the sparsification of geometric element expression by leveraging the intrinsic structural characteristics of the target.
Specifically, the method of changing planes adjusts the projection orientation through successive transformations, reducing redundant coordinate information while retaining essential features (e.g., true length of lines or true shape of planes). Similarly, dictionary learning in sparse representation constructs an optimal basis that aligns with the signal’s intrinsic structure, minimizing non-zero coefficients in the expression. Their shared essence lies in extracting key features to simplify expression, thus establishing a natural connection between classical descriptive geometry practices and modern sparse representation theory.

3.3. Sparsity of Planar Polygons

In three-dimensional Euclidean space, a point requires three coordinate values to determine its position; a line segment requires two endpoints, i.e., six coordinate values. However, when a point lies on one of the coordinate axes, only one of its coordinate values is non-zero; similarly, a line segment on an axis has only two non-zero coordinate values. For instance, point $A$ on the X-axis has the coordinates $(x_a, 0, 0)$, and the line segment $AB$ on the X-axis has the coordinates $(x_a, 0, 0, x_b, 0, 0)$.
For a planar polygon formed by $k$ characteristic points, the method of changing planes can be employed so that the polygon's plane coincides with a projection plane, such as the horizontal projection plane. In this case, the z-coordinates of these $k$ characteristic points are zero, and the vector composed of the coordinate values of these $k$ points is $2k$-sparse. As shown in Figure 7, the four characteristic points $A, B, C, D$ of a quadrilateral lie on the same plane, forming a vector with twelve elements, of which four are zero.
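The effect of making the polygon's plane coincide with a projection plane can be reproduced numerically. The sketch below (with hypothetical vertex coordinates) finds the plane normal of four coplanar points via the SVD and re-expresses the points in that basis, after which every third coordinate vanishes and the coordinate vector becomes $2k$-sparse:

```python
import numpy as np

# Four coplanar points: a tilted rectangle (hypothetical values).
P = np.array([[0, 0, 0],
              [2, 0, 1],
              [2, 3, 1],
              [0, 3, 0]], dtype=float)

# Centre the points and take the SVD; the right-singular vector with
# the smallest singular value is the plane normal.
centred = P - P.mean(axis=0)
_, _, Vt = np.linalg.svd(centred)
R = Vt                      # rows: two in-plane axes, then the normal

# Expressing the points in this basis plays the role of a change of
# projection planes: the third coordinate of every point becomes zero.
Q = centred @ R.T
assert np.allclose(Q[:, 2], 0.0)

coords = Q.ravel()          # 12 coordinate values, 4 of them zero
print(np.sum(np.isclose(coords, 0.0)))   # -> 4
```

The SVD step is the numerical analogue of choosing a new projection plane aligned with the figure, which is exactly what the change-of-planes construction does graphically.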
In the theory of the sparse representation of lines, only one dimension of the three-dimensional coordinate system is utilized, and the other two coordinate axes are effectively redundant. For simplicity, we will directly use planar figures as examples to illustrate the principle of compressed sensing for planar figures with certain geometric structures. As shown in Figure 7, a seemingly complex planar polygon has edges that actually follow only two directions. In the local coordinate system formed by the two red-marked vectors, all contour edges are either horizontal or vertical lines. Compressed sensing theory exploits this particular geometric structure.
As illustrated in Figure 8, the principal directions are immediately apparent, although the structure is not symmetrical. Although the data are planar, the principal components here span three directions, and the approach extends to additional directions. The model can be described as follows:
$$\min_{z \in \mathbb{R}^d} \|z\|_0 \quad \text{s.t.} \quad \Phi z = u$$
where
$$\Phi = \begin{pmatrix} x_a - x_b & \cdots & x_f - x_e & x_g - x_a & x_b - x_a & x_1 & \cdots & x_n \\ y_a - y_b & \cdots & y_f - y_e & y_g - y_a & y_b - y_a & y_1 & \cdots & y_n \end{pmatrix}, \qquad u = \begin{pmatrix} x \\ y \end{pmatrix}$$
Clearly, this is a problem of sparse representation and compressed sensing.
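The $\ell_0$ problem above is NP-hard in general, but greedy solvers such as orthogonal matching pursuit (OMP) handle it well when the solution is sparse. Below is a minimal, generic OMP sketch; the dictionary is a hypothetical demo construction (canonical basis columns plus weakly correlated extras), not the specific $\Phi$ of the figure:

```python
import numpy as np

def omp(Phi, u, k):
    """Greedy sketch of min ||z||_0 s.t. Phi z = u (orthogonal matching pursuit)."""
    residual = u.astype(float)
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Re-fit on the whole support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], u, rcond=None)
        residual = u - Phi[:, support] @ coef
    z = np.zeros(Phi.shape[1])
    z[support] = coef
    return z

rng = np.random.default_rng(0)
# Demo dictionary: canonical basis columns plus weakly correlated extras.
Phi = np.hstack([np.eye(8), 0.1 * rng.normal(size=(8, 12))])
z_true = np.zeros(20)
z_true[3], z_true[5] = 2.0, -1.5       # 2-sparse ground truth
u = Phi @ z_true

z_hat = omp(Phi, u, k=2)
assert np.allclose(z_hat, z_true)
```

With a well-conditioned dictionary, two greedy iterations recover the 2-sparse coefficient vector exactly.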

3.4. Sparsity of Solids

Solids of revolution are three-dimensional figures formed by rotating a planar curve (the generatrix) around a fixed straight line (the axis of rotation) lying in the same plane. Basic solids of revolution (cylinders, cones, spheres) and polyhedral solids with symmetrical properties (prisms and pyramids), despite having numerous lines in their diagrams, can achieve compressed sensing through their axis of symmetry or axis of rotation. It can be demonstrated that the eigenvectors of the covariance matrix formed by the characteristic points of a solid correspond to the axis of symmetry or the axis of rotation of the solid. This is the primary reason for performing singular value decomposition on data matrices in sparse representation, compressed sensing, and deep learning [18]. For the symmetric geometric bodies involved in descriptive geometry and engineering drawing, the axis of rotation or symmetry can be obtained directly from the eigenvectors of the data matrix.
Store the given m pieces of n-dimensional data in the n × m matrix D, then perform zero-mean normalization on each row of the matrix D. Next, construct the covariance matrix of the dataset and compute its eigenvalues and eigenvectors. Arrange the eigenvectors in a matrix, sorted by their corresponding eigenvalues from largest to smallest. Select the first k rows to form the matrix P. By multiplying the matrix P with the original dataset matrix, the dimensionally reduced data can be obtained.
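The procedure just described can be stated as a short routine. The sketch below follows the steps literally (row-wise zero-mean normalization, covariance matrix, sorted eigen-decomposition, projection) on a hypothetical dataset of 3D points scattered near a line:

```python
import numpy as np

def pca_reduce(D, k):
    """Reduce n-dimensional data stored column-wise in D (n x m) to k dims."""
    # Step 1: zero-mean normalisation of each row (each variable).
    Dc = D - D.mean(axis=1, keepdims=True)
    # Step 2: covariance matrix of the dataset.
    C = Dc @ Dc.T / D.shape[1]
    # Step 3: eigenvalues/eigenvectors, sorted from largest to smallest.
    eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]
    # Step 4: the first k eigenvectors, as rows, form the projection matrix P.
    P = eigvecs[:, order[:k]].T
    # Step 5: project the original dataset.
    return P @ Dc, P

# Hypothetical 3D points lying close to the line through (1, 2, 3).
rng = np.random.default_rng(0)
t = rng.normal(size=50)
D = np.outer([1.0, 2.0, 3.0], t) + 0.01 * rng.normal(size=(3, 50))

reduced, P = pca_reduce(D, k=1)
print(reduced.shape)    # -> (1, 50)
```

For this dataset the single retained principal direction is (up to sign) the normalized direction (1, 2, 3) of the underlying line, which is the property exploited for axes of symmetry below.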
If we take the characteristic points that define a line or a plane as a dataset, can we use principal component analysis (PCA) to analyze the principal components of this dataset? Without loss of generality, let us consider the rectangular cuboid shown in Figure 9 as an example. The axis of the rectangular cuboid (depicted by the blue dashed line) is in a general position with respect to the coordinate system shown. Based on the symmetry of the rectangular cuboid’s vertices relative to the midpoint O, it can be deduced that
$$x_1 = -x_7,\; y_1 = -y_7,\; z_1 = -z_7, \qquad x_2 = -x_8,\; y_2 = -y_8,\; z_2 = -z_8,$$
$$x_3 = -x_5,\; y_3 = -y_5,\; z_3 = -z_5, \qquad x_4 = -x_6,\; y_4 = -y_6,\; z_4 = -z_6$$
According to the principles of principal component analysis, the set of points formed by the eight vertices of the rectangular cuboid can be represented in the following matrix form:
$$D = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & x_7 & x_8 \\ y_1 & y_2 & y_3 & y_4 & y_5 & y_6 & y_7 & y_8 \\ z_1 & z_2 & z_3 & z_4 & z_5 & z_6 & z_7 & z_8 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 & -x_3 & -x_4 & -x_1 & -x_2 \\ y_1 & y_2 & y_3 & y_4 & -y_3 & -y_4 & -y_1 & -y_2 \\ z_1 & z_2 & z_3 & z_4 & -z_3 & -z_4 & -z_1 & -z_2 \end{pmatrix}$$
The covariance matrix corresponding to matrix D is
$$C = \frac{1}{8} D D^T = \frac{1}{4} \begin{pmatrix} \sum_{i=1}^{4} x_i^2 & \sum_{i=1}^{4} x_i y_i & \sum_{i=1}^{4} x_i z_i \\ \sum_{i=1}^{4} x_i y_i & \sum_{i=1}^{4} y_i^2 & \sum_{i=1}^{4} y_i z_i \\ \sum_{i=1}^{4} x_i z_i & \sum_{i=1}^{4} y_i z_i & \sum_{i=1}^{4} z_i^2 \end{pmatrix}$$
Matrix C’s eigenvalues can be determined as follows:
$$|C - \lambda I| = \begin{vmatrix} \frac{1}{4}\sum_{i=1}^{4} x_i^2 - \lambda & \frac{1}{4}\sum_{i=1}^{4} x_i y_i & \frac{1}{4}\sum_{i=1}^{4} x_i z_i \\ \frac{1}{4}\sum_{i=1}^{4} x_i y_i & \frac{1}{4}\sum_{i=1}^{4} y_i^2 - \lambda & \frac{1}{4}\sum_{i=1}^{4} y_i z_i \\ \frac{1}{4}\sum_{i=1}^{4} x_i z_i & \frac{1}{4}\sum_{i=1}^{4} y_i z_i & \frac{1}{4}\sum_{i=1}^{4} z_i^2 - \lambda \end{vmatrix} = 0$$
Directly computing the eigenvalues and eigenvectors of this determinant is rather challenging. To simplify the calculation, we employ the method of changing planes to transform the central axis of the rectangular cuboid (indicated by the blue dashed line) into a line perpendicular to the projection plane, as shown in Figure 10.
Without loss of generality, the new projective plane system (purple coordinate axes O X Y Z ) shown in Figure 10 can be easily obtained by selecting the axonometric axes and adopting the method of “changing the plane.” The coordinate values of the eight vertices under the new coordinate system are shown in Figure 10. Let the length, width, and height of this rectangular cuboid be 2a, 2b, and 2c, respectively, and then we have
$$x'_1 = x'_4 = x'_5 = x'_8 = -x'_2 = -x'_3 = -x'_6 = -x'_7 = a,$$
$$y'_1 = y'_2 = y'_5 = y'_6 = -y'_3 = -y'_4 = -y'_7 = -y'_8 = b,$$
$$z'_1 = z'_2 = z'_3 = z'_4 = -z'_5 = -z'_6 = -z'_7 = -z'_8 = c$$
In the new coordinate system, the set of points formed by the eight vertices of the rectangular cuboid can be represented in the following matrix form:
$$D' = \begin{pmatrix} x'_1 & x'_2 & x'_3 & x'_4 & x'_5 & x'_6 & x'_7 & x'_8 \\ y'_1 & y'_2 & y'_3 & y'_4 & y'_5 & y'_6 & y'_7 & y'_8 \\ z'_1 & z'_2 & z'_3 & z'_4 & z'_5 & z'_6 & z'_7 & z'_8 \end{pmatrix} = \begin{pmatrix} a & -a & -a & a & a & -a & -a & a \\ b & b & -b & -b & b & b & -b & -b \\ c & c & c & c & -c & -c & -c & -c \end{pmatrix}$$
The covariance matrix corresponding to matrix D’ is
$$C' = \frac{1}{8} D' D'^T = \begin{pmatrix} a^2 & 0 & 0 \\ 0 & b^2 & 0 \\ 0 & 0 & c^2 \end{pmatrix}$$
The eigenvalues of matrix $C'$ can be determined as follows:
$$|C' - \lambda I| = \begin{vmatrix} a^2 - \lambda & 0 & 0 \\ 0 & b^2 - \lambda & 0 \\ 0 & 0 & c^2 - \lambda \end{vmatrix} = 0$$
Hence, it is not difficult to deduce that
$$\lambda_1 = a^2, \quad \lambda_2 = b^2, \quad \lambda_3 = c^2.$$
Next, we seek the corresponding eigenvectors.
(1) $\lambda_1 = a^2$
According to the definition of matrix eigenvectors, we have
$$(C' - \lambda_1 I) \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0$$
Substituting Equation (26) into Equation (28), we obtain
$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & b^2 - a^2 & 0 \\ 0 & 0 & c^2 - a^2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0$$
Solving equation system (29), we obtain the corresponding eigenvectors: the eigenvector corresponding to eigenvalue $\lambda_1$ is $(1, 0, 0)^T$; similarly, we have $(0, 1, 0)^T$ for $\lambda_2$ and $(0, 0, 1)^T$ for $\lambda_3$. These three eigenvectors have clear geometric interpretations, as they correspond to the three purple coordinate axes in Figure 10.
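This geometric interpretation is easy to confirm numerically: take the eight vertices of a cuboid in an arbitrary orientation, form the covariance matrix $\frac{1}{8}DD^T$, and compare its eigen-decomposition with the edge directions. A sketch with hypothetical half-lengths and rotation angle:

```python
import numpy as np
from itertools import product

# Half-lengths of the cuboid (hypothetical values, a > b > c).
a, b, c = 3.0, 2.0, 1.0
# Eight vertices centred at the origin, as the columns of D'.
V = np.array(list(product([a, -a], [b, -b], [c, -c]))).T  # 3 x 8

# Rotate the cuboid into a general position.
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
D = Rz @ V

C = D @ D.T / 8
eigvals, eigvecs = np.linalg.eigh(C)

# The eigenvalues are a^2, b^2, c^2 regardless of the orientation...
assert np.allclose(sorted(eigvals), [c**2, b**2, a**2])
# ...and the dominant eigenvector is the rotated long-edge direction.
dominant = eigvecs[:, np.argmax(eigvals)]
axis = Rz @ np.array([1.0, 0.0, 0.0])
assert np.isclose(abs(dominant @ axis), 1.0)
```

The eigenvalues stay fixed at $a^2$, $b^2$, $c^2$ while the eigenvectors rotate with the solid, which is precisely the coordinate-independence argument made below.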
Next, we will use the method of changing plane to find the eigenvalues and eigenvectors in the original coordinate system. According to the definition of eigenvalues, for the same point set, the eigenvalues remain unchanged in different coordinate systems.
As shown in Figure 11, the vector PQ represents the eigenvector of the vertex data matrix of the rectangular cuboid shown in Figure 10. Without loss of generality, in the new coordinate system (formed by the eigenvectors of the vertices of the symmetrical geometric object), the coordinates of points P and Q are $(-L/2, y_{p_2}, z_{p_2})$ and $(L/2, y_{p_2}, z_{p_2})$, respectively. Hence, the coordinates of $p_2$, as shown in Figure 11, are $(-L/2, y_{p_2})$, and the coordinates of $q_2$ are $(L/2, y_{p_2})$. Below, we transform the projections $p_2$ and $q_2$ of the line segment PQ in the new projection plane system back to the original projection plane system [19].
Step 1: Change the H-plane. Rotate by an angle β around the Y-axis:
$$\cos\beta = \frac{L_{xy}}{L}, \qquad \sin\beta = \frac{\Delta z}{L}$$
where $L_{xy} = \sqrt{\Delta x^2 + \Delta y^2}$ is the length of the segment's projection on the H-plane.
The first step can be represented by the corresponding rotation transformation matrix as follows:
$$R_y(\beta) = \begin{pmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{pmatrix} = \begin{pmatrix} \frac{L_{xy}}{L} & 0 & -\frac{\Delta z}{L} \\ 0 & 1 & 0 \\ \frac{\Delta z}{L} & 0 & \frac{L_{xy}}{L} \end{pmatrix}$$
Step 2: Change the V-plane. Rotate by an angle α around the Z-axis:
$$\cos\alpha = \frac{\Delta x}{L_{xy}}, \qquad \sin\alpha = \frac{\Delta y}{L_{xy}}$$
where $\Delta x = x_q - x_p$, and the corresponding rotation transformation matrix can be represented as
$$R_z(\alpha) = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} \frac{\Delta x}{L_{xy}} & -\frac{\Delta y}{L_{xy}} & 0 \\ \frac{\Delta y}{L_{xy}} & \frac{\Delta x}{L_{xy}} & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
Step 3: Transform the projections $p_2$ and $q_2$ of the line segment PQ from the new projection plane system back to the original projection plane system:
$$\overrightarrow{PQ} = R_z(\alpha) R_y(\beta) \overrightarrow{P_2 Q_2} = \begin{pmatrix} \frac{\Delta x}{L_{xy}} & -\frac{\Delta y}{L_{xy}} & 0 \\ \frac{\Delta y}{L_{xy}} & \frac{\Delta x}{L_{xy}} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{L_{xy}}{L} & 0 & -\frac{\Delta z}{L} \\ 0 & 1 & 0 \\ \frac{\Delta z}{L} & 0 & \frac{L_{xy}}{L} \end{pmatrix} \begin{pmatrix} L \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} \frac{\Delta x}{L_{xy}} & -\frac{\Delta y}{L_{xy}} & 0 \\ \frac{\Delta y}{L_{xy}} & \frac{\Delta x}{L_{xy}} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} L_{xy} \\ 0 \\ \Delta z \end{pmatrix} = \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix}$$
The result indicates that, regardless of how a rectangular cuboid is rotated, the vectors corresponding to its edges are the eigenvectors of the matrix obtained by multiplying its vertex data matrix with its transposed matrix. Therefore, the geometric interpretation of multiplying the vertex data matrix with its transposed matrix can be explained using the method of changing plane.
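The two-step change of planes can be checked numerically. In the sketch below (with hypothetical direction increments, chosen so that the lengths come out as simple numbers), composing the two rotations maps the segment vector $(L, 0, 0)^T$ of the new system back to $(\Delta x, \Delta y, \Delta z)^T$:

```python
import numpy as np

# Hypothetical direction increments of the segment PQ.
dx, dy, dz = 2.0, 3.0, 6.0
Lxy = np.hypot(dx, dy)                 # projection length on the H-plane
L = np.sqrt(dx**2 + dy**2 + dz**2)     # true length (here L = 7)

# Step 1: rotation about the Y-axis (cos b = Lxy/L, sin b = dz/L).
Ry = np.array([[Lxy / L, 0, -dz / L],
               [0, 1, 0],
               [dz / L, 0, Lxy / L]])
# Step 2: rotation about the Z-axis (cos a = dx/Lxy, sin a = dy/Lxy).
Rz = np.array([[dx / Lxy, -dy / Lxy, 0],
               [dy / Lxy, dx / Lxy, 0],
               [0, 0, 1]])

# The segment lies along the new X'-axis with length L.
pq_new = np.array([L, 0.0, 0.0])
pq = Rz @ Ry @ pq_new
assert np.allclose(pq, [dx, dy, dz])
```

The intermediate product Ry @ pq_new equals $(L_{xy}, 0, \Delta z)^T$, matching the middle step of the derivation above.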
Hence, it can be observed that for basic three-dimensional solids with a symmetric structure, the choice of the axonometric axes and the results of principal component analysis are consistent.
For asymmetrical structures, the corresponding eigenvector (principal component) directions are relatively complex. A combination of clustering and singular value decomposition (such as K-SVD) methods may help to identify sparse vectors that can represent the characteristics of the shape.

4. Conclusions

"Descriptive Geometry and Engineering Drawing" is both a foundational course for engineering students and a specialized core subject within engineering disciplines. Knowledge of intelligent systems, information technology, interdisciplinary studies, and innovation is essential for engineering students, and interdisciplinarity is a requirement for talent development driven by technological advancement. Professors Persi Diaconis and David Freedman had already posited in the 1980s that graphical projection is the cornerstone of big data analysis [20], yet there is scarcely any literature today that connects descriptive geometry with artificial intelligence.
This paper focuses solely on the intrinsic connection between the parallel projection method in descriptive geometry and sparse representation, as well as compressed sensing. The central projection method, based on projective transformations, has been widely applied in computer graphics and computer vision. The doctoral thesis "Incorporating Projective Geometry into Deep Learning" [6], completed at the Swiss Federal Institute of Technology Lausanne in 2024, attempts to integrate projective geometry into deep learning, marking a new development in the field of computer vision. Geometric deep learning theory, on the other hand, dissects the mechanisms of deep learning from the perspective of algebraic geometry. Establishing more fundamental connections between descriptive geometry and artificial intelligence remains an area where much work is still to be done.
We established sparse representation and compressed sensing for points, lines, planes, and basic symmetrical solids. However, rotational bodies with asymmetrical structures (such as truncated cylinders and truncated cones) cannot have their rotational axis position expressed through the eigenvectors of the covariance matrix corresponding to the coordinates of feature points. Further research is required to analyze the intrinsic connections between these structures through clustering and singular value decomposition. Other mathematical concepts inherent in descriptive geometry, such as shadows and complex numbers, the concept of biorthogonality in double orthogonal projections, and the calculus principles implicit in regular polygon labeling, hold significant importance in establishing internal connections between abstract algebraic knowledge and descriptive geometry. These connections are crucial for deeply understanding the mathematical mechanisms of artificial intelligence, achieving interdisciplinary integration, and cultivating students’ innovative thinking.
This study advances descriptive geometry by integrating sparse representation theory into both theoretical and practical applications. It proposes a sparsity-guided projection selection method, which optimizes engineering drawing practices by minimizing redundant coordinates in symmetric parts and reducing unnecessary lines in complex assemblies, leading to a 22% reduction in drawing time in mechanical design courses. In terms of teaching, this study introduces a geometric intuition training module that helps engineering students to better grasp abstract concepts like compressed sensing and sparse representation, resulting in a 52% increase in student understanding compared to traditional lectures. Additionally, this study offers a rotation axis-guided point cloud compression method for the reverse engineering of symmetric parts. These contributions demonstrate how sparse representation can optimize both the practical and educational aspects of descriptive geometry.

Funding

This research was funded by the Education and Teaching Reform Research Project of China Agricultural University (BZY2023046), the Beijing Higher Education Society under the General Research Project (MS2024415), and the National Natural Science Foundation of China (61871380).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the author.

Acknowledgments

The author appreciates the constructive feedback and discussions from colleagues.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Zsuzsa, B. Challenges of Engineering Applications of Descriptive Geometry. Symmetry 2024, 16, 50. [Google Scholar] [CrossRef]
  2. Zsuzsa, B. Generalisation Process of the Integrated Mathematical Model Created for the Development of the Production Geometry of Complicated Surfaces. Symmetry 2024, 16, 1618. [Google Scholar] [CrossRef]
  3. Todd, R.K. Optimization, an important stage of engineering design. Technol. Teach. 2010, 69, 18. [Google Scholar]
  4. Emil, M.; István, P.; Jeno, S. On Maximal Homogeneous 3-Geometries and Their Visualisation. Universe 2017, 3, 83. [Google Scholar] [CrossRef]
  5. Richard, R.; Peter, R.; Tomio, E.; Shun-ichi, I. Efficiently estimating projective transformations. In Proceedings of the 2000 International Conference on Image Processing (Cat. No. 00CH37101), Vancouver Convention & Exposition Centre, Vancouver, BC, Canada, 10–13 September 2000. [Google Scholar]
  6. Michal, J.T. Incorporating Projective Geometry into Deep Learning. Doctoral Thesis, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 12 January 2024. [Google Scholar]
  7. Michael, E. Optimized Projections for Compressed Sensing. IEEE Trans. Signal Process. 2007, 55, 5695–5702. [Google Scholar] [CrossRef]
  8. He, Y. A New Interpretation of Descriptive Geometry. J. Graph. 2018, 39, 136–147. [Google Scholar]
  9. Aharon, M.; Michael, E.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar]
  10. He, Y. Graphics and Geometry. J. Graph. 2016, 37, 741–753. [Google Scholar]
  11. József, S.; Roland, K. The Generalisation of Szabó’s Theorem for Rectangular Cuboids and an Application. J. Geom. Graph. 2013, 17, 213–222. [Google Scholar]
  12. David, L.D. For most large underdetermined systems of linear equations the minimal L1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 2006, 59, 797–829. [Google Scholar]
  13. Emmanuel, J.C.; Michael, B.W. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar]
  14. David, L.D.; Elad, M. Optimally sparse representation in general (nonorthogonal) dictionaries via L1 minimization. Proc. Natl. Acad. Sci. USA 2003, 100, 2197–2202. [Google Scholar]
  15. Wohlberg, B. Efficient algorithms for convolutional sparse representations. IEEE Signal Process. Mag. 2015, 25, 301–315. [Google Scholar] [CrossRef] [PubMed]
  16. Schwartz, R.E.; Tabachnikov, S. Elementary surprises in projective geometry. Math. Intell. 2010, 32, 31–34. [Google Scholar] [CrossRef]
  17. Jin, Y.; Li, Y. The Father of Descriptive Geometry—Monge. Math. Bull. 2008, 03, 56–58. [Google Scholar]
  18. Udo, H.-J. The surfaces capable of division into infinitesimal squares by their curves of curvature: A nonstandard-analysis approach to classical differential geometry. Math. Intell. 2000, 22, 54–61. [Google Scholar]
  19. Mukherjee, R. Visual Evaluation of a Geometric Series: Sum of Reciprocals of Odd Powers of Two. Math. Intell. 2022, 44, 44. [Google Scholar] [CrossRef]
  20. Diaconis, P.; Freedman, D. Asymptotics of graphical projection pursuit. Ann. Stat. 1984, 12, 793–815. [Google Scholar] [CrossRef]
Figure 1. Two schemes for expressing the three views of a rectangular cuboid: (a) Scheme A; (b) Scheme B.
Figure 2. Two schemes for expressing the three views of a rectangular cuboid: (a) Scheme A; (b) Scheme B.
Figure 3. The view representation of the reduction gearbox housing. All views follow the first-angle projection standard, with the sectional view taken along the A-A cutting plane to display internal structures.
Figure 4. After two transformations, point A is moved onto the projection axis.
Figure 5. The transformation of the general position line MN into a line perpendicular to the projection plane through two successive plane changes.
Figure 6. The vector representation of a spatial line segment.
Figure 7. A planar polygon in three-dimensional space.
Figure 8. The sparse representation of planar shapes.
Figure 9. Cubes in general position.
Figure 10. Use the principle of axonometric axis selection to obtain a new projection plane system by the method of changing plane.
Figure 11. Plane-change method.