Article

On the Minimality of Projections in Expanded Matrix Spaces

by Agnieszka Kozdęba and Michał Kozdęba *
Department of Applied Mathematics, University of Agriculture in Krakow, 253 Balicka St., 30-198 Krakow, Poland
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(11), 1835; https://doi.org/10.3390/sym17111835
Submission received: 4 October 2025 / Revised: 19 October 2025 / Accepted: 22 October 2025 / Published: 2 November 2025
(This article belongs to the Section Mathematics)

Abstract

Let S be the space of all functions from X × Y × Z into ℝ or ℂ, where X, Y, Z are finite sets. It was proved earlier that there is a unique projection from S onto its subspace consisting of all sums of functions that depend on one variable. Here we give an explicit formula for this projection. A similar fact was proved for the subspace of S consisting of all sums of functions that depend on two variables. We prove a significant generalization of this result, considering the space of all functions on X_1 × X_2 × ⋯ × X_n and its subspace consisting of all sums of functions that depend on n − 1 variables, where n is any natural number. We also show that these results hold not only for L_p-spaces but also for many others.

1. Introduction

The study of decompositions of multivariable functions into sums of functions depending on fewer variables has a long tradition in functional analysis and approximation theory [1]. Such decompositions provide insight into the structural and symmetric properties of function spaces and play an essential role in the analysis of projections [2], tensor products [3,4], and operators [5]. In particular, identifying and characterizing projections onto subspaces defined by these decompositions allows for a deeper understanding of the geometry and algebraic structure of spaces of functions.
In [6], the projection from the space S of all functions defined on X × Y × Z (where X , Y , Z are finite sets) onto its subspace consisting of all sums of functions depending on a single variable was investigated. It was shown that such a projection is unique. In this paper, we continue this line of research by providing an explicit formula for this projection.
A related problem was considered in [7], where the author studied the subspace of all sums of functions depending on two variables and derived the formula for the unique projection onto this subspace. The goal of this paper is to extend this result in a substantial way. Specifically, we consider the space S of all functions defined on the Cartesian product
X_1 × X_2 × ⋯ × X_n,
where each X_i is a finite set, and we investigate the subspace T_2 consisting of all sums of functions depending on n − 1 variables. We give an explicit representation of the projection from S onto this subspace.
Beyond L_p-spaces on finite sets, our results apply in a much broader context. We show that the same constructions and formulas remain valid for a large class of spaces. This generalization highlights the relevance of this paper in various analytic and algebraic contexts.
The paper is organized as follows. In Section 2, we recall the basic definitions and known results used in this work. Section 3 contains the main result of the explicit formula for the unique projection from [6]. In Section 4, we focus on generalizing the results from [7] and derive formulas for minimal projections for a number of spaces.

2. Preliminaries

At the beginning, let us set up the necessary terminology and notation.
Definition 1. 
Let S be a Banach space, and let T be a closed linear subspace of S. An operator P : S → T is called a projection if P|_T = id|_T. We denote by P(S; T) the set of all linear and continuous (with respect to the operator norm) projections.
Definition 2. 
A projection P_0 ∈ P(S; T) is called minimal if
‖P_0‖ = inf{‖P‖ : P ∈ P(S; T)} =: λ(T; S).
A projection P_0 is called cominimal if
‖P_0 − Id‖ = inf{‖P − Id‖ : P ∈ P(S; T)}.
The main problems in the theory of minimal projections are the existence and uniqueness of minimal projections [8,9], finding the constant λ(T; S) [10], and obtaining concrete formulas for minimal projections [11]. This theory has been widely studied by many authors, also in recent years [12,13,14,15,16]. We focus here on the first and third problems.
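A toy illustration of Definition 2 (not taken from the paper): in ℝ² with the maximum norm, every projection onto T = span{e_1} has the form P_a(x, y) = (x + ay, 0), and its operator norm equals 1 + |a|. The sketch below estimates these norms numerically; the sampling scheme and variable names are our own assumptions for the check.

```python
# Toy illustration of Definition 2: S = R^2 with the max norm,
# T = span{e1}.  Every projection onto T is P_a(x, y) = (x + a*y, 0),
# with operator norm 1 + |a|, so a = 0 gives the minimal projection.

def proj_norm(a, samples=200):
    """Estimate ||P_a|| by sampling the unit sphere of the max norm."""
    best = 0.0
    for k in range(samples + 1):
        y = -1 + 2 * k / samples          # y runs over [-1, 1]
        for x in (-1.0, 1.0):             # extreme points in x suffice
            best = max(best, abs(x + a * y))
    return best

norms = {a: proj_norm(a) for a in (-1.0, -0.5, 0.0, 0.5, 1.0)}
```

The minimum is attained only at a = 0, so in this toy setting λ(T; S) = 1 and the minimal projection is unique.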
The main tool for proving minimality is Rudin’s Theorem. We present it below with some basic definitions.
Definition 3. 
Let X be a Banach space, and let G be a topological group such that for every g ∈ G there is a continuous linear operator A_g : X → X for which
A_e = I,  A_{g_1 g_2} = A_{g_1} A_{g_2}  for every g_1, g_2 ∈ G.
Then, we say that G acts as a group of linear operators on X.
Definition 4. 
We say that L : X → X commutes with G if A_g L A_g^{−1} = L for every g ∈ G.
Theorem 1 
(Rudin, [17]). Let G be a compact topological group that acts as a group of isomorphisms on a Banach space S, and let T be its complemented (i.e., P(S, T) ≠ ∅) G-invariant subspace. Let A_g be the image of g ∈ G under that action, and assume that the mapping
G × S ∋ (g, s) ↦ A_g s ∈ S
is continuous in the strong operator topology.
Then, for any P ∈ P(S, T), the projection Q_P given by the formula
Q_P s = ∫_G A_g^{−1} P A_g s dμ(g)
commutes with G.
From now on, we will write g in place of A g .
Theorem 2. 
Let the assumptions of Theorem 1 be satisfied. Assume also that every g ∈ G is a surjective linear isometry on S. If there exists a unique projection Q ∈ P(S, T) that commutes with G, then Q is minimal.
Theorem 1, combined with Theorem 2, provides an effective way to prove the minimality of projections. Let us recall another version of Rudin’s Theorem from [18], which enables us to apply our results to a wide range of other spaces. Moreover, the projection under consideration is not only minimal, but also cominimal.
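The averaging formula of Theorem 1 can be tried out numerically for a finite group, where the Haar integral becomes a plain average. The setting below (S = ℝ³ viewed as functions on a three-point set, T = the constant functions, G = cyclic shifts acting by permutation matrices) is our own illustrative choice, not an example from the paper.

```python
import numpy as np

n = 3
# Non-symmetric projection onto T = constants: P f = f(0) * (1, 1, 1).
P = np.zeros((n, n))
P[:, 0] = 1.0

def shift(k):
    """Permutation matrix of the cyclic shift by k on {0, 1, 2}."""
    A = np.zeros((n, n))
    for i in range(n):
        A[(i + k) % n, i] = 1.0
    return A

G = [shift(k) for k in range(n)]

# Q_P from Theorem 1, with the Haar integral replaced by the average.
Q = sum(np.linalg.inv(A) @ P @ A for A in G) / len(G)

assert np.allclose(Q @ Q, Q)                       # still a projection onto T
assert all(np.allclose(A @ Q, Q @ A) for A in G)   # Q commutes with G
```

Here the average turns out to be the mean-value projection Q f = (mean of f) · (1, 1, 1), and its operator norm does not exceed that of P, in line with the minimality mechanism behind Theorem 2.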
Let D ( Z ) be a Banach algebra of all continuous linear operators from a Banach space Z into Z. For every A D ( Z ) , we define
A_G = ∫_G (g^{−1} A g) dμ(g),
where G is a compact topological group that acts through isometries on Z, and μ is the normalized Haar measure for G.
Theorem 3 
(Theorem 2.2 in [18]). Let G be a compact topological group that acts through isometries on a Banach space Z, and let N : D(Z) → [0, +∞] be a convex function, lower semicontinuous in the strong operator topology on D(Z). Assume that
N(g^{−1} A g) ≤ N(A)  for any A ∈ D(Z) and g ∈ G.
Then,
N(A_G) ≤ N(A)  for all A ∈ D(Z).
In particular, if V is a closed subspace of Z and there is only one projection Q from Z onto V that commutes with G, then Q is N-minimal and N-cominimal. This means that N(Q) ≤ N(P) and N(I_Z − Q) ≤ N(I_Z − P) for every bounded projection P from Z onto V.
The results presented here are also applicable to modular spaces. Let us recall the basic facts about these spaces.
Let X be a vector space over R or C .
Definition 5. 
A functional ρ : X → [0, +∞] is called a semimodular if
1.
ρ(λx) = 0 for every λ > 0 implies x = 0;
2.
ρ(ax) = ρ(x) whenever |a| = 1;
3.
ρ(ax + by) ≤ a ρ(x) + b ρ(y) for all a, b ∈ [0, 1] with a + b = 1.
If we replace condition 1 by the following:
ρ(x) = 0 ⟺ x = 0,
then ρ is called a modular. If ρ is a modular, then
X_ρ = {x ∈ X : ρ(ax) < +∞ for some a > 0},
which is a linear subspace of X, is called a modular space.
The norms most commonly considered on a modular space are the Luxemburg norm and the Orlicz norm, given by
‖x‖_L := inf{u > 0 : ρ(x/u) ≤ 1}
and
‖x‖_O := inf{u · (1 + ρ(x/u)) : u > 0},
respectively. A special case of a modular space is the Orlicz space.
Definition 6. 
Let φ : [0, +∞) → [0, +∞) be a convex function. If φ(0) = 0 and φ > 0 on (0, +∞), then φ is called an Orlicz function.
Definition 7. 
If an Orlicz function also meets the conditions
lim_{x→0+} φ(x)/x = 0  and  lim_{x→+∞} φ(x)/x = +∞,
then we call it an N-function.
From this point on, let us assume that φ is an N-function.
Definition 8. 
Let (X, Σ, μ) be a complete measure space with a σ-finite measure. Let M(X, 𝕂) be the space of Σ-measurable functions with values in 𝕂. Then, for any x ∈ M(X, 𝕂), we can define the Orlicz modular
ρ(x) = ∫_X φ(|x(t)|) dμ(t).
Because φ is convex, ρ is a convex semimodular. The corresponding modular space is called an Orlicz space and denoted by X_φ.
Let us recall two theorems from [19], which allow us to extend our results to spaces with various norms.
Theorem 4 
(Theorem 3.2 in [19]). Let X be a vector space over 𝕂 and s ∈ (0, 1]. Fix n ≥ 2 and let ρ_i be an s-convex semimodular on X for every i ∈ {1, …, n − 1}. Let
ρ = max_{1 ≤ i ≤ n−1} {ρ_i},
and let f : ℝ^n → [0, +∞) be a convex function such that
f(x) = 0 ⟺ x = 0.
For every x = (1, x_2, …, x_n) ∈ ℝ_+^n and y = (1, y_2, …, y_n) ∈ ℝ_+^n assume that
x_j ≤ y_j for j = 2, …, n  ⟹  f(x) ≤ f(y).
Define
‖x‖_f = inf_{k > 0} k f(e_1 + Σ_{i=2}^n ρ_{i−1}(x / k^{1/s}) e_i)
for every x ∈ X_ρ. Then ‖·‖_f is an s-norm (a norm if s = 1) on X_ρ.
If for some i ∈ {1, …, n − 1} it holds that ρ_i(x / k^{1/s}) = ∞, then we set
f(e_1 + Σ_{i=2}^n ρ_{i−1}(x / k^{1/s}) e_i) = ∞.
Using Theorem 4 for n = 2 , it is straightforward to obtain
Theorem 5 
(Theorem 3.3 in [19]). Let f : ℝ² → ℝ be as in Theorem 4, and let ρ_1 be an s-convex semimodular. Then the function
‖x‖_f = inf_{k > 0} k f(1, ρ_1(x / k^{1/s}))
is an s-norm (a norm if s = 1) on X_{ρ_1}.
Definition 9. 
Let p be the right-sided derivative of φ. Define the function
q(s) = sup{t : p(t) ≤ s} = inf{t : p(t) > s}.
Then the function
ϕ(v) = ∫_0^v q(s) ds
is called the complementary function of φ.
The necessary and sufficient conditions for the smoothness of an Orlicz space can be found in [20], Chapter 2.7. For our space X_φ, these conditions are as follows.
Theorem 6 
(Theorem 2.54 in [20]). Let φ be an N-function. Then S = X_φ equipped with the Luxemburg norm is smooth if and only if φ is smooth on (0, φ^{−1}(1)).
Theorem 7 
(Theorem 2.56 in [20]). Let φ be an N-function. Then S = X_φ with the Orlicz norm is smooth if and only if φ is smooth on (0, π_φ(1/2)) and lim_{t→π_φ(1/2)} p(t) = ϕ^{−1}(1/2), where
π_φ(α) = inf{t > 0 : ϕ(p(t)) ≥ α}.
The preceding two theorems facilitate a significant extension of our results to Orlicz spaces (Theorems 11 and 12).

3. Subspace L_p(X) + L_p(Y) + L_p(Z)

Let X, Y, and Z be finite sets containing n_X, n_Y, and n_Z elements, respectively. Let U(Ω) be the space of all functions from Ω into 𝕂 (ℝ or ℂ). This section concentrates on projections from S = U(X × Y × Z) into its subspace T_1 consisting of all sums of functions that depend on one variable, i.e.,
T_1 = {f ∈ S : f(x, y, z) = g(x) + h(y) + i(z), g ∈ U(X), h ∈ U(Y), i ∈ U(Z)}.
Let π^X_{l,m} denote the permutation that interchanges the planes {l} × Y × Z and {m} × Y × Z, where 1 ≤ l, m ≤ n_X. The permutations π^Y_{l,m} and π^Z_{l,m} are defined analogously. These transformations generate a finite group G, whose elements act on S in a natural way: we can define
(A_{π} f)(x, y, z) = (f ∘ π)(x, y, z)  for π ∈ G.
We consider a norm on S such that the maps A_π are isometries. The subspace T_1 is invariant under the isometries A_π. Since G is finite, it carries the discrete topology, and the mapping (f, π) ↦ A_π f is continuous.
In [6], the following theorem was proved.
Theorem 8 
(Theorem 10 in [6]). Let S and T_1 be as above. Assume that for every element π of the group G the mapping A_π is an isometry, and that S is equipped with a smooth norm. If Q is a minimal projection of S into its subspace T_1 that commutes with G, then Q is the unique minimal projection of S into T_1.
In this section, we show the uniqueness of the projection commuting with G by presenting its explicit formula. Then, from Theorem 2, we obtain minimality, and from Theorem 8, the uniqueness of the minimal projection. But first, we need two lemmas.
Lemma 1. 
Let D_a denote the matrix whose (1, 1) entry equals a and whose remaining entries are all 0. If D_a ∈ U(X) + U(Y), then a = 0.
Proof. 
The space U(X) + U(Y) is generated by the matrices A_i, whose i-th row consists of all 1's and whose remaining entries are 0, and B_j, whose j-th column consists of all 1's and whose remaining entries are 0, where i ∈ {1, …, n_X} and j ∈ {1, …, n_Y}. Note that each of these matrices, and hence any of their linear combinations C, satisfies the so-called four-point rule:
C_{k_1 l_1} + C_{k_2 l_2} = C_{k_1 l_2} + C_{k_2 l_1}
for any k_1, k_2 ∈ {1, 2, …, n_X} and l_1, l_2 ∈ {1, 2, …, n_Y}. Hence, every element of the space U(X) + U(Y) also satisfies this equality. Since D_a ∈ U(X) + U(Y), we get a + 0 = 0 + 0, and thus a = 0. □
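The four-point rule is easy to test numerically. The sketch below (with assumed sizes n_X = 3, n_Y = 4, chosen only for this check) verifies it for a random element g(x) + h(y) of U(X) + U(Y), and confirms that the matrix D_a with a ≠ 0 violates it, as in Lemma 1.

```python
import random

random.seed(0)
nX, nY = 3, 4

# A random element of U(X) + U(Y): C[k][l] = g(k) + h(l).
g = [random.random() for _ in range(nX)]
h = [random.random() for _ in range(nY)]
C = [[g[k] + h[l] for l in range(nY)] for k in range(nX)]

# Every such matrix satisfies the four-point rule.
for k1 in range(nX):
    for k2 in range(nX):
        for l1 in range(nY):
            for l2 in range(nY):
                assert abs(C[k1][l1] + C[k2][l2]
                           - C[k1][l2] - C[k2][l1]) < 1e-12

# Conversely, D_a (a in one corner, zeros elsewhere) violates the rule
# whenever a != 0 — the mechanism behind Lemma 1.
a = 1.0
D = [[a if (k, l) == (0, 0) else 0.0 for l in range(nY)] for k in range(nX)]
assert D[0][0] + D[1][1] != D[0][1] + D[1][0]
```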
Lemma 2. 
If Q is a projection of S into T_1 that commutes with the group G, then
Q(e_xyz)(i, j, k) =: q_xyz(i, j, k) =
  b_111  for i = x, j = y, k = z,
  B_X    for i ≠ x, j = y, k = z,
  B_Y    for i = x, j ≠ y, k = z,
  B_Z    for i = x, j = y, k ≠ z,
  B_XY   for i ≠ x, j ≠ y, k = z,
  B_XZ   for i ≠ x, j = y, k ≠ z,
  B_YZ   for i = x, j ≠ y, k ≠ z,
  B      for i ≠ x, j ≠ y, k ≠ z,
for some constants b_111, B_X, B_Y, B_Z, B_XY, B_XZ, B_YZ, B satisfying the following equations:
B_XY + B_XZ + B_YZ = b_111 + 2B,
B_XY + B_XZ = B_X + B,
B_XY + B_YZ = B_Y + B,
B_XZ + B_YZ = B_Z + B.
Proof. 
To prove that Q is of this form, we proceed similarly to the proof of Theorem 4 in [7]. Note that, for any 2 ≤ l, m ≤ n_X, we have π^X_{l,m} e_111 = e_111. Hence,
π^X_{l,m} q_111 = π^X_{l,m} Q e_111 = Q π^X_{l,m} e_111 = Q e_111 = q_111,
and since Q commutes with G, we obtain
q_111(l, j, k) = q_111(m, j, k)  for 2 ≤ l, m ≤ n_X, 1 ≤ j ≤ n_Y, 1 ≤ k ≤ n_Z.
For the remaining permutations, we obtain another two sets of equalities:
q_111(i, l, k) = q_111(i, m, k)  for 1 ≤ i ≤ n_X, 2 ≤ l, m ≤ n_Y, 1 ≤ k ≤ n_Z
and
q_111(i, j, l) = q_111(i, j, m)  for 1 ≤ i ≤ n_X, 1 ≤ j ≤ n_Y, 2 ≤ l, m ≤ n_Z.
Consequently,
q_111(i, j, k) =
  b_111  for i = 1, j = 1, k = 1,
  B_X    for i ≠ 1, j = 1, k = 1,
  B_Y    for i = 1, j ≠ 1, k = 1,
  B_Z    for i = 1, j = 1, k ≠ 1,
  B_XY   for i ≠ 1, j ≠ 1, k = 1,
  B_XZ   for i ≠ 1, j = 1, k ≠ 1,
  B_YZ   for i = 1, j ≠ 1, k ≠ 1,
  B      for i ≠ 1, j ≠ 1, k ≠ 1.
Moreover,
q_xyz = Q e_xyz = Q π^X_{x,1} π^Y_{y,1} π^Z_{z,1} e_111 = π^X_{x,1} π^Y_{y,1} π^Z_{z,1} Q e_111 = π^X_{x,1} π^Y_{y,1} π^Z_{z,1} q_111,
therefore,
q_xyz(i, j, k) =
  b_111  for i = x, j = y, k = z,
  B_X    for i ≠ x, j = y, k = z,
  B_Y    for i = x, j ≠ y, k = z,
  B_Z    for i = x, j = y, k ≠ z,
  B_XY   for i ≠ x, j ≠ y, k = z,
  B_XZ   for i ≠ x, j = y, k ≠ z,
  B_YZ   for i = x, j ≠ y, k ≠ z,
  B      for i ≠ x, j ≠ y, k ≠ z.
It is enough to find the formula for q_111. The element q_111 can be interpreted as a so-called "three-dimensional matrix", whose first layer has the following form (I):
b_111   B_Y    ⋯  B_Y
B_Z     B_YZ   ⋯  B_YZ
⋮       ⋮         ⋮
B_Z     B_YZ   ⋯  B_YZ
and each subsequent layer has the form (II):
B_X     B_XY   ⋯  B_XY
B_XZ    B      ⋯  B
⋮       ⋮         ⋮
B_XZ    B      ⋯  B
In other words, the element q_111 consists of n_X matrices of size n_Z by n_Y arranged one after another, the first of which is of the form (I) and each subsequent one of the form (II). Since q_xyz ∈ T_1, it can be represented as a sum of the basis elements of T_1:
u_a(i, j, k) = 1 if i = a, 0 if i ≠ a  (layer a),
v_b(i, j, k) = 1 if j = b, 0 if j ≠ b  (vertical b),
w_c(i, j, k) = 1 if k = c, 0 if k ≠ c  (level c).
In a 3D-matrix interpretation, this means that q x y z is a sum of matrices with constant levels, matrices with constant verticals, and matrices with constant layers. Consequently, by subtracting from q 111 certain basis elements multiplied by certain constants, we obtain a three-dimensional matrix whose elements are all equal to zero.
Let us subtract from q_111 the verticals numbered 2 to n_Y, each with all elements equal to B; that is, the matrix Σ_{b=2}^{n_Y} B · v_b. We then obtain
(I):
b_111   B_Y − B    ⋯  B_Y − B
B_Z     B_YZ − B   ⋯  B_YZ − B
⋮       ⋮             ⋮
B_Z     B_YZ − B   ⋯  B_YZ − B
(II):
B_X     B_XY − B   ⋯  B_XY − B
B_XZ    0          ⋯  0
⋮       ⋮             ⋮
B_XZ    0          ⋯  0
Then, subtracting the first level with all elements equal to B_XY − B, that is, the matrix (B_XY − B) · w_1, we obtain
(I):
b_111 − (B_XY − B)   B_Y − B − (B_XY − B)   ⋯  B_Y − B − (B_XY − B)
B_Z                  B_YZ − B               ⋯  B_YZ − B
⋮                    ⋮                         ⋮
B_Z                  B_YZ − B               ⋯  B_YZ − B
(II):
B_X − (B_XY − B)     0   ⋯  0
B_XZ                 0   ⋯  0
⋮                    ⋮      ⋮
B_XZ                 0   ⋯  0
Now, let us subtract the first layer with all elements equal to B_YZ − B (that is, the matrix (B_YZ − B) · u_1). We obtain the following:
(I):
b_111 − (B_XY − B) − (B_YZ − B)   B′   ⋯  B′
B_Z − (B_YZ − B)                  0    ⋯  0
⋮                                 ⋮       ⋮
B_Z − (B_YZ − B)                  0    ⋯  0
where B′ = B_Y − B − (B_XY − B) − (B_YZ − B), and
(II):
B_X − (B_XY − B)   0   ⋯  0
B_XZ               0   ⋯  0
⋮                  ⋮      ⋮
B_XZ               0   ⋯  0
Finally, let us subtract the first vertical with all elements equal to B_XZ (the matrix B_XZ · v_1). We receive the following:
(I):
b_111 − (B_XY − B) − (B_YZ − B) − B_XZ   B′   ⋯  B′
B_Z − (B_YZ − B) − B_XZ                  0    ⋯  0
⋮                                        ⋮       ⋮
B_Z − (B_YZ − B) − B_XZ                  0    ⋯  0
(II):
B_X − (B_XY − B) − B_XZ   0   ⋯  0
0                         0   ⋯  0
⋮                         ⋮      ⋮
0                         0   ⋯  0
Note that the restriction of the space T_1 and its generators to one layer is isomorphic to the space U(Z) + U(Y) with its generators. In particular, Lemma 1 can be applied to the restriction to the second and all subsequent layers,
B_X − (B_XY − B) − B_XZ   0   ⋯  0
0                         0   ⋯  0
⋮                         ⋮      ⋮
0                         0   ⋯  0
and thus we obtain
B_X − (B_XY − B) − B_XZ = 0,
which implies
B_XY + B_XZ = B_X + B.
Analogously, the restriction to one vertical is isomorphic to U(X) + U(Z). In particular, Lemma 1 can be applied to the restriction to the second and all subsequent verticals,
B′   0   ⋯  0
0    0   ⋯  0
⋮    ⋮      ⋮
0    0   ⋯  0
and thus we obtain
B_Y − B − (B_XY − B) − (B_YZ − B) = 0,
which implies
B_XY + B_YZ = B_Y + B.
Finally, the restriction to one level is isomorphic to U(X) + U(Y). Applying Lemma 1 to the second and all subsequent levels,
B_Z − (B_YZ − B) − B_XZ   0   ⋯  0
0                         0   ⋯  0
⋮                         ⋮      ⋮
0                         0   ⋯  0
we obtain
B_Z − (B_YZ − B) − B_XZ = 0,
which implies
B_YZ + B_XZ = B_Z + B.
Consequently, the first layer (I) takes the following form:
b_111 − (B_XY − B) − (B_YZ − B) − B_XZ   0   ⋯  0
0                                        0   ⋯  0
⋮                                        ⋮      ⋮
0                                        0   ⋯  0
which (by Lemma 1) implies
b_111 − (B_XY − B) − (B_YZ − B) − B_XZ = 0,
and thus
B_XY + B_YZ + B_XZ = b_111 + 2B,
which ends the proof. □
By virtue of this technical lemma, we can now find the form of the minimal projection onto our subspace T_1.
Theorem 9. 
For S, T 1 , and G defined as above, there exists a unique projection Q : S T 1 commuting with G.
Proof. 
Let Q : S → T_1 be a projection commuting with G. Similarly to the proof of Lemma 2, we first find a formula for q_111.
Since the double sum Σ_{i=1}^{n_X} Σ_{j=1}^{n_Y} e_{ijk} is the indicator of the level z = k, it depends only on the third variable, belongs to T_1, and is therefore fixed by Q. Hence,
Q(Σ_{i=1}^{n_X} Σ_{j=1}^{n_Y} e_{ijk})(1, 1, 1) = (Σ_{i=1}^{n_X} Σ_{j=1}^{n_Y} e_{ijk})(1, 1, 1) = 0  for k ≠ 1,
and
Q(Σ_{i=1}^{n_X} Σ_{j=1}^{n_Y} e_{ij1})(1, 1, 1) = (Σ_{i=1}^{n_X} Σ_{j=1}^{n_Y} e_{ij1})(1, 1, 1) = 1.
We obtain analogous equations for e_{i1k} and e_{1jk}. The equations of the first type can be written in the form
B_X + (n_Y − 1)B_XY + (n_Z − 1)B_XZ + (n_Y − 1)(n_Z − 1)B = 0,
B_Y + (n_Z − 1)B_YZ + (n_X − 1)B_XY + (n_Z − 1)(n_X − 1)B = 0,
B_Z + (n_X − 1)B_XZ + (n_Y − 1)B_YZ + (n_X − 1)(n_Y − 1)B = 0,
and those of the second type in the form
b_111 + (n_X − 1)B_X + (n_Z − 1)B_Z + (n_X − 1)(n_Z − 1)B_XZ = 1,
b_111 + (n_X − 1)B_X + (n_Y − 1)B_Y + (n_X − 1)(n_Y − 1)B_XY = 1,
b_111 + (n_Y − 1)B_Y + (n_Z − 1)B_Z + (n_Y − 1)(n_Z − 1)B_YZ = 1,
where b 111 , B X , B Y , B Z , B X Y , B X Z , B Y Z , B are constants, as in Lemma 2.
Geometrically (for the three-dimensional matrix with the fixed element e_111), this means that the sum of all elements in every level, vertical, and layer containing the element (1, 1, 1) equals 1, while in every other level, vertical, and layer the elements add up to 0. As we can see, the above systems of equations exhibit perfect symmetry with respect to the indices; consequently, determining a single variable immediately provides the values of all other variables of the same type.
Using Equations (3)–(12), we obtain an explicit formula for Q, which will complete the proof. From (4) and (7), we obtain
B_XZ + B_XY − B + (n_Y − 1)B_XY + (n_Z − 1)B_XZ + (n_Y − 1)(n_Z − 1)B = 0,
and hence,
n_Y B_XY + n_Z B_XZ + (n_Y n_Z − n_Y − n_Z)B = 0.
Analogously, from (5) and (8), we receive
n_X B_XY + n_Z B_YZ + (n_X n_Z − n_X − n_Z)B = 0,
and from (6) and (9) we obtain
n_Y B_YZ + n_X B_XZ + (n_X n_Y − n_X − n_Y)B = 0.
Now, comparing Equation (13) multiplied by n_X with Equation (14) multiplied by n_Y, we obtain
n_X (n_Y B_XY + n_Z B_XZ + (n_Y n_Z − n_Y − n_Z)B)
= n_Y (n_X B_XY + n_Z B_YZ + (n_X n_Z − n_X − n_Z)B).
Therefore,
n_X n_Z B_XZ + n_X (n_Y n_Z − n_Y − n_Z)B = n_Y n_Z B_YZ + n_Y (n_X n_Z − n_X − n_Z)B.
Consequently,
n_X n_Z B_XZ − (n_X − n_Y) n_Z B = n_Y n_Z B_YZ,
which implies that
n_X B_XZ − (n_X − n_Y)B = n_Y B_YZ.
From (15) we obtain
n_Y B_YZ + n_X B_XZ = (n_X + n_Y − n_X n_Y)B.
This combined with (16) yields
n_X B_XZ − (n_X − n_Y)B + n_X B_XZ = (n_X + n_Y − n_X n_Y)B,
2 n_X B_XZ = (2 n_X − n_X n_Y)B,
B_XZ = ((2 − n_Y)/2) B.
The linear system considered in the proof is symmetric with respect to permutations of the indices; hence, we can analogously obtain
B_YZ = ((2 − n_X)/2) B
and
B_XY = ((2 − n_Z)/2) B.
Now, using (17)–(19) in Equation (3), we obtain
b_111 = ((2 − n_X)/2 + (2 − n_Y)/2 + (2 − n_Z)/2 − 2) B = ((2 − (n_X + n_Y + n_Z))/2) B,
whereas from Equations (4)–(6) we obtain, respectively,
B_X = ((2 − n_Y)/2 + (2 − n_Z)/2 − 1) B = ((2 − (n_Y + n_Z))/2) B,
B_Y = ((2 − (n_X + n_Z))/2) B,
B_Z = ((2 − (n_X + n_Y))/2) B.
Now, substituting these values into Equation (10), we receive
((2 − (n_X + n_Y + n_Z))/2 + (n_X − 1)(2 − (n_Y + n_Z))/2 + (n_Z − 1)(2 − (n_X + n_Y))/2
+ (n_X − 1)(n_Z − 1)(2 − n_Y)/2) B = 1.
After elementary transformations and reductions, we receive
B = −2/(n_X n_Y n_Z),
and
q_xyz(i, j, k) =
  (n_X + n_Y + n_Z − 2)/(n_X n_Y n_Z)  for i = x, j = y, k = z,
  (n_Y + n_Z − 2)/(n_X n_Y n_Z)        for i ≠ x, j = y, k = z,
  (n_X + n_Z − 2)/(n_X n_Y n_Z)        for i = x, j ≠ y, k = z,
  (n_X + n_Y − 2)/(n_X n_Y n_Z)        for i = x, j = y, k ≠ z,
  (n_Z − 2)/(n_X n_Y n_Z)              for i ≠ x, j ≠ y, k = z,
  (n_Y − 2)/(n_X n_Y n_Z)              for i ≠ x, j = y, k ≠ z,
  (n_X − 2)/(n_X n_Y n_Z)              for i = x, j ≠ y, k ≠ z,
  −2/(n_X n_Y n_Z)                     for i ≠ x, j ≠ y, k ≠ z.
This determines the values of Q on the basis, which completes the proof. □
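The resulting formula can be sanity-checked numerically. The sketch below (with assumed sizes n_X, n_Y, n_Z = 2, 3, 4, chosen only for this check) assembles Q as a matrix on the n_X n_Y n_Z-dimensional space S and verifies that it is idempotent and fixes the generators of T_1.

```python
import numpy as np
from itertools import product

nX, nY, nZ = 2, 3, 4
N = nX * nY * nZ
pts = list(product(range(nX), range(nY), range(nZ)))
idx = {p: t for t, p in enumerate(pts)}

def q(x, y, z, i, j, k):
    """The kernel q_xyz(i, j, k) from the formula above (0-based indices)."""
    num = (nX if i == x else 0) + (nY if j == y else 0) + (nZ if k == z else 0) - 2
    return num / (nX * nY * nZ)

Q = np.zeros((N, N))
for (x, y, z) in pts:                    # column = Q e_xyz
    for (i, j, k) in pts:
        Q[idx[(i, j, k)], idx[(x, y, z)]] = q(x, y, z, i, j, k)

assert np.allclose(Q @ Q, Q)             # Q is idempotent

# Q fixes every layer indicator u_a, a generator of T_1.
for x0 in range(nX):
    u = np.array([1.0 if x == x0 else 0.0 for (x, y, z) in pts])
    assert np.allclose(Q @ u, u)
```

By the built-in symmetry of the kernel, the same check passes for the vertical and level indicators, so Q is indeed a projection onto T_1.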
In consequence, we obtain the following.
Theorem 10. 
Let X = {1, 2, …, n}, Y = {1, 2, …, m}, and Z = {1, 2, …, r}, and let μ, ν, ξ be measures such that μ(i) = 1/n, ν(j) = 1/m, ξ(k) = 1/r for all i ∈ X, j ∈ Y, k ∈ Z. If 1 < p < +∞, then there exists a unique minimal projection Q from L_p(X × Y × Z) into L_p(X) + L_p(Y) + L_p(Z). Moreover,
Q e_xyz(i, j, k) =
  (n + m + r − 2)/(nmr)  for i = x, j = y, k = z,
  (m + r − 2)/(nmr)      for i ≠ x, j = y, k = z,
  (n + r − 2)/(nmr)      for i = x, j ≠ y, k = z,
  (n + m − 2)/(nmr)      for i = x, j = y, k ≠ z,
  (r − 2)/(nmr)          for i ≠ x, j ≠ y, k = z,
  (m − 2)/(nmr)          for i ≠ x, j = y, k ≠ z,
  (n − 2)/(nmr)          for i = x, j ≠ y, k ≠ z,
  −2/(nmr)               for i ≠ x, j ≠ y, k ≠ z,
where e_xyz(i, j, k) = δ_xi δ_yj δ_zk.
Proof. 
Permutations of levels, verticals, and layers of any function f L p ( X × Y × Z ) do not change its norm. Hence, the operators A π associated with these permutations are isometries. By Theorem 9, we obtain the uniqueness of the projection commuting with G. Now, from Theorem 2, we deduce that the projection Q is minimal. For p ( 1 , + ) , the norm on L p is smooth, which allows us to apply Theorem 8. Consequently, we obtain the uniqueness of the minimal projection. □
Example 1. 
Let X = {x_1, x_2}, Y = {y_1, y_2, y_3}, Z = {z_1, z_2, z_3, z_4}. Equip these sets with probability measures defined by
μ(x_i) = 1/2 for i = 1, 2,
ν(y_j) = 1/3 for j = 1, 2, 3,
ξ(z_k) = 1/4 for k = 1, 2, 3, 4.
Then there is a unique minimal projection Q from L_3(X × Y × Z) onto L_3(X) + L_3(Y) + L_3(Z), which acts on the canonical basis elements e_xyz(i, j, k) = δ_ix δ_jy δ_kz as follows:
Q e_xyz(i, j, k) =
  7/24    for i = x, j = y, k = z,
  5/24    for i ≠ x, j = y, k = z,
  4/24    for i = x, j ≠ y, k = z,
  3/24    for i = x, j = y, k ≠ z,
  2/24    for i ≠ x, j ≠ y, k = z,
  1/24    for i ≠ x, j = y, k ≠ z,
  0       for i = x, j ≠ y, k ≠ z,
  −2/24   for i ≠ x, j ≠ y, k ≠ z,
where x, i ∈ {x_1, x_2}, y, j ∈ {y_1, y_2, y_3}, z, k ∈ {z_1, z_2, z_3, z_4}.
This example (Figure 1) illustrates how the symmetry of the underlying system determines the pattern of the coefficients. Although the cardinalities of X , Y , Z are different, the projection preserves the same structural relations among the parameters.
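The coefficients of Example 1 can be recomputed from the general formula of Theorem 10 in exact rational arithmetic; the dictionary keys below are our own labels for the eight cases.

```python
from fractions import Fraction

# Recompute the Example 1 coefficients for (n, m, r) = (2, 3, 4)
# from the general formula of Theorem 10.
n, m, r = 2, 3, 4
D = Fraction(1, n * m * r)               # 1/(nmr) = 1/24

coeffs = {
    "i=x, j=y, k=z":    (n + m + r - 2) * D,
    "i!=x, j=y, k=z":   (m + r - 2) * D,
    "i=x, j!=y, k=z":   (n + r - 2) * D,
    "i=x, j=y, k!=z":   (n + m - 2) * D,
    "i!=x, j!=y, k=z":  (r - 2) * D,
    "i!=x, j=y, k!=z":  (m - 2) * D,
    "i=x, j!=y, k!=z":  (n - 2) * D,
    "i!=x, j!=y, k!=z": -2 * D,
}
```

The eight values come out as 7/24, 5/24, 4/24, 3/24, 2/24, 1/24, 0, and −2/24, matching the piecewise formula of the example.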
Remark 1. 
Note that Theorem 10 is true for p = 2 . In this case, the orthogonal projection P 0 onto L 2 ( X ) + L 2 ( Y ) + L 2 ( Z ) is a minimal projection. By Theorem 2.9 in [21], P 0 is a unique minimal projection. Hence, the projection Q from Theorem 9 is equal to P 0 .
The applications of Theorem 10 are not limited to L_p spaces. We can consider, for example, Orlicz spaces, as Żwak did in [22]. In fact, Theorem 8 also works, among others, in modular spaces. Following Skrzypek (see [23]), we can define an Orlicz modular corresponding to our space S as
ρ_φ(x) = Σ_{i,j,k} φ(|x_{i,j,k}|)
for any x ∈ S, where x = Σ_{i,j,k} x_{i,j,k} e_{ijk} and φ is an Orlicz function.
Then the Luxemburg and Orlicz norms take the forms
‖x‖_L = inf_{d > 0} d · max{1, ρ_φ(x/d)},
‖x‖_O = inf_{d > 0} (d + d · ρ_φ(x/d)),
respectively.
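For a quick numerical check of the Luxemburg norm in this discrete setting, one can use the classical form ‖x‖_L = inf{d > 0 : ρ_φ(x/d) ≤ 1}, which agrees with the expression above, and compute it by bisection. The Orlicz function φ(t) = t² is our own choice for verification: it makes the norm reduce to the Euclidean norm.

```python
def luxemburg(x, phi, lo=1e-9, hi=1e9, steps=200):
    """inf{d > 0 : rho_phi(x/d) <= 1}, found by bisection
    (the modular is decreasing in d for convex phi with phi(0) = 0)."""
    rho = lambda d: sum(phi(abs(t) / d) for t in x)
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if rho(mid) > 1 else (lo, mid)
    return hi

# With phi(t) = t^2, the Luxemburg norm is the Euclidean norm:
val = luxemburg([3.0, 4.0], lambda t: t * t)
```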
It is easy to check that in the space S with the Luxemburg or Orlicz norm, the mappings A π are isometries. Thus, by Theorems 6 and 7, we obtain the following two theorems.
Theorem 11 
(Luxemburg norm). Let φ be an N-function that is smooth on (0, φ^{−1}(1)). Then the projection Q, given by Formula (21), of the space S equipped with the Luxemburg norm is the unique minimal projection.
Theorem 12 
(Orlicz norm). Let φ be an N-function such that φ is smooth on (0, π_φ(1/2)) and lim_{t→π_φ(1/2)} p(t) = ϕ^{−1}(1/2), where π_φ, p, and ϕ are as in the Preliminaries. Then the projection Q, given by Formula (21), of the space S with the Orlicz norm is the unique minimal projection.
Example 2. 
Take φ(t) = t² + t ln(t + 1). This φ is an N-function, and the Luxemburg and Orlicz norms generated by it are smooth. Thus, we obtain the uniqueness of projections in spaces with norms other than the L_p norms.
Remark 2. 
The assumption on the smoothness of the norm of S in Theorem 10 is significant, as the following theorem shows.
Theorem 13. 
If S is equipped with ‖·‖_1 or ‖·‖_∞, then the projection Q given by Formula (21) is not the unique minimal projection.
The proof of the theorem is analogous to the proof of Theorem 2.3 in [9].

4. Subspace T_2

Let us now focus on another subspace of S. Let T = L_p(X × Y) + L_p(X × Z) + L_p(Y × Z). In [7], the following theorem was proved.
Theorem 14. 
For S, T, and G defined earlier, there exists a unique projection Q : S T commuting with G.
In the proof of this claim, the following lemma played a key role.
Lemma 3 
(Eight-point rule). Let f ∈ S. Then f ∈ T if and only if f satisfies the "eight-point rule"
f(x, y, z) = f(a, y, z) + f(x, b, z) + f(x, y, c) − f(a, b, z) − f(a, y, c) − f(x, b, c) + f(a, b, c)
for any 1 ≤ a, x ≤ n_X, 1 ≤ b, y ≤ n_Y, 1 ≤ c, z ≤ n_Z.
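The eight-point rule is easy to verify numerically for a random element of T; the sizes (2, 3, 4) below are an arbitrary choice made for this check.

```python
import random

random.seed(1)
nX, nY, nZ = 2, 3, 4

# A random element of T = U(X*Y) + U(X*Z) + U(Y*Z):
g = [[random.random() for _ in range(nY)] for _ in range(nX)]  # g(x, y)
h = [[random.random() for _ in range(nZ)] for _ in range(nX)]  # h(x, z)
w = [[random.random() for _ in range(nZ)] for _ in range(nY)]  # w(y, z)
f = lambda x, y, z: g[x][y] + h[x][z] + w[y][z]

# The eight-point rule holds at every choice of points.
for x in range(nX):
    for a in range(nX):
        for y in range(nY):
            for b in range(nY):
                for z in range(nZ):
                    for c in range(nZ):
                        rhs = (f(a, y, z) + f(x, b, z) + f(x, y, c)
                               - f(a, b, z) - f(a, y, c) - f(x, b, c)
                               + f(a, b, c))
                        assert abs(f(x, y, z) - rhs) < 1e-12
```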
It was mentioned in [7] that similar reasoning could be carried out for
S = U(X_1 × X_2 × ⋯ × X_n)
and its subspace
T_2 = Σ_{i=1}^n U(X_1 × ⋯ × X_{i−1} × X_{i+1} × ⋯ × X_n),
where #X_i = m_i < +∞ for every i ∈ {1, 2, …, n}.
In this section, we will prove it and show the explicit formula for a minimal projection in the generalized discrete case. First, let us generalize Lemma 3.
Lemma 4 
(2^n-point rule). Let f ∈ S. Then f ∈ T_2 if and only if f satisfies the following 2^n-point rule:
f(x_1, …, x_n) = f(a_1, x_2, …, x_n) + f(x_1, a_2, x_3, …, x_n) + ⋯ + f(x_1, x_2, …, x_{n−1}, a_n)
− f(a_1, a_2, x_3, …, x_n) − ⋯ − f(x_1, x_2, …, a_{n−1}, a_n) + ⋯ + (−1)^{n+1} f(a_1, a_2, …, a_n),
or, in a shorter version,
f(x_1, …, x_n) = Σ_{k=1}^n (−1)^{k+1} Σ_{1 ≤ i_1 < ⋯ < i_k ≤ n} f(a_{i_1 ⋯ i_k})
for any x_i, a_i ∈ X_i, where i ∈ {1, …, n} and
a_{i_1} = (x_1, …, x_{i_1 − 1}, a_{i_1}, x_{i_1 + 1}, …, x_n),
a_{i_1 ⋯ i_k} = (x_1, …, x_{i_1 − 1}, a_{i_1}, x_{i_1 + 1}, …, x_{i_k − 1}, a_{i_k}, x_{i_k + 1}, …, x_n).
Proof. 
If f ∈ T_2 then, without loss of generality, f depends only on the last n − 1 variables; therefore,
f(x_1, c_2, …, c_n) = f(a_1, c_2, …, c_n)
for any c_2, …, c_n. In particular,
f(x_1, x_2, …, x_n) = f(a_1, x_2, …, x_n),
0 = f(x_1, a_2, x_3, …, x_n) − f(a_1, a_2, x_3, …, x_n),
⋮
0 = (−1)^n f(x_1, a_2, …, a_n) + (−1)^{n+1} f(a_1, a_2, …, a_n).
Adding these equations together, we obtain the claim.
To prove the converse, fix (a_1, …, a_n) ∈ X_1 × ⋯ × X_n. The sum on the right-hand side of Equation (23) can be split into smaller parts. For convenience, let F be the set of summands on the right-hand side of (23). Let
f_1 = {f ∈ F : f has a_1 on the first coordinate},
f_2 = {f ∈ F : f has a_2 on the second coordinate} ∖ f_1,
⋮
f_n = {f ∈ F : f has a_n on the last coordinate} ∖ ⋃_{i=1}^{n−1} f_i.
Let g_j = Σ_{g ∈ f_j} g for j ∈ {1, …, n}. Then g_j does not depend on the j-th variable, i.e.,
g_1 ∈ U(X_2 × ⋯ × X_n), g_2 ∈ U(X_1 × X_3 × ⋯ × X_n), …, g_n ∈ U(X_1 × ⋯ × X_{n−1}).
Of course, f = Σ_{i=1}^n g_i. Hence, f ∈ T_2. □
For simplicity, assume that X_i = {1, …, m_i}, m_i < +∞, for every i ∈ {1, …, n}. Then the space S can be identified with "n-dimensional matrices" of dimensions m_1 × m_2 × ⋯ × m_n. We now show a technical lemma strictly connected with the final formula for Q and the 2^n-point rule.
Lemma 5. 
For any m_1, …, m_n ∈ ℕ and n ∈ ℕ_{>0}, the following equation holds:
Π_{i=1}^n m_i = Σ_{j=1}^n Σ_{J ⊆ {1,…,n}, #J = j} Π_{k∈J} (m_k − 1) + 1.
Proof. 
We will conduct the classical inductive proof on n.
(1)
For n = 1: m_1 = (m_1 − 1) + 1.
For n = 2, the left side of Formula (24) equals m_1 · m_2, while the right side equals
(m_1 − 1)(m_2 − 1) + (m_1 − 1) + (m_2 − 1) + 1 = m_1 m_2 − m_1 − m_2 + 1 + m_1 + m_2 − 1 = m_1 m_2.
(2)
Assume that (24) is true for some l:
Π_{i=1}^l m_i = Σ_{j=1}^l Σ_{J ⊆ {1,…,l}, #J = j} Π_{k∈J} (m_k − 1) + 1.
(3)
Then
Π_{i=1}^{l+1} m_i = (m_{l+1} − 1) · Π_{i=1}^l m_i + Π_{i=1}^l m_i
= (m_{l+1} − 1) · (Σ_{j=1}^l Σ_{J ⊆ {1,…,l}, #J = j} Π_{k∈J} (m_k − 1) + 1) + Σ_{j=1}^l Σ_{J ⊆ {1,…,l}, #J = j} Π_{k∈J} (m_k − 1) + 1
= Σ_{j=1}^{l+1} Σ_{J ⊆ {1,…,l+1}, #J = j} Π_{k∈J} (m_k − 1) + 1.
In the second equality, we used the inductive assumption. The last equality comes from the fact that this expression consists of all possible products: those that contain the factor ( m l + 1 1 ) and those that do not. Therefore, we obtain the right-hand side of the formula for l + 1 .
(4)
By induction, the formula is true for any n N > 0 , which ends the proof.
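Lemma 5 also admits a quick brute-force check; the sketch below compares both sides of (24) for random tuples (m_1, …, m_n).

```python
from itertools import combinations
from math import prod
import random

def rhs(ms):
    """Right-hand side of (24): sum over nonempty J of prod_{k in J}(m_k - 1), plus 1."""
    n = len(ms)
    total = 1
    for j in range(1, n + 1):
        for J in combinations(range(n), j):
            total += prod(ms[k] - 1 for k in J)
    return total

random.seed(2)
for _ in range(20):
    ms = [random.randint(1, 6) for _ in range(random.randint(1, 5))]
    assert prod(ms) == rhs(ms)       # both sides of (24) agree
```

The identity is just the expansion of Π((m_i − 1) + 1), with the empty set contributing the final "+ 1".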
Fix k_1, l_1 ∈ {1, …, m_1}. By π^1_{k_1,l_1} we mean the permutation of the elements of X_1 × ⋯ × X_n that interchanges the elements having first coordinate equal to k_1 with those having first coordinate equal to l_1. To be more precise:
π^1_{k_1,l_1}(x_1, …, x_n) =
  (l_1, x_2, …, x_n)  if x_1 = k_1,
  (k_1, x_2, …, x_n)  if x_1 = l_1,
  (x_1, x_2, …, x_n)  if x_1 ∈ X_1 ∖ {l_1, k_1}.
Analogously, we define the permutations π^2_{k_2,l_2}, …, π^n_{k_n,l_n}. Let G be the finite group generated by these permutations. Now we can prove the generalization of Theorem 14.
Theorem 15. 
There exists a unique projection Q : S T 2 commuting with G.
Proof. 
The canonical basis of S is composed of elements
e_{x_1 ⋯ x_n}(j_1, …, j_n) = δ_{x_1 j_1} · ⋯ · δ_{x_n j_n},  where x_i, j_i ∈ X_i, i ∈ {1, …, n}.
It is enough to find the image of this basis under a mapping Q commuting with G, i.e., Q e_{j_1 ⋯ j_n}. Since
q_{j_1 ⋯ j_n} := Q e_{j_1 ⋯ j_n} = Q π^1_{j_1,1} ⋯ π^n_{j_n,1} e_{1⋯1} = π^1_{j_1,1} ⋯ π^n_{j_n,1} Q e_{1⋯1} = π^1_{j_1,1} ⋯ π^n_{j_n,1} q_{1⋯1},
we will only focus on finding a formula for q_{1⋯1}.
We know that π^i_{k_i,l_i} q_{1⋯1} = q_{1⋯1} for all k_i, l_i ∈ X_i ∖ {1}, which gives us the first type of equations:
q_{1⋯1}(k_1, j_2, …, j_n) = q_{1⋯1}(l_1, j_2, …, j_n)
for any k_1, l_1 ∈ X_1 ∖ {1} and j_i ∈ X_i, i ∈ {2, …, n}. Analogously, for k_r, l_r ∈ X_r ∖ {1}, r ∈ {2, …, n}, and j_i ∈ X_i, i ∈ {1, …, n} ∖ {r}, we obtain
q_{1⋯1}(j_1, j_2, …, j_{r−1}, k_r, j_{r+1}, …, j_n) = q_{1⋯1}(j_1, j_2, …, j_{r−1}, l_r, j_{r+1}, …, j_n).
These equations imply
$$q_{1\dots1}(j_1,j_2,\dots,j_n) = q_{1\dots1}(h_1,h_2,\dots,h_n)$$
for any $j_i, h_i \in X_i$, provided that
$$j_i = 1 \iff h_i = 1 \quad\text{for every } i\in\{1,\dots,n\}.$$
This means that $q_{1\dots1}(j_1,j_2,\dots,j_n)$ takes one of $2^n$ values, depending only on which of the numbers $j_1, j_2,\dots,j_n$ are equal to 1.
For our convenience, we use the following notation:
$$q_{1\dots1}(j_1,j_2,\dots,j_n)=\begin{cases}b_1 & \text{for } j_1=j_2=\dots=j_n=1,\\ B_a & \text{for } j_a\neq1,\ j_i=1,\ i\in\{1,2,\dots,n\}\setminus\{a\},\\ B_{ab} & \text{for } j_a, j_b\neq1,\ j_i=1,\ i\in\{1,2,\dots,n\}\setminus\{a,b\},\\ \quad\vdots\\ B_{a_1 a_2\dots a_{n-1}} & \text{for } j_a\neq1,\ a\in\{a_1,\dots,a_{n-1}\},\ j_{a_n}=1,\\ B & \text{for } j_1, j_2,\dots,j_n\neq1.\end{cases}$$
Moreover, since each sum $\sum_{i_2=1}^{m_2}\dots\sum_{i_n=1}^{m_n} e_{k i_2\dots i_n}$ depends only on the first variable, it belongs to $T_2$ and is therefore fixed by Q. This gives the second type of equations:
$$Q\Big(\sum_{i_2=1}^{m_2}\dots\sum_{i_n=1}^{m_n} e_{1 i_2\dots i_n}\Big)(1,\dots,1) = \Big(\sum_{i_2=1}^{m_2}\dots\sum_{i_n=1}^{m_n} e_{1 i_2\dots i_n}\Big)(1,\dots,1) = 1,$$
$$Q\Big(\sum_{i_2=1}^{m_2}\dots\sum_{i_n=1}^{m_n} e_{k i_2\dots i_n}\Big)(1,\dots,1) = \Big(\sum_{i_2=1}^{m_2}\dots\sum_{i_n=1}^{m_n} e_{k i_2\dots i_n}\Big)(1,\dots,1) = 0 \quad\text{for } k\neq1.$$
Likewise, for basis elements with two fixed indices,
$$Q\Big(\sum_{i_3=1}^{m_3}\dots\sum_{i_n=1}^{m_n} e_{1 1 i_3\dots i_n}\Big)(1,\dots,1) = \Big(\sum_{i_3=1}^{m_3}\dots\sum_{i_n=1}^{m_n} e_{1 1 i_3\dots i_n}\Big)(1,\dots,1) = 1,$$
$$Q\Big(\sum_{i_3=1}^{m_3}\dots\sum_{i_n=1}^{m_n} e_{1 k i_3\dots i_n}\Big)(1,\dots,1) = \Big(\sum_{i_3=1}^{m_3}\dots\sum_{i_n=1}^{m_n} e_{1 k i_3\dots i_n}\Big)(1,\dots,1) = 0 \quad\text{for } k\neq1,$$
and similarly for a larger number of fixed indices of e. The same equalities hold when other positions are fixed, i.e., for elements such as $e_{i_1 1 i_3\dots i_n},\dots,e_{i_1 i_2\dots i_{n-1} 1}, e_{1 i_2 1 i_4\dots i_n},\dots$. In particular,
$$Q\Big(\sum_{i_1=1}^{m_1} e_{i_1 1\dots1}\Big)(1,\dots,1) = Q\Big(\sum_{i_2=1}^{m_2} e_{1 i_2 1\dots1}\Big)(1,\dots,1) = 1,$$
$$Q\Big(\sum_{i_1=1}^{m_1}\sum_{i_2=1}^{m_2} e_{i_1 i_2 1\dots1}\Big)(1,\dots,1) = 1,$$
which implies
$$b_1 + (m_1-1)B_1 = 1,$$
$$b_1 + (m_2-1)B_2 = 1,$$
$$b_1 + (m_1-1)B_1 + (m_2-1)B_2 + (m_1-1)(m_2-1)B_{12} = 1.$$
Hence,
$$B_2 = -(m_1-1)B_{12}, \qquad B_1 = -(m_2-1)B_{12}.$$
Analogously,
$$B_3 = -(m_1-1)B_{13}, \qquad B_1 = -(m_3-1)B_{13}, \qquad B_2 = -(m_3-1)B_{23}, \qquad B_3 = -(m_2-1)B_{23}.$$
Similarly,
$$b_1 + (m_1-1)B_1 + (m_2-1)B_2 + (m_1-1)(m_2-1)B_{12} = 1,$$
and
$$b_1 + (m_1-1)B_1 + (m_2-1)B_2 + (m_1-1)(m_2-1)B_{12} + (m_3-1)B_3 + (m_1-1)(m_3-1)B_{13} + (m_2-1)(m_3-1)B_{23} + (m_1-1)(m_2-1)(m_3-1)B_{123} = 1,$$
which gives us
$$B_3 + (m_1-1)B_{13} + (m_2-1)B_{23} + (m_1-1)(m_2-1)B_{123} = 0.$$
From (26) we obtain
$$B_{23} + (m_1-1)B_{123} = 0,$$
hence
$$B_{23} = -(m_1-1)B_{123}.$$
Continuing the reasoning, we obtain
$$B_{a_1\dots a_k} = -(m_{a_{k+1}}-1)\,B_{a_1\dots a_{k+1}}, \qquad B_{a_1\dots a_{n-1}} = -(m_{a_n}-1)\,B,$$
where $k\in\{1,\dots,n-2\}$, $a_k\in\{1,\dots,n\}$, $a_i\neq a_j$ for $i\neq j$, and as a result
$$b_1 = 1 + (-1)^n\cdot B\cdot\prod_{i=1}^n(m_i-1).$$
Of course $q_{1\dots1}\in T_2$, and thus, applying Lemma 4 to $(x_1,x_2,\dots,x_n)=(1,1,\dots,1)$, we obtain
$$b_1 = B_1 + B_2 + \dots + B_n - B_{12} - B_{13} - \dots - B_{(n-1)n} + \dots + (-1)^{n+1}B,$$
and from (29) we obtain
$$1 + (-1)^n\cdot B\cdot\prod_{i=1}^n(m_i-1) = (-1)^{n+1}\left[\sum_{j=1}^{n-1}\sum_{\substack{J\subseteq\{1,\dots,n\}\\ \#J=j}}\,\prod_{k\in J}(m_k-1)+1\right]B.$$
Thus, by Lemma 5,
$$1 = (-1)^{n+1}\left[\sum_{j=1}^{n}\sum_{\substack{J\subseteq\{1,\dots,n\}\\ \#J=j}}\,\prod_{k\in J}(m_k-1)+1\right]B = (-1)^{n+1}\,B\,\prod_{i=1}^n m_i.$$
Consequently,
$$B = (-1)^{n+1}\cdot\frac{1}{\prod_{i=1}^n m_i}.$$
This gives us a unique projection of the form
$$q_{1\dots1}(j_1,\dots,j_n)=\frac{1}{\prod_{i=1}^n m_i}\cdot\begin{cases}\prod_{i=1}^n m_i-\prod_{i=1}^n(m_i-1) & \text{if } j_1=\dots=j_n=1,\\ (-1)^{\#J+1}\cdot\prod_{i\in\{1,\dots,n\}\setminus J}(m_i-1) & \text{if } 1\le\#J\le n-1,\\ (-1)^{n+1} & \text{if } j_1,\dots,j_n\neq1,\end{cases}$$
where $J=\{i: j_i\neq1\}$. □
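The closed-form projection lends itself to a direct numerical check. The following Python sketch (helper names are ours) evaluates $q_{1\dots1}$ from the formula in exact rational arithmetic and verifies the consequence of the second type of equations: summing $q_{1\dots1}$ over any nonempty block of free coordinates, with the remaining coordinates pinned to 1, gives 1.

```python
from fractions import Fraction
from itertools import product
from math import prod

def q(j, ms):
    """q_{1...1}(j_1,...,j_n) from the closed-form formula, exact arithmetic."""
    n, M = len(ms), prod(ms)
    J = [i for i in range(n) if j[i] != 1]
    if not J:                        # j_1 = ... = j_n = 1
        return Fraction(M - prod(m - 1 for m in ms), M)
    if len(J) == n:                  # all coordinates differ from 1
        return Fraction((-1) ** (n + 1), M)
    rest = prod(ms[i] - 1 for i in range(n) if i not in J)
    return Fraction((-1) ** (len(J) + 1) * rest, M)

ms = (2, 3, 4)
n = len(ms)
# For every nonempty set S of "free" coordinates, summing q over all j
# with j_i = 1 outside S must give 1 (the linear system from the proof).
for mask in range(1, 2 ** n):
    ranges = [range(1, ms[i] + 1) if mask >> i & 1 else (1,) for i in range(n)]
    assert sum(q(j, ms) for j in product(*ranges)) == 1
```

Each such sum collects one representative value per pattern $J\subseteq S$ with multiplicity $\prod_{k\in J}(m_k-1)$, which is exactly the system of equations reduced above.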
This result leads directly to the following theorem.
Theorem 16. 
For $i\in\{1,\dots,n\}$, let $X_i=\{1,\dots,m_i\}$ be spaces with measures $\mu_i$ such that $\mu_i(\{k_i\})=\frac{1}{m_i}$ for every $k_i\in X_i$. Let $S=L_p\big(\prod_{i=1}^n X_i\big)$ and $T_2=\sum_{i=1}^n L_p\big(\prod_{j=1, j\neq i}^n X_j\big)$. If $1\le p<+\infty$ and $Q: S\to T_2$ is given by Formula (30), then Q is a minimal projection of S into its subspace $T_2$.
Proof. 
Permutations of a function $f\in S$ do not change its norm. Hence, the operators $A_\pi$ have unit norm. Thus, from Theorem 15, we obtain the uniqueness of the projection commuting with G. Now, from Theorem 2, we deduce that the projection Q is minimal. □
Remark 3. 
Recall that Theorem 16 remains true for $p=2$. In this case, the orthogonal projection $P_0$ is minimal and
$$T_2=\sum_{i=1}^n L_2\Big(\prod_{j=1, j\neq i}^n X_j\Big).$$
By Theorem 2.9 in [21], $P_0$ is the unique minimal projection. Therefore, the projection Q from Theorem 15 equals $P_0$.
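For small parameters the coincidence of Q with the orthogonal projection can be verified directly. The sketch below (a check of ours, with hypothetical helper names) builds the matrix of Q for $n=2$ and $(m_1,m_2)=(2,3)$ from the formula for $q_{1\dots1}$ and confirms that it is idempotent and symmetric; since the basis $\{e_x\}$ is orthogonal with equal norms under the uniform measure, this identifies Q with $P_0$ in this case:

```python
from fractions import Fraction
from itertools import product

m1, m2 = 2, 3                        # a small test case with n = 2
M = m1 * m2

def q11(j1, j2):
    """q_{1...1}(j_1, j_2) from the closed formula, specialized to n = 2."""
    k = (j1 != 1) + (j2 != 1)        # #J, the number of coordinates != 1
    if k == 0:
        return Fraction(M - (m1 - 1) * (m2 - 1), M)
    if k == 2:
        return Fraction(-1, M)       # (-1)^{n+1} = -1 for n = 2
    return Fraction(m2 - 1 if j1 != 1 else m1 - 1, M)

def swap(v, j):
    """The transposition (1 j) applied to a single coordinate value v."""
    return 1 if v == j else (j if v == 1 else v)

pts = list(product(range(1, m1 + 1), range(1, m2 + 1)))
# Matrix entry A[x, j] = (Q e_j)(x) = q_{1...1}(pi^1_{j1,1} pi^2_{j2,1} x).
A = {(x, j): q11(swap(x[0], j[0]), swap(x[1], j[1])) for x in pts for j in pts}

for x in pts:
    for j in pts:
        assert A[x, j] == A[j, x]                               # symmetric
        assert sum(A[x, y] * A[y, j] for y in pts) == A[x, j]   # idempotent
```

A symmetric idempotent matrix in an orthogonal basis of equal-norm vectors represents an orthogonal projection, so the two checks together confirm $Q = P_0$ for $p = 2$ in this instance.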
Theorem 16, which provides a formula for minimal projections, can also be applied to spaces with other norms.
Theorem 3 can be applied to our spaces
$$S=L_p\Big(\prod_{i=1}^n X_i\Big) \quad\text{and}\quad T_2=\sum_{i=1}^n L_p\Big(\prod_{j=1, j\neq i}^n X_j\Big),$$
and, consequently, we obtain further extensions of Theorem 16.
Recall that by D ( X , V ) , we denote the algebra of all continuous linear operators from the Banach space X into its subspace V. If V = X , we use the shorter notation D ( X ) .
Remark 4. 
Let N be any norm on D ( S ) satisfying the condition (2) from Theorem 3. The dimension of S is finite, so N is continuous in the strong operator topology on D ( S ) and satisfies the assumptions of Theorem 3.
In the following, we present a couple of norms that satisfy the condition (2) required to use Theorem 3. Details of why these norms meet the assumptions can be found in [18].
Example 3 
(Numerical radius). For every $L\in D(S)$, define the numerical radius as
$$\|L\|_w=\sup\{|\lambda| : \lambda\in W(L)\},$$
where
$$W(L)=\{x^*(Lx) : x\in S,\ \|x\|=\|x^*\|=x^*(x)=1\}.$$
Then the projection Q given by (30) is minimal and cominimal.
Example 4 
(p-integrable operators). An operator $L: S\to T$ is called p-integrable, for $1\le p\le+\infty$, if there exist a compact set K and a probability measure $\mu$ on K such that L can be decomposed into the form
$$L = B\circ \mathrm{id}\circ A,$$
where $A\in D(S, C(K))$, $\mathrm{id}: C(K)\to L_p(K,\mu)$, and $B\in D(L_p(K,\mu), T)$. Consider the space $P(S,T)$ with the norm
$$N(L)=\inf\{\|A\|\cdot\|B\| : A, B \text{ satisfy condition (31)}\}.$$
Then the projection Q defined by (30) is minimal and cominimal.
Example 5 
(p-nuclear operators). An operator $L\in D(S,T)$ is p-nuclear (where $1\le p<\infty$) if it can be written in the form
$$Lx=\sum_{j=1}^{\infty} x_j^*(x)\, v_j,$$
where $x_j^*\in S^*$, $v_j\in T$,
$$\Delta := \Big(\sum_{j=1}^{\infty}\|x_j^*\|^p\Big)^{1/p}\cdot \sup\Big\{\Big(\sum_{j=1}^{\infty}|v^*(v_j)|^q\Big)^{1/q} : v^*\in T^*,\ \|v^*\|=1\Big\}<\infty,$$
and $1/p+1/q=1$. Then we define $N(L)$ as
$$N(L)=\inf\{\Delta : \Delta \text{ satisfies conditions (32) and (33)}\},$$
and the projection Q given by (30) is minimal and cominimal.
Remark 5. 
Theorem 16 also works for spaces S and T equipped with a norm $\|\cdot\|$ that is a convex combination of norms.
Note that Theorem 5 provides us with many other examples if we consider modulars that preserve isometries associated with elements of a group G.
Particular examples of norm-preserving modulars are Orlicz modulars (see Definition 8). This is important because, in general, minimal projections with respect to two equivalent norms need not coincide. The following example shows that a minimal projection need not be an orthogonal projection in the sense of the classical scalar product in $\mathbb{R}^n$.
Example 6. 
Let $X=\ell_\infty^{(n)}$ and $Y=\ker(f)$ for a functional f with $|f_1|>\sum_{j=2}^n|f_j|$ and $f_i\neq0$ for $i>1$. Then the operator $P_0$ given by
$$P_0x = x - f(x)\cdot\Big(\frac{1}{f_1},0,\dots,0\Big)$$
is a projection onto Y and $\|P_0\|=1$. By Theorem III.3.1 from [24] (p. 105), there is a unique minimal projection. Note that $y=(-f_2,f_1,0,\dots,0)\in Y$ and $\ker P_0=\mathrm{Lin}\big\{\big(\frac{1}{f_1},0,\dots,0\big)\big\}$. Since $f_2\neq0$, the vector $\big(\frac{1}{f_1},0,\dots,0\big)$ is not orthogonal to Y. Hence, $P_0$ is not an orthogonal projection onto Y.
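The claims of this example can be checked numerically for concrete coefficients. In the sketch below, f is a hypothetical functional of ours satisfying $|f_1|>\sum_{j\ge2}|f_j|$ and $f_i\neq0$ for $i>1$; we verify that $P_0$ is idempotent, maps into $\ker f$, and (reading the space as $\ell_\infty^{(n)}$, which is consistent with $\|P_0\|=1$ under the stated coefficient condition) has operator norm 1:

```python
# Hypothetical coefficients of f with |f_1| > |f_2| + ... + |f_n|, f_i != 0 for i > 1.
f = [5.0, 1.0, -1.5, 0.5]
n = len(f)

def P0(x):
    """P0 x = x - f(x) * (1/f_1, 0, ..., 0)."""
    fx = sum(fi * xi for fi, xi in zip(f, x))
    y = list(x)
    y[0] -= fx / f[0]
    return y

x = [1.0, 2.0, 3.0, 4.0]
y = P0(x)
assert all(abs(a - b) < 1e-12 for a, b in zip(P0(y), y))  # idempotent: P0(P0 x) = P0 x
assert abs(sum(fi * yi for fi, yi in zip(f, y))) < 1e-12  # P0 x lies in ker f

# The l_inf -> l_inf operator norm is the maximal absolute row sum of the
# matrix of P0: row 1 is (0, -f_2/f_1, ..., -f_n/f_1), the rest are identity rows.
row1 = sum(abs(f[j] / f[0]) for j in range(1, n))
assert row1 < 1.0               # strict, by the assumption on f
assert max(row1, 1.0) == 1.0    # hence ||P0|| = 1
```

The strict inequality in the first-row sum is exactly where the hypothesis $|f_1|>\sum_{j\ge2}|f_j|$ enters: it forces the identity rows to dominate, pinning the norm at 1.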

5. Summary

This study successfully generalizes the existence and uniqueness results concerning projections onto subspaces of functions defined on a product of finite sets. Building upon the groundwork laid in [6,7], we provided an explicit formula for the unique projection from the space S (of all functions on X 1 × X 2 × × X n ) onto its crucial subspace, T 2 , consisting of sums of functions depending on n 1 variables. Furthermore, we established the validity of these results not only for the traditional L p -spaces but also for a broad class of other function spaces, highlighting the underlying algebraic and structural stability of this projection operator.
The derived explicit formula for the generalized projection holds significant utility. In possible applications like data decomposition, functional ANOVA, and machine learning, this projection can serve as an optimal tool for extracting additive structures or for approximating high-dimensional functions with lower-dimensional components, minimizing error in the chosen norm. The fact that this unique projection exists across various function spaces (beyond L p ) confirms its deep structural role in the analysis of multivariable functions.
Our findings open up several promising avenues for future research. The present work focused on the projection onto the subspace of sums of functions depending on at most n − 1 variables. An immediate extension would be to investigate projections onto other natural additive subspaces of S, such as the space of all sums of functions depending on at most k variables, where k is any natural number less than n. Developing an explicit, generalized formula for these more restrictive subspaces would further complete the picture of optimal functional decomposition.
Another generalization involves moving beyond finite product spaces. Our current results rely heavily on the counting measure on finite sets, which guarantees the space is finite-dimensional. Future work should focus on extending these projection theorems to infinite-dimensional spaces defined over non-atomic measure spaces (e.g., L p ( [ 0 , 1 ] n ) with the Lebesgue measure). Extending our results to the case of non-atomic measures presents several challenges. Many steps in the discrete proofs rely on the action of the permutation group on finite sets, which forms a compact group of isometries. In the non-atomic setting, such a group is no longer available, so arguments based on averaging over permutations or exploiting discrete symmetries do not apply. Similarly, certain combinatorial identities used to reduce the linear systems to a single free parameter depend crucially on the finiteness of the index sets. While a complete treatment in this context remains open, one possible approach is to approximate non-atomic measures by discrete measures, or to identify alternative compact symmetry groups that could play a similar role. We leave a detailed analysis of these directions for future work.
In summary, this paper provides a robust foundation and explicit tools for the analysis of additive function structures in discrete, multi-dimensional settings. We anticipate that these results will serve as a starting point for deeper investigations into function decomposition in both finite and continuous measure spaces.

Author Contributions

Both authors A.K. and M.K. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financed from the subsidy of the Ministry of Science and Higher Education for the Hugo Kołłątaj Agricultural University in Kraków for the year 2025.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kolmogorov, A.N. On the representation of continuous functions of several variables by superpositions of continuous functions of one variable and addition. Dokl. Akad. Nauk SSSR 1956, 108, 179–182.
  2. Chalmers, B.L.; Lewicki, G. Symmetric spaces with maximal projection constants. J. Funct. Anal. 2003, 200, 1–22.
  3. Lewicki, G. Minimal Extensions in Tensor Product Spaces. J. Approx. Theory 1999, 97, 366–383.
  4. Lewicki, G.; Prophet, M. Minimal shape-preserving projections in tensor product spaces. J. Approx. Theory 2010, 162, 931–951.
  5. Ciesielski, M.; Lewicki, G. Substochastic operators in symmetric spaces. Math. Nachr. 2025, 298, 2309–2326.
  6. Kozdęba, M. Uniqueness of minimal projections in smooth expanded matrix spaces. Monatsh. Math. 2021, 194, 275–289.
  7. Kozdęba, M. Minimal projection onto certain subspace of $L_p(X\times Y\times Z)$. Numer. Funct. Anal. Optim. 2018, 39, 1407–1422.
  8. Shekhtman, B.; Skrzypek, L. Uniqueness of minimal projections onto two-dimensional subspaces. Stud. Math. 2005, 168, 273–284.
  9. Skrzypek, L. Chalmers–Metcalf operator and uniqueness of minimal projections in $\ell_\infty^n$ and $\ell_1^n$ spaces. Springer Proc. Math. 2012, 13, 331–344.
  10. Deregowska, B.; Lewandowska, B. Minimal projections onto hyperplanes in vector-valued sequence spaces. J. Approx. Theory 2015, 194, 1–13.
  11. Skrzypek, L. Minimal projections in spaces of functions of N variables. J. Approx. Theory 2003, 123, 214–231.
  12. Deręgowska, B.; Lewandowska, B.; Lewicki, G. Minimal projections onto subspaces generated by sign-matrices. J. Approx. Theory 2024, 304.
  13. Kobos, T.; Lewicki, G. On the dimension of the set of minimal projections. J. Math. Anal. Appl. 2024, 529, 127250.
  14. Basso, G. Almost minimal orthogonal projections. Isr. J. Math. 2021, 243, 355–376.
  15. Foucart, S.; Skrzypek, L. On maximal relative projection constants. J. Math. Anal. Appl. 2017, 447, 309–328.
  16. Lewicki, G.; Skrzypek, L. Minimal projections onto hyperplanes in $\ell_p^n$. J. Approx. Theory 2016, 202, 42–63.
  17. Rudin, W. Functional Analysis; McGraw-Hill: New York, NY, USA, 1974.
  18. Aksoy, A.G.; Lewicki, G. Minimal projections with respect to various norms. Stud. Math. 2012, 210, 1–16.
  19. Ciesielski, M.; Lewicki, G. On a certain class of norms in semimodular spaces and their monotonicity properties. J. Math. Anal. Appl. 2019, 475, 490–518.
  20. Chen, S. Geometry of Orlicz Spaces; Dissertationes Mathematicae; Polish Academy of Sciences, Institute of Mathematics: Warsaw, Poland, 1996; Volume 356, pp. 1–204.
  21. Lewicki, G.; Skrzypek, L. Chalmers–Metcalf operator and uniqueness of minimal projections. J. Approx. Theory 2007, 148, 71–91.
  22. Żwak, A. Minimal Projections in Orlicz Spaces. Univ. Iagel. Acta Math. 1995, 32, 137–147.
  23. Skrzypek, L. Uniqueness of Minimal Projections in Smooth Matrix Spaces. J. Approx. Theory 2000, 107, 315–336.
  24. Odyniec, W.; Lewicki, G. Minimal Projections in Banach Spaces; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1990; Volume 1449.
Figure 1. Graphical representation of the values of $Q e_{x_1 y_1 z_1}$. Each color corresponds to one of the possible values: black: 7/24, yellow: 5/24, red: 4/24, blue: 3/24, orange: 2/24, green: 1/24, purple: 0, gray: −2/24.

