1. Introduction
Let A be a bialgebra over a field $k$. This means that A is an associative algebra with a product $m$, an identity $1$, a coassociative coproduct $\Delta\colon A\to A\otimes A$, and a counit $\varepsilon\colon A\to k$ such that $\Delta$ and $\varepsilon$ are algebra homomorphisms. The set of all $k$-linear maps of A forms a monoid with respect to the convolution product $f*g=m(f\otimes g)\Delta$ and the identity $1\varepsilon\colon a\mapsto\varepsilon(a)1$.
A bialgebra H is called a left Hopf algebra [1] if there exists an S in $\operatorname{Hom}(H,H)$ satisfying $S*\operatorname{id}_H=1\varepsilon$, i.e., if $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$, then $\sum S(h_{(1)})h_{(2)}=\varepsilon(h)1$; this map S is called a left antipode of H. If S also satisfies $\operatorname{id}_H*S=1\varepsilon$ in $\operatorname{Hom}(H,H)$, i.e., $\sum h_{(1)}S(h_{(2)})=\varepsilon(h)1$ also holds, then the bialgebra H is called a Hopf algebra. In that case, the map S is unique; moreover, it is an algebra and coalgebra antimorphism of H (cf. [2]).
For a Hopf algebra A, if we drop the assumption that A has an identity and if we allow the coproduct $\Delta$ to take values in the so-called multiplier algebra $M(A\otimes A)$, we obtain a natural extension of the notion of a Hopf algebra, called a multiplier Hopf algebra (see [3,4]). This theory has recently been developed further into that of weak multiplier Hopf algebras (see [5,6,7,8]).
The purpose of the present paper is to consider the notion of a left Hopf algebra, introduced by Green, Nichols, and Taft [1], in the context of multiplier Hopf algebras as defined by Van Daele [3].
This article is laid out as follows. Section 2 gives the necessary definitions, notation, and some results about left multiplier Hopf algebras. Section 3 studies the bijectivity and anti-bialgebra homomorphism properties of left antipodes on a left Hopf algebra (see Theorems 1 and 2). In Section 4, we introduce and study the notion of a multiplier left (right) Hopf algebra (Definition 3); we mainly investigate the construction and properties of the counit (see Theorem 3) and the properties of the left antipode (see Theorem 4). In Section 5, we discuss the relation between multiplier Hopf algebras and multiplier left Hopf algebras (see Theorem 6), and in Section 6, we give an explicit example.
2. Preliminaries
Unless otherwise specified, all vector spaces, linear maps, and tensor products are considered over an algebraically closed field $k$ of characteristic 0. In this section, we present some facts and notation on (left) Hopf algebras that will be used throughout this paper and can be found in [1,2].
Let A be an associative algebra for which we do not require an identity, but we assume that the multiplication is non-degenerate. For this A, one has the multiplier algebra $M(A)$. Obviously, $A\subseteq M(A)$; in particular, when A has an identity, one has $A=M(A)$. $M(A)$ can be characterized as the largest unital algebra containing A as an essential ideal. We denote the multiplier algebra of $A\otimes A$ by $M(A\otimes A)$. The following conclusion is very natural: $A\otimes A\subseteq M(A)\otimes M(A)\subseteq M(A\otimes A)$.
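For completeness, we recall one standard description of the multiplier algebra from [3,4]; the notation $(\lambda,\rho)$ below is our own.
\[
M(A)=\bigl\{(\lambda,\rho)\ \big|\ \lambda,\rho\colon A\to A\ \text{linear},\ \lambda(ab)=\lambda(a)b,\ \rho(ab)=a\rho(b),\ a\lambda(b)=\rho(a)b\ \text{for all }a,b\in A\bigr\}.
\]
For a multiplier $x=(\lambda,\rho)$ one writes $xa:=\lambda(a)$ and $ax:=\rho(a)$; the algebra A sits inside $M(A)$ by letting $a\in A$ act by left and right multiplication, and the non-degeneracy of the product makes this embedding injective and A an essential ideal of $M(A)$.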
Let $\Delta\colon A\to M(A\otimes A)$ be a linear map. Then we have the following four Galois linear maps from $A\otimes A$ to $M(A\otimes A)$ (see [3,4,9,10] and the sketch below). These linear maps are called the canonical maps related to $\Delta$. These maps are called regular if their images are contained in $A\otimes A$. The coproduct $\Delta$ is called coassociative if the coassociativity condition recalled in Definition 1 below holds for all elements of A.
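Since conventions for numbering these maps vary and the displayed formulas are omitted above, we record one common choice from [3,4]; the labels $T_1,\dots,T_4$ are our own assumption about the intended notation.
\[
T_1(a\otimes b)=\Delta(a)(1\otimes b),\qquad T_2(a\otimes b)=(a\otimes 1)\Delta(b),
\]
\[
T_3(a\otimes b)=(1\otimes b)\Delta(a),\qquad T_4(a\otimes b)=\Delta(b)(a\otimes 1),\qquad a,b\in A.
\]
All four are well defined as maps from $A\otimes A$ to $M(A\otimes A)$, and regularity as defined above means, for example, that $\Delta(a)(1\otimes b)$ actually lies in $A\otimes A$.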
We assume that A is an associative algebra with a non-degenerate product, with or without an identity.
A left multiplier Hopf algebra is a pair $(A,\Delta)$, where $\Delta$ is coassociative and an algebra map such that the two canonical maps required in Definition 1.5 of [9] are not only regular but also bijections of $A\otimes A$. In the same way, one can define a right multiplier Hopf algebra by means of the remaining two canonical maps.
3. About Antipodes of Left Hopf Algebras
The objective of this section is to study the bijectivity and anti-bialgebra homomorphism properties of left antipodes on a left Hopf algebra.
As stated in the introduction, a left Hopf algebra H is a unital bialgebra with an identity $1$, a coproduct $\Delta$, a counit $\varepsilon$, and a left antipode S. We will use the standard Sweedler notation (cf. [2]): $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$. The axioms of the left antipode and the counit can then be written as
\[
\sum S(h_{(1)})h_{(2)}=\varepsilon(h)1,\qquad \sum \varepsilon(h_{(1)})h_{(2)}=h=\sum h_{(1)}\varepsilon(h_{(2)}),
\]
for any $h\in H$.
Similarly, a right Hopf algebra H is a bialgebra with a linear map S satisfying the identity
\[
\sum h_{(1)}S(h_{(2)})=\varepsilon(h)1
\]
for any $h\in H$.
Remark 1. - (1)
Let C be a coalgebra of comatrices. Then, there is the free left Hopf algebra on C. This is the first example of left Hopf algebras given in [1]; these are not Hopf algebras. The specific left antipode constructed there is both an algebra and a coalgebra antimorphism.
- (2)
For a left Hopf algebra with a left antipode S, if S also obeys a certain additional identity, then there are infinitely many left antipodes on H (see Theorem 3 in [11]).
- (3)
In the case of a left Hopf algebra or a right Hopf algebra, the antipode S is, in general, neither an algebra anti-homomorphism nor a coalgebra anti-homomorphism.
The following result is the motivation for our definitions of multiplier left (right) Hopf algebras.
Proposition 1. Let H be a unital bialgebra with a linear map $S\colon H\to H$. We consider the following two linear maps from $H\otimes H$ to $H\otimes H$, defined in terms of $\Delta$ and S for any $a,b\in H$. - (1)
If H is a left Hopf algebra, then we have
- (i)
and .
- (ii)
and .
- (iii)
, for any .
- (2)
If H is a right Hopf algebra, then we have
- (i)
and .
- (ii)
and .
- (iii)
, for any .
Proof. - (1)
The proof of (i) and (ii) is straightforward. For (iii), the identity follows by a direct calculation, valid for any elements of H.
- (2)
This case can be checked in a similar way.
□
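To indicate where such one-sided inverses come from, we record a sketch under the standard conventions of [3]; the maps $T_1,T_2,R_1,R_2$ below, and the guess that they are the maps meant in Proposition 1, are our own assumptions, since the displayed formulas above are omitted. For a unital bialgebra H with a linear map $S\colon H\to H$, put
\[
T_1(a\otimes b)=\sum a_{(1)}\otimes a_{(2)}b,\qquad T_2(a\otimes b)=\sum ab_{(1)}\otimes b_{(2)},
\]
\[
R_1(a\otimes b)=\sum a_{(1)}\otimes S(a_{(2)})b,\qquad R_2(a\otimes b)=\sum aS(b_{(1)})\otimes b_{(2)}.
\]
If S is a left antipode, i.e., $\sum S(h_{(1)})h_{(2)}=\varepsilon(h)1$, then
\[
R_1T_1(a\otimes b)=\sum a_{(1)}\otimes S(a_{(2)})a_{(3)}b=\sum a_{(1)}\otimes \varepsilon(a_{(2)})b=a\otimes b,
\]
\[
T_2R_2(a\otimes b)=\sum aS(b_{(1)})b_{(2)}\otimes b_{(3)}=\sum \varepsilon(b_{(1)})\,a\otimes b_{(2)}=a\otimes b,
\]
so $T_1$ is injective and $T_2$ is surjective. This injective/surjective pattern is exactly the one required in Definition 3 of Section 4 below.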
As a corollary of Proposition 1, we have the main result in [3] as follows.
Corollary 1. A bialgebra H is a Hopf algebra if and only if the maps and are bijective.
Proof. If H is a Hopf algebra, then it follows from (1)(i) in Proposition 1 that the map has the left inverse . It is easy to check that is also a right inverse of , i.e., . The argument is similar for the map . Thus, the maps and are bijective.
Conversely, for example for , we have for any . We can apply and obtain , i.e., S is also a right antipode of H.
Furthermore, notice that being bijective implies that for any (actually, only the injectivity of is used). We can apply and obtain , i.e., S is an algebra antihomomorphism. □
As a corollary of Proposition 1, we also have the following.
Corollary 2. A left Hopf algebra need not be a left multiplier Hopf algebra. Similarly, a right Hopf algebra need not be a right multiplier Hopf algebra.
Let H be a left Hopf algebra. We now consider the ring of all linear endomorphisms of $H\otimes H$ and two of its elements. We introduce a family of elements, indexed by pairs $i,j=0,1,2,\dots$, built from these two endomorphisms. It can be verified directly that the elements thus defined satisfy the multiplication table for matrix units, governed by the Kronecker symbol; in particular, the diagonal elements are orthogonal idempotent elements (a sketch of one such construction is given below).
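A sketch of one such construction, in notation of our own choosing (the specific endomorphisms and symbols intended above are omitted in the displayed formulas): if a unital ring contains elements $x,y$ with $yx=1$ but $xy\neq 1$, then the elements
\[
E_{ij}=x^{\,i}(1-xy)\,y^{\,j},\qquad i,j=0,1,2,\dots,
\]
satisfy $E_{ij}E_{kl}=\delta_{jk}E_{il}$ and are all nonzero; in particular, the $E_{ii}$ form an infinite family of orthogonal idempotents. In the present setting one would take the ring to be the endomorphism ring of $H\otimes H$ and, for instance, $x=T_1$ and $y=R_1$ from the sketch following Proposition 1, for which $R_1T_1=1$, while $T_1R_1\neq 1$ whenever $T_1$ is not bijective.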
Similarly, we can consider the ring of all linear endomorphisms of $H\otimes H$ together with another two of its elements, and introduce the analogous family of elements indexed by $i,j=0,1,2,\dots$. The corresponding diagonal elements are again orthogonal idempotent elements.
Proposition 2. With the above notations, the elements constructed above are nonzero for all indices.
Proof. Indeed, by the matrix-unit relations, the vanishing of one of them implies the vanishing of all; in particular, this would imply an identity contrary to Proposition 1(1)(i).
In the same way, the elements of the second family are nonzero for all indices. □
The existence of an infinite set of orthogonal idempotent elements in a ring R is incompatible with mild chain conditions on the ring. For the set of diagonal idempotent elements constructed above, we form the partial sums of the first finitely many of them; the right ideals they generate form an infinite properly ascending chain. The right annihilator of an idempotent element F is the set of elements $x$ with $Fx=0$. Since the ring has an identity, this right ideal is the principal right ideal generated by $1-F$. It is clear that the corresponding annihilators form an infinite properly descending chain (sketched below in our notation).
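In the notation of the matrix-unit sketch above (again our own, since the displayed formulas are omitted), these chains read
\[
F_n=E_{00}+E_{11}+\cdots+E_{nn},\qquad F_0\mathcal{R}\subsetneq F_1\mathcal{R}\subsetneq F_2\mathcal{R}\subsetneq\cdots,\qquad (1-F_0)\mathcal{R}\supsetneq(1-F_1)\mathcal{R}\supsetneq(1-F_2)\mathcal{R}\supsetneq\cdots,
\]
where $\mathcal{R}$ denotes the endomorphism ring in question. Both chains are proper because $E_{n+1,n+1}$ lies in $F_{n+1}\mathcal{R}$ and in $(1-F_n)\mathcal{R}$, but in neither $F_n\mathcal{R}$ nor $(1-F_{n+1})\mathcal{R}$.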
By Theorem 1 in [11] and Proposition 1, we have the following result.
Proposition 3. Let H be a left Hopf algebra with a left antipode S.
- (1)
If satisfies either the ascending or the descending chain condition for principal right ideals generated by idempotent elements , then is bijective.
- (2)
If satisfies either the ascending or the descending chain condition for principal right ideals generated by idempotent elements , then is bijective.
Combining Corollary 1 and Proposition 3, we get the following.
Theorem 1. Let H be a left Hopf algebra with a left antipode S. If satisfies either the ascending or the descending chain condition for principal right ideals generated by idempotent elements and satisfies either the ascending or the descending chain condition for principal right ideals generated by idempotent elements , then H is a Hopf algebra.
By Theorem 2 in [11] and Proposition 1, we have the following result.
Proposition 4. Let H be a left Hopf algebra or a right Hopf algebra. Then the endomorphism ring considered above contains a right ideal that is a direct sum of infinitely many pairwise isomorphic right ideals.
In what follows, we wish to determine the structure of the algebra generated by the first pair of elements and of the algebra generated by the second pair.
By Theorem 4 in [11] and Proposition 1, we have the following.
Theorem 2. The algebras and are isomorphic under an isomorphism that pairs the and the , and the and the , respectively. The algebras and are primitive algebras that have minimal one-sided ideals.
The case of right Hopf algebras is similar: we can obtain the analogues of Propositions 2 and 3, while Theorem 1 and Proposition 4 carry over without change.
4. Multiplier Left Hopf Algebras
In this section, let A be an algebra with a nondegenerate product m; A may or may not have an identity. We work with the following notion of a coproduct (see Definition 1.1 in [12]).
Definition 1. A coproduct (or comultiplication) on A is a homomorphism $\Delta\colon A\to M(A\otimes A)$ such that
- (i)
$\Delta(a)(1\otimes b)\in A\otimes A$ and $(a\otimes 1)\Delta(b)\in A\otimes A$ for all $a,b\in A$;
- (ii)
Δ is coassociative in the sense that the two ways of applying Δ a second time agree on such products, for all $a,b,c\in A$. Moreover, coassociativity of Δ can be expressed in terms of the canonical maps (see the sketch below).
We remark that condition (i) makes sense because $\Delta(a)$, $1\otimes b$, and $a\otimes 1$ all lie in $M(A\otimes A)$, so the products are defined there. These conditions, in fact, imply that Δ is a nondegenerate homomorphism.
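For the reader's convenience, we record the usual explicit form of coassociativity from [3,12]; whether this is exactly the omitted display is our assumption, and $\iota$ denotes the identity map of A.
\[
(a\otimes 1\otimes 1)\bigl((\Delta\otimes\iota)(\Delta(b)(1\otimes c))\bigr)=\bigl((\iota\otimes\Delta)((a\otimes 1)\Delta(b))\bigr)(1\otimes 1\otimes c)\qquad\text{for all }a,b,c\in A.
\]
In terms of the canonical maps $T_1(a\otimes b)=\Delta(a)(1\otimes b)$ and $T_2(a\otimes b)=(a\otimes 1)\Delta(b)$, this identity can be written as $(T_2\otimes\iota)(\iota\otimes T_1)=(\iota\otimes T_1)(T_2\otimes\iota)$ on $A\otimes A\otimes A$.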
First, we want the existence of a left (right) counit.
Definition 2. Let Δ be a coproduct on the algebra A as in Definition 1.
- (i)
A linear functional ε on A is called a left counit if the corresponding counit identity holds for all $a,b\in A$.
- (ii)
A linear functional ε on A is called a right counit if the corresponding counit identity holds for all $a,b\in A$.
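For comparison, we recall the counit identities of a (regular) multiplier Hopf algebra from [3]; presumably a left or right counit as above retains exactly one of them, although which identity carries which label is our assumption, since the displays are omitted.
\[
(\varepsilon\otimes\iota)(\Delta(a)(1\otimes b))=ab,\qquad(\iota\otimes\varepsilon)((a\otimes 1)\Delta(b))=ab\qquad\text{for all }a,b\in A.
\]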
We now define our multiplier left (right) Hopf algebras.
Definition 3. Assume that A is a non-degenerate algebra with a comultiplication Δ. Then:
- (i)
We say that $(A,\Delta)$ is a multiplier left Hopf algebra if A has a right counit and, of the two associated canonical maps, one is an injective map and the other a surjective map of $A\otimes A$ to itself. We call A regular if the corresponding condition also holds when Δ is composed with the flip map σ; in that case, the resulting pair is also a multiplier left Hopf algebra.
- (ii)
We call $(A,\Delta)$ a multiplier right Hopf algebra if A has a left counit and, of the two associated canonical maps, one is a surjective map and the other an injective map of $A\otimes A$ to itself. We call A regular if the corresponding condition holds with the same flip map σ as in (i); in that case, the resulting pair is also a multiplier right Hopf algebra.
Clearly, $(A,\Delta)$ is a multiplier left Hopf algebra if and only if the corresponding opposite structure is a multiplier right Hopf algebra, where the underlying algebra is the opposite algebra of A. For this reason, we only discuss the case of multiplier left Hopf algebras; the right-hand version is treated similarly.
Example 1. - (1)
A left Hopf algebra is a multiplier left Hopf algebra. Similarly, any right Hopf algebra is a multiplier right Hopf algebra.
- (2)
Any multiplier Hopf algebra is both a multiplier left Hopf algebra and a multiplier right Hopf algebra, since the two canonical maps are bijective (see [3]).
Definition 4. If A is a ∗-algebra, we say that Δ is a coproduct on it if Δ itself is also a ∗-homomorphism. A multiplier left (resp. right) Hopf ∗-algebra is a ∗-algebra with such a comultiplication, making it into a multiplier left (resp. right) Hopf algebra.
Also, for a multiplier left (right) Hopf ∗-algebra, the regularity is automatic.
In what follows, we let $(A,\Delta)$ be a multiplier left Hopf algebra with a right counit ε. We will show that ε is a homomorphism that has the properties of the counit in the usual left Hopf algebra theory.
Following the idea of [3] (note, however, that our map is not bijective), we proceed as follows.
Definition 5. Let $(A,\Delta)$ be a multiplier left Hopf algebra as defined in Definition 3(i). We define a map E from A to the right multiplier algebra of A. Notice that if the relevant expression vanishes, then it follows from the defining formula, by applying the appropriate map, that the corresponding value vanishes as well. This means that $E(a)$ is well defined as a right multiplier.
In the following lemma, we show that this right multiplier is actually a scalar multiple of the identity.
Lemma 1. With the notation E as above, we have the required identity for all elements of A.
Proof. Let $a,b\in A$ and let f be any linear functional on A. Then, we do a calculation as follows. By the definition of E, we obtain the corresponding identity, and because this holds for all f, we obtain the required formula in the multiplier algebra. □
Proposition 5. With the above notations, for all , we have
- (i)
and .
- (ii)
.
- (iii)
If ε is also a right counit, then .
Proof.
- (i) From the definition of E, we have for all . This gives the result.
- (ii) By Lemma 1, for all we have . It implies that .
- (iii) Since , for all . Then, by the surjectivity of , we get and . This means . □
Therefore, altogether we obtain the following result.
Theorem 3. Let A be a multiplier left Hopf algebra with a right counit ε. Then, ε is an algebra homomorphism such that the stated identity holds for all elements of A. It is clear that if A has an identity, then ε is a counit in the usual sense.
We now show that a left antipode exists with the expected properties of the left antipode in the usual left Hopf algebra theory.
Definition 6. Define a map, for all elements of A, by the indicated formulas. We can rewrite these formulas in an equivalent form. As before, from this formula it is easy to see that this indeed gives a well-defined linear map from A to the algebra of right multipliers.
Proposition 6. For all elements of A, we have the two identities stated above.
Proof. One begins as in the proof of Lemma 1. For the given elements, setting the auxiliary expression as above, we have the displayed computation. It follows that both identities hold. □
Using the Sweedler notation, one obtains a more familiar form. From the result above, we have that the relevant expression is a well-defined element in the appropriate multiplier algebra. This is what we need for the formula in Equation (4).
Then, we have the following main result.
Theorem 4. If $(A,\Delta)$ is a multiplier left Hopf algebra, then there exists a linear map S such that the displayed identities hold for all elements of A. It is not hard to show that the above formulas determine S, just as in the case of the counit. If A has an identity, then we recover the usual formula. If we combine this with the results on ε, we obtain the following theorem.
Theorem 5. If A is a multiplier left Hopf algebra with an identity, then A is a left Hopf algebra.
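For comparison with Theorems 3 and 4, we recall the antipode identities of a multiplier Hopf algebra from [3]:
\[
m(S\otimes\iota)(\Delta(a)(1\otimes b))=\varepsilon(a)b,\qquad m(\iota\otimes S)((a\otimes 1)\Delta(b))=\varepsilon(b)a\qquad\text{for all }a,b\in A.
\]
When A has an identity, the first identity reduces to $\sum S(a_{(1)})a_{(2)}=\varepsilon(a)1$, the left antipode axiom of Section 3. That a multiplier left Hopf algebra retains precisely the left-handed identity is our reading of Theorem 4, since the displayed formulas there are omitted.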
Example 2. Let A be a left Hopf algebra with a left antipode and B a multiplier Hopf algebra with an antipode . Then, the tensor product algebra $A\otimes B$ can be made into a multiplier left Hopf algebra with the following structures: We notice that and, similarly, . It is easy to check that . Since is a left antipode with , and by Proposition 6, we have .
5. Multiplier Hopf Algebras and Multiplier Left Hopf Algebras
Let A be an algebra with a nondegenerate product, with or without an identity. In this section, we mainly generalize the results obtained in Section 3 to the multiplier setting; in particular, we give a sufficient condition for a multiplier left Hopf algebra to be a multiplier Hopf algebra.
Let Δ be a coproduct on A. From Definition 1, we have that $\Delta(a)(1\otimes b)\in A\otimes A$ and $(a\otimes 1)\Delta(b)\in A\otimes A$ for all $a,b\in A$. We can then define the two maps as in Equations (1) and (2), respectively.
Similar to Proposition 3, we have
Proposition 7. Let A be a multiplier left Hopf algebra.
- (1)
If satisfies either the ascending or the descending chain condition for principal right ideals generated by idempotent elements , then is bijective.
- (2)
If satisfies either the ascending or the descending chain condition for principal right ideals generated by idempotent elements , then is bijective.
Similar to Theorem 1, we have
Theorem 6. Let A be a multiplier left Hopf algebra. If satisfies either the ascending or the descending chain conditions for principal right ideals both generated by idempotent elements and generated by idempotent elements , then A is a multiplier Hopf algebra.
Finally, the following is easy to see.
Proposition 8. - (i)
A multiplier left Hopf algebra need not be a left multiplier Hopf algebra. Conversely, any left multiplier Hopf algebra is a multiplier left Hopf algebra.
- (ii)
Similarly, a multiplier right Hopf algebra need not be a right multiplier Hopf algebra. Conversely, any right multiplier Hopf algebra is a multiplier right Hopf algebra.
Proof. - (i)
If A is a multiplier left Hopf algebra, it does not follow that the relevant canonical map is bijective; indeed, bijectivity of the other canonical map does not follow either. Conversely, it follows from Theorem 3.8 in [9] that any left multiplier Hopf algebra is a regular multiplier Hopf algebra, and the claim then follows from Example 1(2).
- (ii)
The argument is similar to the one in (i).
□
Proposition 9. Let A be a multiplier left Hopf algebra that is left or right Noetherian as an algebra. Then, A is a multiplier Hopf algebra.
Proof. First, assume that A is left Noetherian. For , we write . Set , with and as above; V is clearly -stable and, since , V is also -stable. We have that . Since , we have that (note that for ). Since A is left Noetherian, V is a Noetherian left A-module. As before, imply , so the restriction of to V is injective. This implies that is injective. For if , each is contained in a finitely generated left A-submodule of for which . Therefore, ; V is a finitely generated left A-module with . Since V is Noetherian, is injective on V as above. Thus, , and we continue as before.
If A is right Noetherian, we consider as a right A-module and consider and . Then, . We set and proceed as above. □
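A standard fact in this circle of ideas, recorded here for convenience (whether it is the precise step intended in the omitted formulas above is our assumption): a surjective endomorphism f of a Noetherian module V is injective, and consequently any endomorphism of V that admits f as a left inverse is bijective.
\[
\ker f\subseteq\ker f^{2}\subseteq\cdots\ \text{stabilizes at some }n;\quad \text{if }f(x)=0\text{ and }x=f^{\,n}(y),\ \text{then }f^{\,n+1}(y)=0,\ \text{so }y\in\ker f^{\,n}\ \text{and }x=f^{\,n}(y)=0.
\]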
6. Examples
We give some examples in this section to illustrate various notions and results obtained in the previous sections. In particular, we provide insight into previously given theorems.
Recall from [13] or [14] that the bialgebra , in is a free noncommutative algebra on the , with the following three relations: , with coalgebra structure given by and . The quantum determinant is group-like but no longer central. We can invert d by introducing a non-central variable y and imposing the relations and . Let X be the generic matrix . Then, the left antipode S is given by and .
Recall from [4] that a multiplier Hopf algebra is a nonunital associative algebra generated by a linear basis , with the product induced by the following commutation rules: , where and denotes the Kronecker delta symbol. The unit in is given as . The coproduct, counit, and antipode on are given as follows:
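We expect that the algebra recalled here is, or closely resembles, the standard example from [4] of functions with finite support on an infinite group G; we record our recollection of it, with all notation below being our own. The algebra $K(G)$ has linear basis $\{\delta_p\}_{p\in G}$ and product $\delta_p\delta_q=\delta_{p,q}\,\delta_p$; the unit of $M(K(G))$ is the formal sum $\sum_{p\in G}\delta_p$, and
\[
\Delta(\delta_p)=\sum_{q\in G}\delta_q\otimes\delta_{q^{-1}p},\qquad \varepsilon(\delta_p)=\delta_{p,e},\qquad S(\delta_p)=\delta_{p^{-1}},
\]
where e is the identity element of G. Note that $\Delta(\delta_p)(1\otimes\delta_r)=\delta_{pr^{-1}}\otimes\delta_r\in K(G)\otimes K(G)$, so Δ indeed defines a comultiplication with values in $M(K(G)\otimes K(G))$.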
By Example 2, we have the following.
Example 3. With the above notations, is a multiplier left Hopf algebra with the following structures:
The product: for all , . The coproduct: for all , . The counit: for all , . The antipode: for all , .
We can compute as follows: . In the same way, we can get and and . Similarly, we can compute the following items: . Since is a left antipode and from Proposition 6, we have for all , where S is the antipode of .