Weak Multiplier Hopf Algebras II. The source and target algebras

In this paper, we continue the study of weak multiplier Hopf algebras. We recall the notions of the source and target maps $\varepsilon_s$ and $\varepsilon_t$, as well as of the source and target algebras. Then we investigate these objects further. Among other things, we show that the canonical idempotent $E$ (which plays the role of $\Delta(1)$) belongs to the multiplier algebra $M(B\otimes C)$, where $B=\varepsilon_s(A)$ and $C=\varepsilon_t(A)$, and that it is a separability idempotent. We also consider special cases and examples in this paper. In particular, we see how for any weak multiplier Hopf algebra, it is possible to make $C\otimes B$ (with $B$ and $C$ as above) into a new weak multiplier Hopf algebra. In a sense, this construction forgets the 'Hopf algebra part' of the original weak multiplier Hopf algebra and only remembers the source and target algebras. It is in turn generalized to the case of any pair of non-degenerate algebras $B$ and $C$ with a separability idempotent $E\in M(B\otimes C)$. Using this theory, we get another example associated to any discrete quantum group (a multiplier Hopf algebra of discrete type with a normalized cointegral). Finally, we also consider the well-known 'quantization' of the groupoid that comes from an action of a group on a set. All these constructions provide interesting new examples of weak multiplier Hopf algebras (that are not weak Hopf algebras).


Introduction
Consider a groupoid G. It is a set with a product that is not defined for all pairs of elements p, q ∈ G: the product pq is defined only if the so-called source s(p) of p equals the target t(q) of q. The source and target are maps from G to the so-called units of G. Here we consider the units as a subset of G. The product is associative in the obvious sense and for any element p ∈ G, there is a unique inverse p⁻¹ characterized by the properties that s(p⁻¹) = t(p) and t(p⁻¹) = s(p) and that p⁻¹p = s(p) and pp⁻¹ = t(p). We refer to basic works on groupoids for a more precise definition and details. See the paragraph on basic references further in this introduction.
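The pair groupoid of a set gives the simplest illustration of these axioms. The following is a minimal Python sketch (the set X and the tuple-based encoding are our own illustrative choices, not from the paper) that checks the defining properties of the source, target and inverse for G = X × X, the groupoid that reappears in Section 3:

```python
# Pair groupoid G = X x X with product (z, y)(y, x) = (z, x).
# The set X and the tuple encoding are illustrative choices.
X = [0, 1, 2]
G = [(z, y) for z in X for y in X]

def mul(p, q):
    # pq is defined only when the source of p equals the target of q
    return (p[0], q[1]) if p[1] == q[0] else None

def inv(p):          # the inverse p^{-1}
    return (p[1], p[0])

def s(p):            # source s(p), a unit of G
    return (p[1], p[1])

def t(p):            # target t(p), a unit of G
    return (p[0], p[0])

for p in G:
    # the characterizing properties of the inverse
    assert s(inv(p)) == t(p) and t(inv(p)) == s(p)
    assert mul(inv(p), p) == s(p) and mul(p, inv(p)) == t(p)

for p in G:
    for q in G:
        # pq is defined exactly when s(p) = t(q)
        assert (mul(p, q) is not None) == (s(p) == t(q))
```

Here `None` encodes "the product is not defined"; the asserts pass for any finite set X.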
To any groupoid G are associated two (regular) weak multiplier Hopf algebras (in duality). First there is the algebra A, defined as the space K(G) of complex functions on G with finite support and pointwise product. A coproduct ∆ on K(G) is defined by (∆(f))(p, q) = f(pq) if pq is defined and (∆(f))(p, q) = 0 otherwise. The pair (A, ∆) is a regular weak multiplier Hopf algebra (in the sense of ). The idempotent multiplier E in M(A ⊗ A) (playing the role of ∆(1) and sometimes called the canonical idempotent of the weak multiplier Hopf algebra) is given by the function on pairs (p, q) in G × G that is 1 if pq is defined and 0 if this is not the case. The antipode S is defined by (S(f))(p) = f(p⁻¹) whenever f ∈ K(G) and p ∈ G.
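For a finite set the algebra K(G) of the pair groupoid can be realized directly, and the structure maps above can be verified mechanically. A sketch (dict encodings and the choice of test functions are ours):

```python
# Finite model: G is the pair groupoid on X, so K(G) is simply all
# (here: integer-valued) functions on G, encoded as dicts.
X = [0, 1]
G = [(z, y) for z in X for y in X]
mul = lambda p, q: (p[0], q[1]) if p[1] == q[0] else None
inv = lambda p: (p[1], p[0])

def coproduct(f):
    # (Delta f)(p, q) = f(pq) if pq is defined and 0 otherwise
    return {(p, q): (f[mul(p, q)] if mul(p, q) is not None else 0)
            for p in G for q in G}

def antipode(f):
    # (S f)(p) = f(p^{-1})
    return {p: f[inv(p)] for p in G}

# canonical idempotent: E(p, q) = 1 precisely when pq is defined
E = {(p, q): (1 if mul(p, q) is not None else 0) for p in G for q in G}

f = {p: 1 + 2 * p[0] + 3 * p[1] for p in G}
g = {p: 5 + p[0] * p[1] for p in G}
fg = {p: f[p] * g[p] for p in G}          # pointwise product in K(G)

Df, Dg, Dfg = coproduct(f), coproduct(g), coproduct(fg)
# Delta is a homomorphism into the algebra of functions on G x G
assert all(Dfg[k] == Df[k] * Dg[k] for k in Dfg)
# E is idempotent and Delta(f) = E Delta(f) = Delta(f) E
assert all(E[k] * E[k] == E[k] and Df[k] == E[k] * Df[k] for k in E)
```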
In this example, the source algebra A_s is the algebra of functions on G so that f(p) = f(q) whenever p, q ∈ G satisfy s(p) = s(q). The source map ε_s from A to A_s is defined by (ε_s(f))(p) = f(p⁻¹p) whenever p ∈ G and f ∈ K(G). The target algebra A_t consists of functions f on G so that f(p) = f(q) if t(p) = t(q) for p, q ∈ G. The target map ε_t from A to A_t is defined by (ε_t(f))(p) = f(pp⁻¹) for all p ∈ G and f ∈ K(G). Recall that these two algebras are subalgebras of the multiplier algebra M(A) (here the algebra of all complex functions on G). Observe that the ranges ε_s(A) and ε_t(A) of the source and target maps can be strictly smaller than the source and target algebras A_s and A_t respectively. This happens when the set of units is infinite. We refer to Section 1 in  for more details on this example.
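In the finite model, the formulas for the source and target maps can be checked to land in A_s and A_t respectively; a sketch (our own encoding):

```python
# Source and target maps on K(G) for the pair groupoid on X
X = [0, 1, 2]
G = [(z, y) for z in X for y in X]
mul = lambda p, q: (p[0], q[1]) if p[1] == q[0] else None
inv = lambda p: (p[1], p[0])
s = lambda p: (p[1], p[1])
t = lambda p: (p[0], p[0])

eps_s = lambda f: {p: f[mul(inv(p), p)] for p in G}  # (eps_s f)(p) = f(p^{-1} p)
eps_t = lambda f: {p: f[mul(p, inv(p))] for p in G}  # (eps_t f)(p) = f(p p^{-1})

f = {p: 1 + 3 * p[0] + 7 * p[1] for p in G}
Es, Et = eps_s(f), eps_t(f)
for p in G:
    for q in G:
        if s(p) == s(q):
            assert Es[p] == Es[q]   # eps_s(f) only depends on the source
        if t(p) == t(q):
            assert Et[p] == Et[q]   # eps_t(f) only depends on the target
```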
For the second case, we take the algebra B, defined as the groupoid algebra CG of G. If we use p → λ_p for the canonical imbedding of G in CG, then for p, q ∈ G, we have λ_pλ_q = λ_pq if pq is defined and λ_pλ_q = 0 otherwise. Here the canonical idempotent E is given by Σ_e λ_e ⊗ λ_e where the sum is taken over the units e of G. The antipode is given by S(λ_p) = λ_p⁻¹ for all p ∈ G. The source and target maps are given by ε_s(λ_p) = λ_e where e = s(p) and ε_t(λ_p) = λ_e where now e = t(p). Here the source and target algebras B_s and B_t coincide, and they equal the multiplier algebra of the span of elements of the form λ_e where e is a unit of G. Also in this case, the images of the source and target maps can be strictly smaller than the source and target algebras.
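A finite sketch of the groupoid algebra CG (the dict-of-coefficients encoding is our own choice) verifies that the convolution product, the idempotent E = Σ_e λ_e ⊗ λ_e and the source and target maps behave as stated:

```python
# Groupoid algebra CG of the pair groupoid on X; elements are dicts p -> coefficient
from collections import defaultdict
from itertools import product

X = [0, 1]
G = [(z, y) for z in X for y in X]
units = [(x, x) for x in X]
mul = lambda p, q: (p[0], q[1]) if p[1] == q[0] else None
inv = lambda p: (p[1], p[0])
lam = lambda p: {p: 1}

def conv(a, b):
    # lambda_p lambda_q = lambda_{pq} when pq is defined and 0 otherwise
    c = defaultdict(int)
    for p, q in product(a, b):
        if mul(p, q) is not None:
            c[mul(p, q)] += a[p] * b[q]
    return dict(c)

def conv2(a, b):
    # the product in CG (x) CG, computed leg by leg
    c = defaultdict(int)
    for (p1, p2), (q1, q2) in product(a, b):
        r1, r2 = mul(p1, q1), mul(p2, q2)
        if r1 is not None and r2 is not None:
            c[(r1, r2)] += a[(p1, p2)] * b[(q1, q2)]
    return dict(c)

E = {(e, e): 1 for e in units}     # E = sum over units of lambda_e (x) lambda_e
assert conv2(E, E) == E            # E is idempotent
assert conv(lam((1, 0)), lam((0, 1))) == lam((1, 1))
assert conv(lam((1, 0)), lam((1, 0))) == {}   # undefined product gives 0
for p in G:
    # eps_s(lambda_p) = lambda_{s(p)} and eps_t(lambda_p) = lambda_{t(p)}
    assert conv(lam(inv(p)), lam(p)) == lam((p[1], p[1]))
    assert conv(lam(p), lam(inv(p))) == lam((p[0], p[0]))
```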
Again, we refer to Section 1 in  for more details on this example.
These two cases are dual to each other. The duality is given by ⟨f, λ_p⟩ = f(p) whenever f ∈ K(G) and p ∈ G. We give more details (about this duality) in  where we treat duality for regular weak multiplier Hopf algebras with integrals.
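In the finite model the duality can be tested directly: the convolution product of CG is dual to the coproduct of K(G), and the coproduct ∆(λ_p) = λ_p ⊗ λ_p is dual to the pointwise product. A sketch (encodings and test functions are ours):

```python
# Duality <f, lambda_p> = f(p) between K(G) and CG for the pair groupoid on X
from collections import defaultdict

X = [0, 1]
G = [(z, y) for z in X for y in X]
mul = lambda p, q: (p[0], q[1]) if p[1] == q[0] else None
lam = lambda p: {p: 1}

def conv(a, b):                      # product in CG
    c = defaultdict(int)
    for p in a:
        for q in b:
            if mul(p, q) is not None:
                c[mul(p, q)] += a[p] * b[q]
    return dict(c)

def pairing(f, a):                   # <f, a>, extended linearly from <f, lambda_p> = f(p)
    return sum(coef * f[p] for p, coef in a.items())

Delta = lambda f: {(p, q): (f[mul(p, q)] if mul(p, q) is not None else 0)
                   for p in G for q in G}

f = {p: 1 + 2 * p[0] + 5 * p[1] for p in G}
g = {p: 3 + p[0] * p[1] for p in G}
for p in G:
    for q in G:
        # the product of CG is dual to the coproduct of K(G)
        assert pairing(f, conv(lam(p), lam(q))) == Delta(f)[(p, q)]
    # Delta(lambda_p) = lambda_p (x) lambda_p is dual to the pointwise product
    assert pairing({r: f[r] * g[r] for r in G}, lam(p)) == f[p] * g[p]
```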
In this paper we focus on the study of the source and target algebras A_s and A_t for a general (mostly regular) weak multiplier Hopf algebra (A, ∆). We also study the source and target maps ε_s : A → M(A) and ε_t : A → M(A) (where M(A) is the multiplier algebra of A) and we are particularly interested in the images ε_s(A) and ε_t(A). They turn out to be two-sided ideals in A_s and A_t respectively. Moreover, the algebras A_s and A_t can be seen as the multiplier algebras of these ideals. Also in this paper, we construct certain examples of weak multiplier Hopf algebras. The constructions are known in the case of finite-dimensional weak Hopf algebras and it turns out that they can also be formulated in this more general framework. Of course, some care is needed because the coproduct does not map into the tensor product A ⊗ A, but into the multiplier algebra M(A ⊗ A) of this tensor product. In this sense, the (sub)title of this paper is somewhat misleading and too restrictive. In earlier papers on the subject, we hardly looked at examples other than the motivating ones coming from groupoids and weak Hopf algebras. In this paper we take advantage of the further development of the theory to consider closely related examples, using the results we obtain on the source and target algebras. In our forthcoming paper on the subject [VD-W3], where we treat integrals and duality, we will use these examples again and apply duality to get still other examples of weak multiplier Hopf algebras. Because we do not have to restrict to the finite-dimensional case, we get many more interesting examples that do not fit into the original theory of weak Hopf algebras.

Content of the paper
In Section 1 we recall some of the basic notions and results on (regular) weak multiplier Hopf algebras as studied in our first papers on the subject ([VD-W1] and ). In particular, we will explain some of the covering properties as this will be important for the rest of the paper. In the earlier papers on the subject, we already looked briefly at the source and target algebras A_s and A_t as well as the source and target maps ε_s and ε_t. In Section 2 we investigate these objects further. We recall the definitions and some of the basic properties that are found already in . Then we show that the images ε_s(A) and ε_t(A) of the source and target maps are two-sided ideals in the source and target algebras A_s and A_t respectively and that the algebras A_s and A_t can even be identified with the multiplier algebras M(ε_s(A)) and M(ε_t(A)) of these ideals. We show that the canonical idempotent E has all the properties of a separability idempotent (as studied in [VD4]). In Section 3 we study special cases and examples. We start again with the two examples associated to a groupoid. We will be very brief here as we include this mainly for completeness. These examples have been considered in earlier papers (see e.g. ).
Then we consider any regular weak multiplier Hopf algebra (A, ∆) and we associate to it a new weak multiplier Hopf algebra (P, ∆_P) where the underlying algebra P is ε_t(A) ⊗ ε_s(A) and the coproduct is given by the formula ∆_P(c ⊗ b) = c ⊗ E ⊗ b whenever b ∈ ε_s(A) and c ∈ ε_t(A), where E is the canonical multiplier in M(A ⊗ A). We also use this example further as a model for the construction of an abstract version of this case. Then we take any pair of non-degenerate algebras B and C and start with a so-called separability idempotent E in the multiplier algebra M(B ⊗ C). We take P = C ⊗ B and ∆_P as above. These two examples are 'quantizations' of the trivial groupoid G constructed from a set X by taking G = X × X with product (z, y)(y, x) = (z, x) when x, y, z ∈ X.
This groupoid in turn is related to the case of a groupoid G constructed from a (left) action of a group H on a set X. Now G consists of triples (y, h, x) where x, y ∈ X and h ∈ H and y = h ⊲ x, where ⊲ denotes the action. The product is given by (z, k, y)(y, h, x) = (z, kh, x). And finally, also this groupoid will be quantized (at least in a certain sense to be explained in this section).
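The action groupoid is easy to model concretely. The following sketch uses H = Z/3 acting on X = Z/3 by translation (an illustrative choice of group, set and action, written additively, not taken from the paper) and checks that the triples form a groupoid:

```python
# Action groupoid of a group H acting on a set X; here H = Z/3 acts on
# X = Z/3 by translation (illustrative choice, group written additively)
n = 3
X = list(range(n))
H = list(range(n))
act = lambda h, x: (x + h) % n          # the left action h |> x

# G consists of the triples (y, h, x) with y = h |> x
G = [(act(h, x), h, x) for h in H for x in X]

def mul(p, q):
    # (z, k, y)(y', h, x) is defined only when y' = y; then it is (z, kh, x)
    (z, k, y), (y2, h, x) = p, q
    return (z, (k + h) % n, x) if y == y2 else None

inv = lambda p: (p[2], (-p[1]) % n, p[0])
s = lambda p: (p[2], 0, p[2])           # unit sitting at x
t = lambda p: (p[0], 0, p[0])           # unit sitting at y

for p in G:
    assert inv(p) in G
    assert mul(inv(p), p) == s(p) and mul(p, inv(p)) == t(p)
for p in G:
    for q in G:
        pq = mul(p, q)
        if pq is not None:
            assert pq in G   # closure: z = k |> (h |> x) = (kh) |> x
```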
The starting point is again a pair of non-degenerate algebras B and C with a separability idempotent E in the multiplier algebra M(B ⊗ C). Moreover, there is a multiplier Hopf algebra Q that acts from the right on B and from the left on C in such a way that B is a right Q-module algebra and C a left Q-module algebra. These objects are related by the requirement that the right action of Q on B induces, via E, the left action of Q on C. See Section 3 for a more precise statement. The double smash product P is defined as the algebra generated by B, C and Q with B and C commuting, the commutation rules between B and Q determined by the right action of Q on B and the ones between C and Q determined by the left action of Q on C. It carries a natural coproduct making P into a weak multiplier Hopf algebra.
Finally, in Section 4 we draw some conclusions and discuss possible further research on this subject.
In a forthcoming paper on the subject, we treat integrals and duality and we consider more examples (cf. ).
The material studied in this paper is closely related with the theory of (regular) multiplier Hopf algebroids, as developed in , where the theory of weak multiplier Hopf algebras is treated within an algebroid framework. See also  for the relation between the two concepts.
We would also like to refer to the paper on weak multiplier bialgebras by Böhm, Gómez-Torrecillas and López-Centella (see [B-G-L]) where the notion of a weak multiplier bialgebra is developed. In this theory, the source and target maps, as well as the source and target algebras, play a crucial role. See also [K-VD] where a Larson-Sweedler type theorem is proven for these weak multiplier bialgebras.

Conventions and notations
We only work with algebras A over C (although we believe that this is not essential and that it is possible to obtain the same results for algebras over other, more general fields).
We do not assume that they are unital but we need that the product is non-degenerate. We also assume our algebras to be idempotent (that is A² = A). In fact, as we have seen already in , at least in the regular case, the algebras must have local units. Then of course, the product is automatically non-degenerate and the algebra is idempotent. When A is such an algebra, we use M(A) for the multiplier algebra of A. When m is in M(A), then by definition am and mb are in A for all a, b ∈ A and we have (am)b = a(mb). The algebra A sits in M(A) as an essential two-sided ideal and M(A) is the largest algebra with identity having this property. We consider A ⊗ A, the tensor product of A with itself. It is again an idempotent, non-degenerate algebra and we can consider the multiplier algebra M(A ⊗ A). The same is true for a multiple tensor product. We use ζ for the flip map on A ⊗ A, as well as for its natural extension to M(A ⊗ A). We use 1 for the identity in any of these multiplier algebras. On the other hand, we mostly use ι for the identity map on A (or other spaces), although sometimes we also write 1 for this map. The identity element in a group is denoted by e. If G is a groupoid, we will also use e for its units. Units are considered as being elements of the groupoid and we use s and t for the source and target maps from G to the set of units. When A is an algebra, we denote by A op the algebra obtained from A by reversing the product. When ∆ is a coproduct on A, we denote by ∆ cop the coproduct on A obtained by composing ∆ with the flip map ζ. For a coproduct ∆, as we define it in Definition 1.1 of [VD-W2], we assume that ∆(a)(1 ⊗ b) and (a ⊗ 1)∆(b) are in A ⊗ A for all a, b ∈ A. This allows us to make use of the Sweedler notation for the coproduct.
The reader who wants to have a deeper understanding of this, is referred to [VD3] where the use of the Sweedler notation for coproducts that do not map into the tensor product, but rather in its multiplier algebra is explained in detail.

Basic references
For the theory of Hopf algebras, we refer to the standard works of Abe [A] and Sweedler [S]. For multiplier Hopf algebras and integrals on multiplier Hopf algebras, we refer to [VD1] and [VD2]. Weak Hopf algebras have been studied in [B-N-S] and [B-S] and more results are found in [N] and [N-V1]. Various other references on the subject can be found in [Va]. In particular, we refer to [N-V2] because we will use notations and conventions from this paper when dealing with weak Hopf algebras. For the theory of groupoids, we refer to [Br], [H], [P] and [R].

Acknowledgments
The first named author (Alfons Van Daele) would like to express his thanks to his coauthor Shuanhong Wang for motivating him to start the research on weak multiplier Hopf algebras and for the hospitality when visiting the University of Nanjing in 2008 for the first time and later again in 2012 and 2014. The second named author (Shuanhong Wang) would like to thank his coauthor for his help and advice during his several visits to the Department of Mathematics of the University of Leuven in Belgium over the past years. This work is partially supported by the NSF of China (No 11371088) and the NSF of Jiangsu Province, China (No. BK 2012736).

Preliminaries on weak multiplier Hopf algebras
Let (A, ∆) be a weak multiplier Hopf algebra as in Definition 1.14 of . For most of this paper, we will need regularity. However, in this section, we will recall the notions and results, not only for regular weak multiplier Hopf algebras but also for the non-regular case.
A is an algebra over C, with or without identity, but with a product that is non-degenerate (as a bilinear map). The algebra is also idempotent in the sense that A = A², meaning that any element of A is a sum of products of elements of A. In fact, we have seen that in the regular case, the algebra automatically has local units (see Proposition 4.9 in ). This is needed for some results about duality that we prove in . Remark that for an algebra with local units, the product is automatically non-degenerate and the algebra is idempotent. There is a coproduct ∆ on A. It is a homomorphism from A to the multiplier algebra M(A ⊗ A) of the tensor product A ⊗ A of A with itself. It is not assumed that it is non-degenerate (see further). The canonical maps T1, T2, T3 and T4 are the linear maps defined on A ⊗ A by
T1(a ⊗ b) = ∆(a)(1 ⊗ b), T2(a ⊗ b) = (a ⊗ 1)∆(b),
T3(a ⊗ b) = (1 ⊗ b)∆(a), T4(a ⊗ b) = ∆(b)(a ⊗ 1).
In general, it is assumed that T1 and T2 have range in A ⊗ A. If also T3 and T4 map into A ⊗ A, then the coproduct is called regular. The coproduct is assumed to be full. This means that the smallest subspaces V and W of A satisfying T1(A ⊗ A) ⊆ V ⊗ A and T2(A ⊗ A) ⊆ A ⊗ W are V = A and W = A. If the coproduct is regular, then a similar property will also be true for the maps T3 and T4 and so both the flipped coproduct ∆ cop on A and the original coproduct on A op will also be full coproducts. Furthermore, it is assumed that there is a counit. This is a linear map ε : A → C satisfying
(ε ⊗ ι)(∆(a)(1 ⊗ b)) = ab and (ι ⊗ ε)((a ⊗ 1)∆(b)) = ab
for all a, b in A. Similar formulas will be true for the other canonical maps in the case of a regular coproduct. Because the coproduct is assumed to be full, this counit is unique. Remark however that in general, it is not a homomorphism. It is also not always possible to construct a counit, even given that the coproduct is full. Therefore, the existence of the counit is part of the axioms for weak multiplier Hopf algebras.
There is an idempotent element E in M(A ⊗ A), called the canonical idempotent, giving the ranges of the canonical maps T1 and T2 as
T1(A ⊗ A) = E(A ⊗ A) and T2(A ⊗ A) = (A ⊗ A)E.
If the weak multiplier Hopf algebra is regular, we also have these properties for the ranges of the canonical maps T3 and T4. So in that case, we also have
T3(A ⊗ A) = (A ⊗ A)E and T4(A ⊗ A) = E(A ⊗ A)
with the same idempotent. This element is uniquely determined and it satisfies
∆(a) = E∆(a) = ∆(a)E
for all a ∈ A.
We see that the coproduct is degenerate if E is strictly smaller than 1. However, the coproduct can still be extended in a unique way to a homomorphism from M(A) to M(A ⊗ A) (again denoted by ∆) provided we assume ∆(1) = E. Similarly, the homomorphisms ∆ ⊗ ι and ι ⊗ ∆ have unique extensions to M(A ⊗ A) such that, again using the same symbols for these extensions, we have
(∆ ⊗ ι)∆(a) = (ι ⊗ ∆)∆(a)
for all a ∈ A. We also have (∆ ⊗ ι)(E) = (ι ⊗ ∆)(E) and
(∆ ⊗ ι)(E) = (E ⊗ 1)(1 ⊗ E) = (1 ⊗ E)(E ⊗ 1).
The last equality means, in a sense that can be made precise, that the left and the right legs of E commute.
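In the groupoid function algebra from the introduction, these compatibility properties of E reduce to associativity statements about when products are defined, and can be checked by hand. A sketch in the pair-groupoid model (the encoding is ours):

```python
# Check (Delta x iota)(E) = (iota x Delta)(E) = (E x 1)(1 x E) pointwise
# for the canonical idempotent of the pair groupoid function algebra
X = [0, 1]
G = [(z, y) for z in X for y in X]
mul = lambda p, q: (p[0], q[1]) if p[1] == q[0] else None
defined = lambda p, q: mul(p, q) is not None
E = lambda p, q: 1 if defined(p, q) else 0

for p in G:
    for q in G:
        for r in G:
            lhs = E(mul(p, q), r) if defined(p, q) else 0   # (Delta x iota)(E)
            rhs = E(p, mul(q, r)) if defined(q, r) else 0   # (iota x Delta)(E)
            prod = E(p, q) * E(q, r)                        # (E x 1)(1 x E)
            assert lhs == rhs == prod
```

The common value is 1 exactly when both pq and qr are defined, which is the pointwise form of (E ⊗ 1)(1 ⊗ E).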
There is a unique antipode S. It is a linear map from A to the multiplier algebra M(A). It is an anti-algebra map in the sense that S(ab) = S(b)S(a) for all a, b ∈ A and it is an anti-coalgebra map meaning that ∆(S(a)) = ζ(S ⊗ S)∆(a) for all a ∈ A (in an appropriate sense - see e.g. Proposition 3.7 and more comments in [VD-W4] for a correct formulation). Recall that we use ζ for the flip map. Moreover, the antipode satisfies the formulas
∆(a(1))(1 ⊗ S(a(2))b) = E(a ⊗ b) (1.1)
(bS(a(1)) ⊗ 1)∆(a(2)) = (b ⊗ a)E (1.2)
for all a, b in A. One has to multiply with an element of A, left or right, in order to be able to use the Sweedler notation, and so strictly speaking, the formulas without this extra element hold in M(A ⊗ A) (see also the remark below).
We have the equalities
a(1)S(a(2))a(3) = a (1.3)
S(a(1))a(2)S(a(3)) = S(a) (1.4)
for all a. These equations are equivalent with the statement that the canonical maps T1 and T2 have generalized inverses R1 and R2, given by R1(a ⊗ b) = a(1) ⊗ S(a(2))b and R2(a ⊗ b) = aS(b(1)) ⊗ b(2). In the regular case, we have that the antipode maps A to itself and is bijective. In fact, this property of the antipode characterizes the regular weak multiplier Hopf algebras. In that case, we have the following counterparts of the formulas (1.1) and (1.2) above. We have
∆(a(2))(S⁻¹(a(1)) ⊗ 1) = E(1 ⊗ a) (1.5)
(1 ⊗ S⁻¹(a(2)))∆(a(1)) = (a ⊗ 1)E (1.6)
for all a. Again these formulas can also be written as
∆(a(2))(S⁻¹(a(1))b ⊗ 1) = E(b ⊗ a) and (1 ⊗ bS⁻¹(a(2)))∆(a(1)) = (a ⊗ b)E
for all a, b. We now make an important remark about the covering of the previous formulas.
1.1 Remark i) First rewrite the (images of the) canonical maps T1 and T2, and of T3 and T4 in the regular case, using the Sweedler notation, as
∆(a)(1 ⊗ b) = a(1) ⊗ a(2)b, (b ⊗ 1)∆(a) = ba(1) ⊗ a(2), (1.9)
∆(a)(b ⊗ 1) = a(1)b ⊗ a(2), (1 ⊗ b)∆(a) = a(1) ⊗ ba(2), (1.10)
where a, b ∈ A. Remark that we have interchanged the symbols a and b in two of the expressions. In all these four expressions, either a(1) or a(2) is covered by b. This is by the assumption put on the coproduct by requiring that the canonical maps have range in A ⊗ A. ii) Next consider the expressions
a(1) ⊗ S(a(2))b and bS(a(1)) ⊗ a(2) (1.11)
a(1) ⊗ bS(a(2)) and S(a(1))b ⊗ a(2) (1.12)
where a, b ∈ A. In the first two formulas (1.11), we have a covering by the assumption that the generalized inverses R1 and R2 of the canonical maps exist as maps on A ⊗ A with range in A ⊗ A (see ). In the second pair of formulas (1.12), we have a good covering only in the regular case. It follows by considering the expressions in (1.9) and using that S is a bijective anti-algebra map from A to itself. In the regular case, we can also consider the above expressions with S replaced by S⁻¹. iii) If on the one hand, we first apply S in the first or the second factor of the expressions in (1.9) and multiply, and if on the other hand we simply apply multiplication on the expressions in (1.11), we get the four elements
S(a(1))a(2)b, ba(1)S(a(2)), a(1)S(a(2))b and bS(a(1))a(2)
in A for all a, b ∈ A. This is used to define the source and target maps in the next section (see Proposition 2.3 in the next section). iv) Now, we combine the coverings obtained in i) and ii). Consider e.g. the two expressions
∆(a(1))(1 ⊗ S(a(2))b) (1.13)
(bS(a(1)) ⊗ 1)∆(a(2)) (1.14)
where a, b ∈ A. The first expression (1.13) is obtained by applying the canonical map T1 to the first of the two expressions in (1.11). So this gives an element in A ⊗ A and we know that it is E(a ⊗ b) as we can see from the formula (1.1). Similarly, the second expression (1.14) is obtained by applying the canonical map T2 to the second of the two expressions in (1.11). We know that this is (b ⊗ a)E as we see from the formula (1.2) above.
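In the pair-groupoid function model, the canonical map T1 and its generalized inverse R1 take the concrete forms T1(F)(p, q) = F(pq, q) and R1(F)(p, q) = F(pq⁻¹, q) (with value 0 where the product is undefined), and the relations with E can be verified mechanically. A sketch under these model-specific assumptions:

```python
# The canonical map T1 and its generalized inverse R1 in the function model
# of the pair groupoid: T1(F)(p, q) = F(pq, q) and R1(F)(p, q) = F(p q^{-1}, q)
# (with value 0 when the relevant product is undefined)
X = [0, 1]
G = [(z, y) for z in X for y in X]
mul = lambda p, q: (p[0], q[1]) if p[1] == q[0] else None
inv = lambda p: (p[1], p[0])

def T1(F):
    return {(p, q): (F[(mul(p, q), q)] if mul(p, q) is not None else 0)
            for p in G for q in G}

def R1(F):
    return {(p, q): (F[(mul(p, inv(q)), q)] if mul(p, inv(q)) is not None else 0)
            for p in G for q in G}

F = {(p, q): 1 + 2 * p[0] + 3 * p[1] + 5 * q[0] + 7 * q[1] for p in G for q in G}
E = {(p, q): (1 if mul(p, q) is not None else 0) for p in G for q in G}
F1 = {(p, q): (1 if p[1] == q[1] else 0) for p in G for q in G}  # s(p) = s(q)

assert T1(R1(F)) == {k: E[k] * F[k] for k in F}    # T1 R1 is multiplication by E
assert R1(T1(F)) == {k: F1[k] * F[k] for k in F}   # R1 T1 is another idempotent
assert T1(R1(T1(F))) == T1(F)                      # T1 R1 T1 = T1
assert R1(T1(R1(F))) == R1(F)                      # R1 T1 R1 = R1
```

The last two assertions are the generalized-inverse property; the first shows how the range projection of T1 is given by multiplication with E.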
Remark that E(a ⊗ b) and (b ⊗ a)E belong to A ⊗ A because by assumption E ∈ M (A ⊗ A), but that on the other hand, it is not obvious (as we see from the above arguments) that the expressions that we obtain for these elements belong to A ⊗ A.
v) Finally, as a consequence of the above statements, also the four expressions obtained by applying S ⊗ S to the expressions in (1.9) and (1.10) are properly covered. This justifies a statement made earlier about the anti-coalgebra property of the antipode.
In the regular case, we also have many other nice formulas (see Section 4 in [VD-W2]). One of them is (S ⊗ S)E = ζE (as expected because E = ∆(1)). Other formulas that we will use will be recalled later. In any case, they are all found in [VD-W2] and we refer to this paper for details.

The source and target algebras
As in the previous section we consider a weak multiplier Hopf algebra (A, ∆). Some of our results are also valid in the non-regular case, but mostly we will need to assume that (A, ∆) is regular. We will first give the definition of the source and target algebras. They have not yet been defined explicitly in . In [VD-W2] we only considered the source and target maps and their images as far as we needed this to prove results about the antipode in Section 3 and in the appendix of [VD-W2]. We will recall some of these results and view them in this broader context.
We consider the canonical idempotent E in M (A ⊗ A) as reviewed in the previous section and we use that the coproduct ∆ can be extended to the multiplier algebra as we have mentioned earlier.
Then the source and target algebras are defined as follows.

Definition Define
A_s = {y ∈ M(A) | ∆(y) = E(1 ⊗ y)} and A_t = {x ∈ M(A) | ∆(x) = (x ⊗ 1)E}
where ∆ is extended to M(A) as explained above.
In this paper, we will systematically use the letter x for an element in A_t and the letter y for an element in A_s. For elements in A itself, we keep using a, b, c. This convention is different from what is sometimes done in other papers on this subject (e.g. ).
We have a couple of remarks about this definition.

Remarks i) If y ∈ M(A), then y ∈ A_s if and only if ∆(ya) = (1 ⊗ y)∆(a) for all a ∈ A. Similarly, x ∈ M(A) belongs to A_t if and only if ∆(xa) = (x ⊗ 1)∆(a) for all a ∈ A. Strictly speaking, one still has to multiply with an element b ∈ A in the appropriate factor for these formulas to make sense. This means that we can define these spaces without reference to E. We only need the coproduct.
ii) Also observe that the formulas above can be written as
∆(ay)(1 ⊗ b) = ∆(a)(1 ⊗ yb) and (b ⊗ 1)∆(xa) = (bx ⊗ 1)∆(a)
for a, b ∈ A and y in A_s and x in A_t. This is the underlying idea when we consider the canonical maps on balanced tensor products in .
iii) These observations justify the choices we have made when defining A_s and A_t as above. We will see later (cf. a remark following the proof of Proposition 2.6 below) that we also have ∆(y) = (1 ⊗ y)E for y ∈ A_s and ∆(x) = E(x ⊗ 1) for x ∈ A_t, but it would not be so natural to use these formulas to define A_s and A_t.
iv) It is immediately clear from these observations (and in fact already from the defining formulas) that the spaces A_s and A_t are subalgebras. Remember however that they are subalgebras of the multiplier algebra M(A) and that in general, the intersection with A itself may be trivial. This is the case already for a multiplier Hopf algebra with a non-unital underlying algebra.
v) Finally, it follows from the fact that S flips the product and the coproduct that, again in the regular case, S (after being extended to M(A)) will be a bijective map from A_s to A_t and from A_t to A_s. In general however, these maps are not each other's inverses.
In the case where the algebra has an identity, so that actually M(A) = A, and where E ∈ A ⊗ A (as for weak Hopf algebras), we can apply the counit to the defining properties and then we find that A_s and A_t will be contained in the left, respectively the right leg of E. Later in this section we will say more about this and obtain results also in the general case (see a remark following Proposition 2.5 below).
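In the groupoid function model, membership in A_t can be tested concretely via the coproduct-only criterion ∆(xa) = (x ⊗ 1)∆(a) (a reformulation consistent with the remarks above; the dict encoding and test functions are our own choices):

```python
# Membership test for A_t in the function model: for x = eps_t(f) one has
# Delta(x a) = (x (x) 1) Delta(a), checked pointwise on G x G
X = [0, 1, 2]
G = [(z, y) for z in X for y in X]
mul = lambda p, q: (p[0], q[1]) if p[1] == q[0] else None
inv = lambda p: (p[1], p[0])
Delta = lambda f: {(p, q): (f[mul(p, q)] if mul(p, q) is not None else 0)
                   for p in G for q in G}
eps_t = lambda f: {p: f[mul(p, inv(p))] for p in G}

f = {p: 2 * p[0] + p[1] for p in G}
a = {p: 1 + p[0] * p[1] for p in G}
x = eps_t(f)                          # a typical element of A_t

xa = {p: x[p] * a[p] for p in G}
Da = Delta(a)
assert Delta(xa) == {(p, q): x[p] * Da[(p, q)] for p in G for q in G}
```

The key point is that x(pq) = x(p) whenever pq is defined, because x only depends on the target.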
We now recall the source and target maps ε s : A → A s and ε t : A → A t and prove the first properties. The source and target maps have been introduced in Definition 3.1 in  already and a few properties have been proven. However, we need some more results.
The following result is very important and the proof has been given already in Lemma 3.3 in [VD-W2]. We include it here for convenience of the reader.

Proposition
For a ∈ A, define ε_s(a) = S(a(1))a(2) and ε_t(a) = a(1)S(a(2)), where S is the antipode. Then ε_s(a) ∈ A_s and ε_t(a) ∈ A_t for all a ∈ A.
Proof: First remark that ε_s(a) and ε_t(a) are well-defined in M(A) as we have explained in detail in Section 1 (Remark 1.1 iii)). If we use formula (1.1) of the previous section and the definition of ε_t, we find
E(a ⊗ b) = Σ a(1) ⊗ ε_t(a(2))b
for all a, b ∈ A. We now apply ι ⊗ ∆ to this equality. Because (ι ⊗ ∆)E = (E ⊗ 1)(1 ⊗ E) we get, using the above formula once more, that
Σ a(1) ⊗ ∆(ε_t(a(2))b) = Σ a(1) ⊗ ε_t(a(2))b(1) ⊗ b(2).
We can cover this formula if we multiply with an element of A from the right in the last factor. Then it is allowed to apply the counit on the first factor. Alternatively, we could cover extra by multiplication with an element in the first factor and then apply fullness of the coproduct. In both cases, we finally get
∆(ε_t(a)b) = (ε_t(a) ⊗ 1)∆(b)
for all a, b ∈ A, so that ε_t(a) ∈ A_t for all a. If we start instead from the equality (b ⊗ a)E = Σ bε_s(a(1)) ⊗ a(2) for all b ∈ A (see formula (1.2) in the previous section), we will find that ε_s(a) ∈ A_s for all a.
Let us make another remark about covering. As a consequence of the covering results discussed in Remark 1.1, we find that in ε_s(a(2))b the element b covers a(2) while in bε_s(a(1)) the element b covers a(1). Similarly, a(2) will be covered by b in ε_t(a(2))b whereas a(1) will be covered by b in bε_t(a(1)). This property is used in the proof above in order to be able to apply the counit at a certain point.
In the regular case, we can use that S flips the product and the coproduct and we get that also ε_s(S(a)) = S(ε_t(a)) and similarly ε_t(S(a)) = S(ε_s(a)). See also Proposition A.4 in the appendix of  where this result is obtained by using that S is bijective.
In the next section, we will need the following relation with the counit.

Proposition For all a, b ∈ A we have ε(ab) = ε(ε_s(a)b) and ε(ab) = ε(aε_t(b)).
Proof: From formula (1.4) in the previous section we get, for all a, b, c, an equality in which a(1) is covered by c in the right hand side, so that we can apply the counit on the second factor. In the resulting formula, we can cancel c (because the coproduct is regular) and then we can apply ε once more to arrive at ε(ab) = ε(ε_s(a)b) for all a, b. The other formula is proved when we start from the equation (1.3) (with a and b interchanged).
The formulas make sense also in the non-regular case, but it seems that for making this argument precise, we need regularity of the coproduct.
The subalgebras ε_s(A) and ε_t(A)
We know from the examples that it can happen that ε_s(A) and ε_t(A) are proper subsets of A_s and A_t respectively. We always have the identity in A_s and A_t, but it need not be in ε_s(A) and ε_t(A), cf. the examples in the introduction of this paper. On the other hand, as we will see from the various properties that we will now prove, these subsets are large enough (and also good enough) for our purposes.
Here is the first of these properties (see also the proof of Lemma 3.4 in [VD-W2]).

Proposition
For all a ∈ A and y ∈ A_s we have ε_s(ay) = ε_s(a)y. Therefore ε_s(A) is a right ideal in A_s. Similarly, for all a ∈ A and x ∈ A_t, we have ε_t(xa) = xε_t(a) and so ε_t(A) is a left ideal in A_t. In particular, these sets are subalgebras.
Proof: Take a ∈ A and y ∈ A_s. Then ∆(ay)(1 ⊗ b) = ∆(a)(1 ⊗ yb) for all b ∈ A, and if we apply m(S ⊗ ι) (where m is multiplication), we find ε_s(ay)b = ε_s(a)yb for all b. This proves the first part of the proposition. The second part is proven in a completely similar way.
Later we will show that (in the regular case) ε_s(A) is actually a two-sided ideal in A_s and that ε_t(A) is a two-sided ideal in A_t (see Proposition 2.8 below).
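The formula ε_s(ay) = ε_s(a)y can be checked in the groupoid function model, where elements of A_s are the functions depending only on the source. A sketch (encodings and test functions are our own choices):

```python
# eps_s(a y) = eps_s(a) y in the function model of the pair groupoid,
# with y = eps_s(g) an element of (the image of) the source map
X = [0, 1, 2]
G = [(z, y) for z in X for y in X]
mul = lambda p, q: (p[0], q[1]) if p[1] == q[0] else None
inv = lambda p: (p[1], p[0])
eps_s = lambda f: {p: f[mul(inv(p), p)] for p in G}

a = {p: 1 + 2 * p[0] + 3 * p[1] for p in G}
g = {p: 4 + p[0] * p[1] for p in G}
y = eps_s(g)                                  # y lies in A_s

ay = {p: a[p] * y[p] for p in G}
es_a = eps_s(a)
assert eps_s(ay) == {p: es_a[p] * y[p] for p in G}
```

The verification rests on the identity s(p⁻¹p) = s(p), so y(p⁻¹p) = y(p) for any y depending only on the source.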
As we have seen already in Lemma 3.2 of [VD-W2], we can consider the subalgebras ε_s(A) and ε_t(A) as the left and the right leg of E respectively, in the following sense. Let a, b ∈ A and ω ∈ A′. Then
(ω ⊗ ι)(E(a ⊗ b)) = Σ ω(a(1)) ε_t(a(2))b
and by the fullness of the coproduct, we find that ε_t(A) is the span of elements of the form Σ ω(a(1))ε_t(a(2)). Similarly, ε_s(A) is the span of the elements obtained by applying ι ⊗ ω to the elements (b ⊗ a)E. This implies that ε_t(A) and ε_s(A) commute as we know that E ⊗ 1 and 1 ⊗ E commute.
If we combine this with the previous results, we get more.

Proposition
The algebras A_s and A_t commute.
Proof: Take y ∈ A_s and a, b ∈ A. Then
ε_s(a)yε_t(b) = ε_s(ay)ε_t(b) = ε_t(b)ε_s(ay) = ε_s(a)ε_t(b)y,
using Proposition 2.5 and the fact that ε_s(A) and ε_t(A) commute. As this is true for all a ∈ A, we can replace a by a(2) and multiply with a(1) in this formula and get that ayε_t(b) = aε_t(b)y. Now we can cancel a and arrive at yε_t(b) = ε_t(b)y. This holds whenever y ∈ A_s and b ∈ A. A similar argument will then give that also xy = yx if x ∈ A_t and y ∈ A_s. Indeed, for all a ∈ A we get
yxε_t(a) = yε_t(xa) = ε_t(xa)y = xε_t(a)y = xyε_t(a).
Now apply this to a(1) and multiply from the right with a(2). Then we get yxa = xya for all a and now we can cancel a.
In the proof above, we essentially already use some of the module properties that we will obtain later in this section (see Proposition 2.9). As a consequence, we also have that (1 ⊗ y)E = E(1 ⊗ y) when y ∈ A_s and that (x ⊗ 1)E = E(x ⊗ 1) when x ∈ A_t. This result is not unexpected when we have a look at the defining formulas in Definition 2.1. It also has some nice consequences.

Proposition
For all a ∈ A and y ∈ A_s we have ε_t(ya) = ε_t(a)S(y). Similarly, for all a ∈ A and x ∈ A_t, we have ε_s(ax) = S(x)ε_s(a).
Proof: Take a ∈ A and y ∈ A_s. Then (b ⊗ 1)∆(ya) = (b ⊗ 1)(1 ⊗ y)∆(a) for all b ∈ A, and if we apply m(ι ⊗ S) we find bε_t(ya) = bε_t(a)S(y) for all b. This proves the first formula. The second formula is proven in a completely similar way.
Using techniques similar to those in the proof above and in that of Proposition 2.5, we find other formulas of this type. We do not include them here as we will not need them.
If we combine the formulas of Proposition 2.7 with the ones obtained earlier in Proposition 2.5, we arrive at the following result.

Proposition
In the case of a regular weak multiplier Hopf algebra, we have that ε_s(A) and ε_t(A) are two-sided ideals in A_s and A_t respectively.
Proof: Let a ∈ A. In Proposition 2.5 we have seen that ε_s(a)y ∈ ε_s(A) for all y ∈ A_s while in Proposition 2.7 we find that S(x)ε_s(a) ∈ ε_s(A) for all x ∈ A_t. Because in a regular weak multiplier Hopf algebra the antipode (extended to M(A)) is a bijective map from A_t to A_s, it follows that ε_s(A) is a two-sided ideal in A_s. Similarly, ε_t(A) is a two-sided ideal in A_t.
We will now prove some module properties giving more information about the algebras ε_s(A) and ε_t(A) and how they sit as two-sided ideals in A_s and A_t respectively. First we have the following result (compare with Proposition 3.6 in [VD-W2]). Remark that we do not need regularity even for these results. They say that A, as an ε_s(A)-bimodule and as an ε_t(A)-bimodule, is unital. Remark that this is not a trivial statement because the algebras ε_s(A) and ε_t(A) are not necessarily unital.

Proposition We have ε_s(A)A = A = Aε_s(A) and ε_t(A)A = A = Aε_t(A).
If we combine the above result with the property in Proposition 2.5, we can conclude that the algebras ε_s(A) and ε_t(A) are idempotent. Indeed, for all a, b we have e.g. ε_s(aε_s(b)) = ε_s(a)ε_s(b). Similarly for ε_t(A).
In the next proposition, we show that these bimodules are also non-degenerate. First consider the following general lemma.

Lemma Let A be a non-degenerate algebra and let B be a subalgebra of M(A) such that BA = A and AB = A. Then B is a non-degenerate algebra and
M(B) = {x ∈ M(A) | xb ∈ B and bx ∈ B for all b ∈ B}. (2.1)

Proof: As we have seen before, in the proof of the previous proposition, we get in this case that A is a non-degenerate B-bimodule. We claim that B is a non-degenerate subalgebra of M(A). To show this, assume that b ∈ B and that bc = 0 for all c ∈ B. Multiply with an element a ∈ A from the right and use that BA = A. This implies that ba = 0 for all a ∈ A. Then b = 0. Similarly on the other side. So the algebra B is non-degenerate and we can consider its multiplier algebra M(B). The imbedding of B in M(A) extends to M(B) and it is not hard to show that this extension is still an imbedding. Because obviously for any x ∈ M(B) we have xb ∈ B and bx ∈ B for all b ∈ B, we find one inclusion of the statement (2.1). The other inclusion is proven by using again that the B-bimodule A is unital.
We can now apply this lemma.

Proposition We have A_s = M(ε_s(A)) and A_t = M(ε_t(A)).

Proof: Observe that we can apply the lemma to the two algebras ε_s(A) and ε_t(A) because of the results in Proposition 2.9. As a first consequence, we already get that these algebras are non-degenerate. They are also idempotent (see a remark following the proof of Proposition 2.9). Furthermore, we have seen in Proposition 2.8 that the algebras ε_s(A) and ε_t(A) are two-sided ideals in the algebras A_s and A_t respectively. This proves one of the inclusions of (2.1) in the lemma for these two cases. Conversely, assume e.g. that y ∈ M(A) and that yb and by belong to ε_s(A) for all b ∈ ε_s(A). Then
∆(yba) = (1 ⊗ yb)∆(a) = (1 ⊗ y)∆(ba)
for all a ∈ A and b ∈ ε_s(A), and because ε_s(A)A = A, we find ∆(ya) = (1 ⊗ y)∆(a) for all a ∈ A so that y ∈ A_s. Similarly for the other case.
We will now use the above results to prove some important properties of the idempotent E.
2.13 Proposition Assume that (A, ∆) is regular. For any y ∈ A_s and x ∈ A_t we have

E(y ⊗ 1) = E(1 ⊗ S(y)) and (1 ⊗ x)E = (S(x) ⊗ 1)E. (2.2)

Moreover, for all y ∈ ε_s(A) and x ∈ ε_t(A), all four elements E(y ⊗ 1), (y ⊗ 1)E, E(1 ⊗ x) and (1 ⊗ x)E belong to ε_s(A) ⊗ ε_t(A).

Proof: Take y ∈ A_s and a ∈ A. Then, using formula (1.1) combined with the fact that ∆(ya) = (1 ⊗ y)∆(a), we find

E(ya ⊗ 1) = ∆(a_(1))(1 ⊗ S(a_(2))S(y)) = E(a ⊗ S(y)).

If we cancel a, we find the first formula in (2.2). In a similar way, we find the other formula in (2.2).

Because (A, ∆) is now assumed to be regular, we can apply formula (1.6) and obtain that, for all a, b ∈ A, the right hand side of the resulting equality belongs to A ⊗ S^{-1}(ε_s(A)), and this is the same as A ⊗ ε_t(A). Because the left leg of E belongs to A_s, it then follows from Proposition 2.5 that (a ⊗ 1)E ∈ A ⊗ ε_t(A) for all a ∈ A. In a similar way, we can prove that E(1 ⊗ ε_t(A)) ⊆ ε_s(A) ⊗ ε_t(A). If we combine this with the formulas in (2.2) and use that S maps ε_s(A) to ε_t(A) and vice versa, we can complete the proof.
It follows from the above results that E is a separability idempotent in the multiplier algebra M(ε_s(A) ⊗ ε_t(A)), as discussed in [VD4]. For this we need that E is full, in the sense that the legs of E are precisely the subalgebras ε_s(A) and ε_t(A) in an appropriate sense, but this is what we have seen right after the proof of Proposition 2.5.
We can also use some of the arguments above to obtain the existence of local units in A for any regular weak multiplier Hopf algebra, as we have shown already in Proposition 4.9 in [VD-W2]. A similar argument shows that local units exist in the case of any separability idempotent E in M(B ⊗ C) for non-degenerate algebras B and C. This is done in [VD4]. Of course, this general result also applies here and implies the existence of local units in a regular weak multiplier Hopf algebra. We include a proof for completeness and to illustrate these statements.
2.14 Proposition If (A, ∆) is a regular weak multiplier Hopf algebra, then the underlying algebra A has local units.
Proof: i) First we show that aε s (A) ⊆ aA for all a ∈ A. To prove this, assume that a is given and that ω is a linear functional on A so that ω(ab) = 0 for all b ∈ A.
Then for any b, c ∈ A we have ω(aS(b)c) = 0. Because S(b_(1)) ⊗ b_(2) is an element of A ⊗ A, we can replace b by b_(1) and c by b_(2) in the above formula and obtain that ω(aε_s(b)) = 0, using that ε_s(b) = S(b_(1))b_(2). This implies that aε_s(b) ∈ aA for all b. That proves the first claim.

ii) Next we show that a ∈ aε_s(A) for all a ∈ A. To show this, assume again that a is given and now that ω is a linear functional on A so that ω(ay) = 0 for all y ∈ ε_s(A). Then, because the left leg of E is ε_s(A), we have that (ω ⊗ ι)((a ⊗ 1)E(y ⊗ 1)) = 0 for all y ∈ ε_s(A). We have seen in the proof of Proposition 2.13 above that (a ⊗ 1)E ∈ A ⊗ ε_t(A). So we can write (a ⊗ 1)E = Σ_i p_i ⊗ q_i where p_i ∈ A, q_i ∈ ε_t(A) and where the q_i are linearly independent. Then we find that ω(p_i y) = 0 for all i and all y ∈ ε_s(A), and if we replace y by S(q_i), we get ω(aE_(1)S(E_(2))) = 0 (using the Sweedler type notation E = E_(1) ⊗ E_(2)). However, from Equation (1.6) in the previous section, we see that E_(1)S(E_(2)) = 1. Therefore we have that ω(a) = 0 and this proves the second claim.
iii) If we combine the two results, we see that a ∈ aA for all a. In a similar way (or by applying the antipode), we find that also a ∈ Aa for all a. We know from [V] that this implies that A has local units.
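To make the notion of local units concrete, here is a small Python sketch (our own toy illustration, not taken from the paper). We model the non-unital algebra of finitely supported complex functions on an infinite set, with pointwise product, and exhibit a local unit for a finite family of elements; the encoding of functions as dictionaries is an assumption of this sketch.

```python
# Toy illustration (not from the paper): the algebra A of finitely
# supported functions on an infinite set, with pointwise product, has no
# unit, but it has local units: for any finite family a_1, ..., a_n there
# is an idempotent e in A with e a_i = a_i e = a_i for all i.

def mult(a, b):
    """Pointwise product of two finitely supported functions (dicts)."""
    return {x: a[x] * b[x] for x in a if x in b and a[x] * b[x] != 0}

def local_unit(*elements):
    """Indicator function of the union of the supports: a local unit."""
    support = set().union(*(a.keys() for a in elements))
    return {x: 1 for x in support}

a1 = {0: 2.0, 5: -1.0}
a2 = {5: 3.0, 7: 4.0}
e = local_unit(a1, a2)

assert mult(e, e) == e                             # e is an idempotent
assert mult(e, a1) == a1 and mult(a1, e) == a1     # e acts as a unit on a1
assert mult(e, a2) == a2 and mult(a2, e) == a2     # and on a2
```

This is of course the easy commutative case; the point of Proposition 2.14 is that the same conclusion holds for any regular weak multiplier Hopf algebra.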

Examples and special cases
In this section we will treat some examples and special cases. The main purpose is to illustrate results in Section 2 about the source and target algebras. However we will also use some of the examples for the illustration of the general theory of weak multiplier Hopf algebras because this has not yet been done in the earlier papers we wrote on the subject.

The groupoid examples
For completeness we begin with a very brief review of the two basic motivating examples associated with a groupoid. We will not give details as they can be found in our earlier papers on the subject (see [VD-W1] and [VD-W2]). On the other hand, we use these examples to illustrate some of the statements we made earlier in this paper, as well as for some other examples further in this section.

Example i) Consider a groupoid G.
First there is the algebra A, defined as the space K(G) of complex functions on G with finite support and pointwise product. Recall that the coproduct ∆ on K(G) is defined by (∆(f))(p, q) = f(pq) whenever pq is defined and (∆(f))(p, q) = 0 otherwise. The pair (A, ∆) is a regular weak multiplier Hopf algebra (in the sense of Definitions 1.14 and 4.1 in [VD-W2]). The canonical idempotent E in M(A ⊗ A) is the function on pairs (p, q) in G × G that is 1 if pq is defined and 0 if this is not the case. The antipode S is defined by (S(f))(p) = f(p^{-1}) whenever f ∈ K(G) and p ∈ G.
In this example, the source algebra A s is the algebra of all complex functions on G so that f (p) = f (q) whenever p, q ∈ G satisfy s(p) = s(q). It is naturally identified with the algebra of all complex functions on the set G 0 of units in G. The source map ε s from A to A s is defined by (ε s (f ))(p) = f (p −1 p) whenever p ∈ G and f ∈ K(G). The image of the source map is identified with the algebra of complex functions with finite support on the units. The target algebra A t consists of functions f on G so that f (p) = f (q) if t(p) = t(q) for p, q ∈ G. It is also identified with the space of all complex functions on the units. The target map ε t from A to A t is defined by (ε t (f ))(p) = f (pp −1 ) for all p and f ∈ K(G). The image is again identified with the space of functions with finite support on the units. Recall that these two algebras are subalgebras of the multiplier algebra M (A) (here the algebra of all complex functions on G). Observe also that the ranges ε s (A) and ε t (A) of the source and target maps can be strictly smaller than the source and target algebras A s and A t respectively. This happens when the set of units is infinite. In that case, we see that A s is the multiplier algebra of ε s (A) and similarly for the target.
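The structure maps of this first example can be checked by direct computation. The following Python sketch (our own toy illustration, not taken from the paper) encodes K(G) for the simplest case of a pair groupoid G = X × X on a three-point set; the dictionary encoding of functions and the choice of X are assumptions of this sketch.

```python
# A minimal sketch (not from the paper): the weak multiplier Hopf algebra
# K(G) for the pair groupoid G = X x X on a finite set X.  An element
# (x, u) has source (u, u), target (x, x), inverse (u, x), and the product
# (x, u)(v, y) is defined only when u == v, giving (x, y).

X = [0, 1, 2]
G = [(x, u) for x in X for u in X]

def product(p, q):              # partial product: None if not composable
    (x, u), (v, y) = p, q
    return (x, y) if u == v else None

def inverse(p):
    x, u = p
    return (u, x)

def eps_s(f):                   # (eps_s(f))(p) = f(p^{-1} p)
    return {p: f.get(product(inverse(p), p), 0) for p in G}

def eps_t(f):                   # (eps_t(f))(p) = f(p p^{-1})
    return {p: f.get(product(p, inverse(p)), 0) for p in G}

# E is 1 on composable pairs and 0 elsewhere; it is an idempotent for the
# pointwise product on G x G
E = {(p, q): 1 if product(p, q) is not None else 0 for p in G for q in G}
assert all(v * v == v for v in E.values())

# eps_s(f) only depends on the source, eps_t(f) only on the target
f = {(0, 1): 5, (2, 2): 7}
fs, ft = eps_s(f), eps_t(f)
assert all(fs[p] == fs[q] for p in G for q in G if p[1] == q[1])
assert all(ft[p] == ft[q] for p in G for q in G if p[0] == q[0])
```

For this finite X the set of units is finite, so here ε_s(A) already coincides with A_s; the distinction discussed above only appears for infinitely many units.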
ii) For the second case, we take the algebra B, defined as the groupoid algebra CG of G. If we use p → λ_p for the canonical imbedding of G in CG, then for p, q ∈ G we have λ_pλ_q = λ_{pq} if pq is defined and 0 otherwise. The coproduct on B is given by ∆(λ_p) = λ_p ⊗ λ_p for all p ∈ G. The idempotent E is Σ_e λ_e ⊗ λ_e, where the sum is taken over the units e of G. The antipode is given by S(λ_p) = λ_{p^{-1}} for all p ∈ G.
The source and target maps are given by ε_s(λ_p) = λ_e where e = s(p) and ε_t(λ_p) = λ_e where now e = t(p), for p ∈ G. Here the source and target algebras B_s and B_t coincide; this common algebra is the multiplier algebra of the span of the elements λ_e where e is a unit of G. Also here, the images of the source and target maps can be strictly smaller than the source and target algebras.
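The groupoid algebra can also be modelled directly. The following Python sketch (again our own toy code, not from the paper) implements CG for the pair groupoid on X = {0, 1, 2}, with elements stored as dictionaries p → coefficient, and checks the basic identities λ_p λ_{p^{-1}} = λ_{t(p)} and the behaviour of the source and target maps on basis elements.

```python
# A hedged sketch (toy code, not from the paper): the groupoid algebra CG
# for the pair groupoid G = X x X on X = {0, 1, 2}.  Elements are dicts
# p -> coefficient; lambda_p lambda_q = lambda_{pq} if pq is defined
# (i.e. s(p) = t(q)) and 0 otherwise.

X = [0, 1, 2]
G = [(x, u) for x in X for u in X]

def mult(a, b):
    out = {}
    for (x, u), ca in a.items():
        for (v, y), cb in b.items():
            if u == v:                       # pq defined iff s(p) = t(q)
                out[(x, y)] = out.get((x, y), 0) + ca * cb
    return {p: c for p, c in out.items() if c != 0}

lam = lambda p: {p: 1}

# lambda_p lambda_{p^{-1}} = lambda_{t(p)}, and the sum of the lambda_e
# over the units e acts as a (local) unit
p = (0, 2)
assert mult(lam(p), lam((2, 0))) == lam((0, 0))     # p p^{-1} = t(p)
units = {(x, x): 1 for x in X}
assert mult(units, lam(p)) == lam(p) == mult(lam(p), units)

# the source and target maps on basis elements
eps_s = lambda p: lam((p[1], p[1]))                 # lambda_{s(p)}
eps_t = lambda p: lam((p[0], p[0]))                 # lambda_{t(p)}
assert mult(eps_t(p), lam(p)) == lam(p) == mult(lam(p), eps_s(p))
```

Since X is finite here, Σ_e λ_e is an actual unit; for infinitely many units it is only a multiplier, which is exactly why E lives in M(B ⊗ B) rather than in B ⊗ B.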
Recall that these two cases are dual to each other. The duality is given by ⟨f, λ_p⟩ = f(p) whenever f ∈ K(G) and p ∈ G. We will give more details about this duality when we treat duality for regular weak multiplier Hopf algebras with integrals.

Examples with separable algebras
For the next example, we start with any regular weak multiplier Hopf algebra. In some sense, we isolate the source and target algebras with what remains of the original coproduct. This example illustrates very well the use of the different properties of the source and target algebras obtained in the previous section. Immediately after this example, we will see how we can abstract away from the underlying weak multiplier Hopf algebra and generalize it.
So let (A, ∆) be any regular weak multiplier Hopf algebra. In what follows, we denote ε s (A) by B and ε t (A) by C. We know that B and C are non-degenerate idempotent subalgebras of M (A) and that their multiplier algebras are A s and A t respectively (see Proposition 2.12 in Section 2). The canonical element E can be considered as sitting in M (ε s (A) ⊗ ε t (A)) (see Proposition 2.13 in Section 2).
Then we can prove the following. Remember that the antipode S of (A, ∆) maps B to C and C to B, and that both maps are bijections (but not necessarily each other's inverses). See the remark following Proposition 2.3 in the previous section.

Proposition
Let P be the tensor product algebra C ⊗ B. There exists a coproduct ∆_P on P defined by ∆_P(c ⊗ b) = c ⊗ E ⊗ b for b ∈ B and c ∈ C. It makes of the pair (P, ∆_P) a regular weak multiplier Hopf algebra. The counit ε_P is given by ε_P(c ⊗ b) = f(S(c)b), where f is a linear functional on B satisfying f(ε_s(a)) = ε(a) for any a ∈ A. It also satisfies ε_P(c ⊗ b) = g(cS(b)), where now g is a linear functional on C characterized by g(ε_t(a)) = ε(a) for any a ∈ A. The antipode S_P is given by S_P(c ⊗ b) = S(b) ⊗ S(c). The new source and target algebras P_s and P_t are given by P_s = 1 ⊗ M(B) and P_t = M(C) ⊗ 1, and the source and target maps are given by ε_{P_s}(c ⊗ b) = 1 ⊗ S(c)b and ε_{P_t}(c ⊗ b) = cS(b) ⊗ 1 for all b ∈ B and c ∈ C. In these formulas, 1 is the identity in M(A).
Proof: In the following, we will systematically use ι P , 1 P , etc. for objects related with P . For the objects related with the original weak multiplier Hopf algebra, we will use no index. This convention should clarify the upcoming formulas.
We also skip some of the arguments. On the one hand, it is a good exercise for the reader to complete the arguments. On the other hand, some of the arguments will be given in the proof of Proposition 3.3 that generalizes Proposition 3.2.
i) First remark that the algebra P is non-degenerate and idempotent because this is true for its components B and C. Again see Proposition 2.12 in the previous section.
ii) Next observe that E ∈ M(B ⊗ C) and therefore ∆_P(c ⊗ b), defined as c ⊗ E ⊗ b, belongs to M(P ⊗ P). Because E² = E, it is clear that ∆_P is a homomorphism. We have seen in Proposition 2.13 that E(1 ⊗ C), (1 ⊗ C)E, E(B ⊗ 1) and (B ⊗ 1)E are all subsets of B ⊗ C. Then it is not hard to show that ∆_P is a regular coproduct on P. One can also show that this coproduct is full. Consider e.g. the formula obtained in the proof of Proposition 2.13, with a, b ∈ A and ω a linear functional on A.
As we remarked already after Proposition 2.13, this gives that any element of ε t (A) is of the form (ω ⊗ ι)((a ⊗ 1)E) for some a ∈ A. This can be used to show that any element in P is a linear combination of elements of the form (γ(p · ) ⊗ ι P )∆ P (q) where p, q ∈ P and where γ is a linear functional on P .
iii) Now we will construct a counit ε_P on (P, ∆_P). First we claim that there are linear functionals f and g on B and C respectively, defined by f(ε_s(a)) = ε(a) and g(ε_t(a)) = ε(a) for all a ∈ A. We will prove the existence of g. The other case is treated in a similar way, or by applying the antipode.
We have seen in Proposition 2.4 that ε(ba) = ε(bε t (a)) for all a, b ∈ A. If now a is such that ε t (a) = 0, then we have ε(ba) = 0 for all b ∈ A. Because A has local units (see Proposition 4.9 in Section 4 of [VD-W2] and also Proposition 2.14 in this paper), this implies that ε(a) = 0. Therefore, we can define g on C by g(ε t (a)) = ε(a) when a ∈ A.
If we apply ι ⊗ g to the appropriate formula for E(ab ⊗ 1), we find that (ι ⊗ g)(E(ab ⊗ 1)) = ab for all a, b, and hence also (ι ⊗ g)(E(a ⊗ 1)) = a for all a.
Similarly we can define f by f(ε_s(a)) = ε(a) and this will satisfy (f ⊗ ι)((1 ⊗ a)E) = a for all a. Now we define ε_P on P by ε_P(c ⊗ b) = g(cS(b)). If we apply ι_P ⊗ ε_P to ∆_P(c ⊗ b), we find (ι_P ⊗ ε_P)∆_P(c ⊗ b) = c ⊗ b, where we have used that E(b ⊗ 1) = E(1 ⊗ S(b)) (see Proposition 2.13). Using the other formula for ε_P, we find that also (ε_P ⊗ ι_P)∆_P(c ⊗ b) = c ⊗ b for all b ∈ B and c ∈ C.

iv) It is straightforward to verify that E_P = 1 ⊗ E ⊗ 1. For this it is needed that E(B ⊗ C) = E(1 ⊗ C) and that (B ⊗ C)E = (B ⊗ 1)E, a result that follows from the formulas in Proposition 2.13. It is also clear that the legs of E_P commute.

v) One can verify the formulas for ε_{P_s} and ε_{P_t} on P. Indeed, if b ∈ B and c ∈ C, one compares the defining formulas for the source and target maps with the formulas above. As the right leg of E generates C, we get ε_{P_t}(c ⊗ b) = cS(b) ⊗ 1 for all b ∈ B and c ∈ C. The other formula is proven in a similar way.

vi) Finally, if we define S_P on P by S_P(c ⊗ b) = S(b) ⊗ S(c), one can verify that all the requirements of Theorem 2.9 in [VD-W2] are fulfilled. Therefore, we do have a regular weak multiplier Hopf algebra and the antipode is S_P.
For more details on the last item, we refer to the proof of the next result, where this is treated in a more general case. Using the theory of separability idempotents as treated in [VD4], the above example can be generalized in the obvious way. In fact, this amounts to abstracting away from the underlying weak multiplier Hopf algebra, as mentioned earlier.
Recall that a separability idempotent is an idempotent E in the multiplier algebra M(B ⊗ C) of the tensor product of two non-degenerate idempotent algebras B and C, with certain properties. In particular, there exist anti-isomorphisms S : B → C and S′ : C → B characterized by the formulas E(b ⊗ 1) = E(1 ⊗ S(b)) and (1 ⊗ c)E = (S′(c) ⊗ 1)E for b ∈ B and c ∈ C. There are also unique linear functionals ϕ and ψ on C and B respectively, called the left and the right integral. In what follows, we think of ϕ and ψ as satisfying (ι ⊗ ϕ)E = 1 and (ψ ⊗ ι)E = 1. We refer to [VD4] for details. Then the result is as follows.

Proposition
Let B and C be non-degenerate idempotent algebras and assume that E is a separability idempotent in M(B ⊗ C). Let P = C ⊗ B. There is a coproduct ∆_P on P defined by ∆_P(c ⊗ b) = c ⊗ E ⊗ b for c ∈ C and b ∈ B. It makes from the pair (P, ∆_P) a regular weak multiplier Hopf algebra. The counit ε_P is given by ε_P(c ⊗ b) = ϕ(cS(b)). The canonical multiplier E_P is 1 ⊗ E ⊗ 1. The antipode S_P is given by S_P(c ⊗ b) = S(b) ⊗ S′(c) for b ∈ B and c ∈ C. The source and target algebras P_s and P_t are given by P_s = 1 ⊗ M(B) and P_t = M(C) ⊗ 1, and the source and target maps are given by ε_{P_s}(c ⊗ b) = 1 ⊗ S′(c)b and ε_{P_t}(c ⊗ b) = cS(b) ⊗ 1 for all b ∈ B and c ∈ C. In these formulas, 1 is the identity in M(C) and M(B) respectively.
Proof: The proof is very similar to the one of Proposition 3.2. However, in the proof of Proposition 3.2, we did not give all details. So, we will not only treat those aspects that are really different but we also give more details that have been omitted in the previous proof. Again, just as we did in the proof of Proposition 3.2, we will systematically use ι P , 1 P , etc. for objects related with P . For the objects related with the original weak multiplier Hopf algebra, we will use no index.
i) The algebra P is non-degenerate and idempotent because this is true for its components B and C.
ii) We define ∆_P(c ⊗ b) as c ⊗ E ⊗ b in M(P ⊗ P). Because E² = E, it is clear that ∆_P is a homomorphism. By assumption, E(1 ⊗ C), (1 ⊗ C)E, E(B ⊗ 1) and (B ⊗ 1)E are all subsets of B ⊗ C. Therefore ∆_P is a regular coproduct on P. This coproduct is full because E is assumed to be full (as in Definition 1.2 of [VD4]).

iii) Now we construct a counit ε_P on (P, ∆_P). We define ε_P(c ⊗ b) = ϕ(cS(b)). We find that also ε_P(c ⊗ b) = ψ(S′(c)b) for b ∈ B and c ∈ C. Also here it is easy to show that ε_P is a counit: for all b ∈ B and c ∈ C we have (ι_P ⊗ ε_P)∆_P(c ⊗ b) = c ⊗ b, and similarly we get (ε_P ⊗ ι_P)∆_P(c ⊗ b) = c ⊗ b.

iv) As in the proof of Proposition 3.2, also here it is straightforward to verify that E_P = 1 ⊗ E ⊗ 1 and that the legs of E_P commute.

v) One can verify the formulas for ε_{P_s} and ε_{P_t} on P as in Proposition 3.2.

vi) Finally, we define S_P(c ⊗ b) = S(b) ⊗ S′(c) for all b and c, and we show that all the conditions of Theorem 2.9 of [VD-W2] are fulfilled. This will complete the proof.

Take elements b, b′ ∈ B and c, c′ ∈ C. First apply the candidate for the generalized inverse R_1 of the canonical map T_1 to c ⊗ b ⊗ c′ ⊗ b′, using the expression for S_P and, formally, the Sweedler type notation E_(1) ⊗ E_(2) for E. Now apply T_1. Using the formula characterizing S on B (as recalled before the formulation of the result) and the fact that E_(1)S′(E_(2)) = 1, we find that T_1R_1, where R_1 is defined with the antipode S_P, is indeed left multiplication with E_P. A similar argument gives that T_2R_2 is right multiplication with E_P.

It remains to argue that the formulas (2.5) in Theorem 2.9 of [VD-W2] are satisfied. For this take b ∈ B and c ∈ C. We need the Sweedler type notation for two copies of E in the formulas below; we write E_(1) ⊗ E_(2) for the first one and E′_(1) ⊗ E′_(2) for the second one. If we apply S_P in the middle leg of formula (3.1) and multiply, we find, because E_(1)S′(E_(2)) = 1 as well as S(E′_(1))E′_(2) = 1, the element c ⊗ b. This takes care of the first formula. On the other hand, if we apply S_P ⊗ ι_P ⊗ S_P to the right hand side of formula (3.1) above and multiply, we find S(b) ⊗ S′(c), precisely what we need for the second part of formula (2.5) in Theorem 2.9 of [VD-W2].
The result in Proposition 3.3 is more general than the previous one. Indeed, the result in Proposition 3.2 is a consequence of Proposition 3.3 because it has been shown at the end of the previous section that the canonical idempotent E of any regular weak multiplier Hopf algebra is a separability element for the source and target algebras. See also [VD4].
In the paper where we treat integrals and duality, we will consider this example again and show that integrals on (P, ∆_P) automatically exist, so that we can obtain a dual version of this example.
Next we consider the following special case of the above situation. We will use the term discrete quantum group for a multiplier Hopf algebra (A, ∆) of discrete type with a (left) cointegral h satisfying the extra condition that ε(h) = 1 (where ε is the counit). This is the case when h is an idempotent. Then S(h) = h (where S is the antipode) and h is also a right cointegral. It is shown in [VD4] that ∆(h) is a separability idempotent in M (A ⊗ A). The maps S and S ′ associated with the separability element are nothing else but the antipode S of A.
The linear functionals ϕ and ψ are the left and right integrals on (A, ∆), normalized so that ϕ(h) = ψ(h) = 1. Then as a consequence of Proposition 3.3, we get the following.
3.4 Corollary Let (A, ∆) be a discrete quantum group with normalized cointegral h. The algebra P, defined as A ⊗ A, is a regular weak multiplier Hopf algebra for the coproduct ∆_P defined by ∆_P(a ⊗ b) = a ⊗ ∆(h) ⊗ b with a, b ∈ A. The counit ε_P is given by the linear map a ⊗ b → ϕ(aS(b)). We also have ε_P(a ⊗ b) = ψ(S(a)b). The canonical multiplier E_P is 1 ⊗ ∆(h) ⊗ 1. The antipode S_P is given by S_P(a ⊗ b) = S(b) ⊗ S(a). The source and target algebras P_s and P_t are given by P_s = 1 ⊗ M(A) and P_t = M(A) ⊗ 1, and the source and target maps are given by ε_{P_s}(a ⊗ b) = 1 ⊗ S(a)b and ε_{P_t}(a ⊗ b) = aS(b) ⊗ 1 for a, b ∈ A. Again we have integrals and we can construct the dual. This will be done in [VD-W3].
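The simplest commutative instance of this corollary can be checked numerically. The Python sketch below (our own toy computation, not from the paper; the choice A = K(H) for a finite cyclic group H and the dictionary encoding are assumptions of this sketch) verifies that ∆(h), with h = δ_e the normalized cointegral of K(H), is a separability idempotent in the sense recalled before Proposition 3.3.

```python
# A hedged illustration (toy code, not from the paper): for A = K(H), the
# functions on a finite group H, the normalized cointegral is h = delta_e
# and (Delta(h))(g, k) = h(gk).  For H = Z/3 we check that Delta(h) is a
# separability idempotent: it is idempotent, (iota x phi)Delta(h) = 1 for
# phi = sum over H, and Delta(h)(b x 1) = Delta(h)(1 x S(b)) where
# (S(b))(g) = b(g^{-1}).

n = 3
H = range(n)
E = {(g, k): 1 if (g + k) % n == 0 else 0 for g in H for k in H}  # Delta(h)

# idempotent for the pointwise product on H x H
assert all(v * v == v for v in E.values())

# (iota x phi)E = 1: summing out the second leg gives the constant 1
assert all(sum(E[(g, k)] for k in H) == 1 for g in H)
# (psi x iota)E = 1 likewise
assert all(sum(E[(g, k)] for g in H) == 1 for k in H)

# E(b x 1) = E(1 x S(b)) for every function b on H
b = {0: 2, 1: -1, 2: 5}
S = lambda b: {g: b[(-g) % n] for g in H}
left  = {(g, k): E[(g, k)] * b[g] for g in H for k in H}
right = {(g, k): E[(g, k)] * S(b)[k] for g in H for k in H}
assert left == right
```

Here S = S′ is the antipode of K(H), in accordance with the remark that for a discrete quantum group the maps S and S′ associated with the separability element are the antipode of A.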

A 'quantization' of the groupoid associated with a group action
Let us now go back to the first example in Example 3.1. Denote the space of units by X. The algebras B and C are identified with the algebra K(X) of complex functions with finite support on X. If we apply Proposition 3.2, we get for P the algebra K(X × X) of all complex functions with finite support on X × X. The element E_P is the function of four variables x, u, v, y in X that is 1 if u = v and 0 if u ≠ v. The antipodes S on B and C are given by the identity map on the algebra K(X). The antipode S_P on K(X × X) is given by the flip map. The left and right integrals g and f coincide on K(X) and are simply given by f → Σ_{x∈X} f(x). In fact, the weak multiplier Hopf algebra we get in this way is nothing else but the algebra of functions on the trivial groupoid X × X, where the product of two elements (x, u) and (v, y) is only defined when u = v and is then (x, y). We see that this has very little to do anymore with the original groupoid. And of course, we end up with a special case of Proposition 3.3. For this, we just take any set X and look at the above construction.
Let us now consider the groupoid that results from a group action on a set. So let X be any set and assume that a group H acts on X, say from the left. Denote the action as h ⊲ x for x ∈ X and h ∈ H. Then there is a groupoid G associated as follows. One has G = {(y, h, x) | x, y ∈ X and h ∈ H so that y = h ⊲ x}.
The product of two elements (z, k, y ′ ) and (y, h, x) is defined if y = y ′ and then (z, k, y)(y, h, x) = (z, kh, x).
The set of units is X and the source and target maps are given by s(y, h, x) = x and t(y, h, x) = y.
The set of units is considered as a subset of G by the imbedding x → (x, e, x) where e is the identity in H.
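The groupoid axioms for this construction are easy to verify mechanically. The following Python sketch (our own toy illustration, not from the paper; the choice H = Z/2 acting on X = {0, 1} by translation is an assumption of this sketch) builds the action groupoid and checks composability, sources, targets, units and inverses.

```python
# A small sketch (not from the paper): the groupoid G built from the action
# of H = Z/2 on X = {0, 1} by h |> x = (x + h) mod 2.  Elements are triples
# (y, h, x) with y = h |> x; (z, k, y)(y', h, x) is defined iff y == y'.

H, X = [0, 1], [0, 1]
act = lambda h, x: (x + h) % 2

G = [(act(h, x), h, x) for h in H for x in X]

def product(p, q):
    (z, k, y), (y2, h, x) = p, q
    return (z, (k + h) % 2, x) if y == y2 else None

source = lambda p: p[2]
target = lambda p: p[0]
unit = lambda x: (x, 0, x)                    # the imbedding x -> (x, e, x)

# every element of G is of the form (h |> x, h, x)
assert all(p[0] == act(p[1], p[2]) for p in G)

# pq is defined iff s(p) == t(q); then s(pq) = s(q) and t(pq) = t(p)
for p in G:
    for q in G:
        r = product(p, q)
        assert (r is not None) == (source(p) == target(q))
        if r is not None:
            assert r in G and source(r) == source(q) and target(r) == target(p)

# inverses: p^{-1} p = s(p) and p p^{-1} = t(p) as units
inv = lambda p: (p[2], (-p[1]) % 2, p[0])
assert all(product(inv(p), p) == unit(source(p)) for p in G)
assert all(product(p, inv(p)) == unit(target(p)) for p in G)
```

The same code works for any finite group acting on any finite set by changing `H`, `X` and `act`.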
We can construct the weak multiplier Hopf algebras, associated with this groupoid, as in Example 3.1. In the case where the group is trivial, we then get the example we just mentioned above. If on the other hand, the space X is trivial (i.e. it consists only of one point), then we get the multiplier Hopf algebras associated with the group H.
There is however another way to associate a weak multiplier Hopf algebra with these data. It is a special case of a construction that we consider next.
The starting point is as in Proposition 3.3. We have a separability idempotent E in the multiplier algebra M(B ⊗ C) of the tensor product of two non-degenerate idempotent algebras B and C. Furthermore, we have a regular multiplier Hopf algebra (Q, ∆). We assume that it acts from the left on C and from the right on B. The actions are denoted by q ⊲ c and b ⊳ q when b ∈ B, c ∈ C and q ∈ Q. It is assumed that B is a right Q-module algebra and that C is a left Q-module algebra. Moreover, these data are required to satisfy

(E_(1) ⊳ q) ⊗ E_(2) = E_(1) ⊗ (q ⊲ E_(2)) (3.2)

where we use the Sweedler type notation E = E_(1) ⊗ E_(2), and where the equation is given a meaning by multiplying with an element b of B in the first factor from the left and an element c of C in the second factor from the right. The underlying algebra P that we use in this example is a double smash product of Q with B and C. The construction has probably been studied for Hopf algebras, but not yet for multiplier Hopf algebras. However, the results and the arguments are very similar to the theory of smash products as developed in [Dr-VD-Z]. Therefore, in the following proposition, we do not give all the details. We concentrate on the correct statements and briefly indicate how things are proven.

Proposition
As above, assume that Q is a multiplier Hopf algebra, that B is a right Q-module algebra and that C is a left Q-module algebra. Then the tensor product C ⊗ Q ⊗ B is an associative algebra P with the product defined as

(c ⊗ q ⊗ b)(c′ ⊗ q′ ⊗ b′) = Σ_(q),(q′) c(q_(1) ⊲ c′) ⊗ q_(2)q′_(1) ⊗ (b ⊳ q′_(2))b′

for b, b′ ∈ B, c, c′ ∈ C and q, q′ ∈ Q. The proof of this result is straightforward.
The double smash product can be considered in two ways as a twisted product in the sense of [VD-VK]. First one considers the twisting of the algebras C and QB (where QB is the ordinary smash product of Q and B). In this case, the twist map is given by the formula where b ∈ B, c ∈ C and q ∈ Q. For the second possibility, one takes the twisting of the algebras CQ and B (where CQ is the smash product of C and Q). Now the twist map is given by the formula where again b ∈ B, c ∈ C and q ∈ Q. In the two cases, one now has to verify that the twist map is compatible with the product in the two algebras (ensuring that the result is an associative algebra). One easily verifies that the two constructions give the same algebra and that the result is also the same as in the proposition above.
Just as in the case of smash products, one has obvious embeddings of B, C and Q in the multiplier algebra of P, and if we identify these three algebras with their images in M(P), we see that P is the linear span of elements cqb with b ∈ B, c ∈ C and q ∈ Q, and that we have the commutation rules i) B and C commute, ii) bq = Σ_(q) q_(1)(b ⊳ q_(2)) for all b ∈ B and q ∈ Q, iii) qc = Σ_(q) (q_(1) ⊲ c)q_(2) for all c ∈ C and q ∈ Q.
Therefore we can view P as the algebra generated by B, C and Q subject to these commutation rules.
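On basis elements the product of Proposition 3.5 becomes completely explicit, and associativity can be tested by brute force. The following Python sketch (our own toy model, not from the paper) takes Q = C[H] for H = Z/2, B = C = K(X) for X = {0, 1}, the right action (b ⊳ λ_h)(x) = b(h ⊲ x) and the left action (λ_h ⊲ c)(x) = c(h^{-1} ⊲ x); these choices, and the resulting basis-element product rule, are assumptions of this sketch (group-likes satisfy ∆(λ_h) = λ_h ⊗ λ_h, which collapses the Sweedler sums).

```python
# A hedged sketch (toy code, not from the paper): the double smash product
# P = C (x) Q (x) B with Q = C[H], H = Z/2, B = C = K(X), X = {0, 1}, and
# h |> x = (x + h) mod 2.  On basis elements (delta_u, lambda_h, delta_x)
# the product rule of Proposition 3.5 collapses to
#   (d_u, l_h, d_x)(d_v, l_k, d_y)
#     = d_u d_{h|>v} (x) l_{h+k} (x) d_{k^{-1}|>x} d_y ,
# which is nonzero (coefficient 1) iff u = h|>v and k^{-1}|>x = y.

n = 2
act = lambda h, x: (x + h) % n
basis = [(u, h, x) for u in range(n) for h in range(n) for x in range(n)]

def mult(p, q):
    """Product of two basis elements of P; None stands for 0."""
    (u, h, x), (v, k, y) = p, q
    if u != act(h, v):            # delta_u delta_{h |> v} = 0 unless equal
        return None
    if act((-k) % n, x) != y:     # delta_{k^{-1} |> x} delta_y = 0 unless equal
        return None
    return (u, (h + k) % n, y)

def mult3(p, q, r, left_first):
    s = mult(p, q) if left_first else mult(q, r)
    if s is None:
        return None
    return mult(s, r) if left_first else mult(p, s)

# associativity on all basis triples
for p in basis:
    for q in basis:
        for r in basis:
            assert mult3(p, q, r, True) == mult3(p, q, r, False)
```

One can check by hand that the compatibility condition (3.2) also holds for these actions and for E = Σ_x δ_x ⊗ δ_x, so this toy datum in fact satisfies all the hypotheses of Proposition 3.6.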
By definition, the map c ⊗ q ⊗ b → cqb is a linear bijection from C ⊗ Q ⊗ B to P. However, one also has various other bijective maps of this type, obtained by taking the three factors in a different order, where again b ∈ B, c ∈ C and q ∈ Q. This property will be used in the proof of Proposition 3.6 below.
Also remark that this construction reduces to well-known constructions in the following three special situations. If the multiplier Hopf algebra Q is trivial, then we obtain for P simply the tensor product algebra C ⊗ B. If the algebra B is trivial, we obtain the smash product C#Q, constructed with the left action of Q on C, while if C is trivial, we get the smash product Q#B, for the right action of Q on B. Recall that in the original paper [Dr-VD-Z], we developed the theory for left actions. The reader can also have a look at Section 1 of the expanded version of [De-VD-W] found on the arXiv, where the two types of smash products are reviewed. Then we are ready for the following example.
3.6 Proposition Assume that B and C are non-degenerate idempotent algebras and that E is a separability idempotent in M (B ⊗ C). Let Q be a regular multiplier Hopf algebra and assume that B is a right Q-module algebra and C a left Q-module algebra. Moreover assume the compatibility relation (3.2) as above.
Consider the double smash product P as given in the previous proposition. Then ∆(q) and E commute in the multiplier algebra of P ⊗ P for all q ∈ Q, and the double smash product P can be equipped with a regular coproduct ∆_P, defined by

∆_P(cqb) = (c ⊗ 1)E∆(q)(1 ⊗ b) (3.3)

whenever b ∈ B, c ∈ C and q ∈ Q. It makes of the pair (P, ∆_P) a regular weak multiplier Hopf algebra. The canonical idempotent E_P is E, considered as sitting in M(P ⊗ P). The counit ε_P is given by the linear map cqb → ϕ(c(q ⊲ S(b))), where ϕ is the left integral on C satisfying (ι ⊗ ϕ)E = 1 and where S is used for the anti-isomorphism from B to C associated with the separability idempotent E. It is also given by ε_P(cqb) = ψ((S′(c) ⊳ q)b), where now ψ is the right integral on B and S′ the anti-isomorphism from C to B. The antipode S_P is given by S_P(cqb) = S(b)S_Q(q)S′(c). The source and target algebras P_s and P_t in M(P) are given by P_s = M(B) and P_t = M(C), and the source and target maps are given by ε_{P_s}(cqb) = (S′(c) ⊳ q)b and ε_{P_t}(cqb) = c(q ⊲ S(b)) for all b ∈ B, c ∈ C and q ∈ Q.

Proof: i) First, it is not hard to show that E and ∆(q), for all q ∈ Q, are elements of M(P ⊗ P). This is a consequence of the fact that the multiplier algebras of B, C and Q all sit in M(P), and similarly for tensor products.

ii) We now show that E and ∆(q) commute in M(P ⊗ P). Using the Sweedler notation, both for E as before and for ∆(q), we first use the commutation rule between B and Q (as the first leg of E is in B), then the relation between the actions of Q and E as in formula (3.2), and finally the commutation rule between C and Q (as the second leg of E is in C). Of course, to make things precise, we need to cover the expressions at the right places with the right elements. This can be done if we multiply from the left in the first factor with bp and from the right in the second factor with rc, where b ∈ B, c ∈ C and p, r ∈ Q. Then we can define ∆_P on P by the formula (3.3) in the formulation of the proposition.
Using the commutation rules, the fact that E is an idempotent, that it commutes with the elements ∆(q) and that ∆ is a coproduct on Q, it can be shown that ∆_P is a coproduct on P. It is regular and full. It is also clear that E, as sitting in M(P ⊗ P), has to be the canonical idempotent for ∆_P. See also item iv) in this proof.

iii) We now prove that there is a counit and that it is given by the formulas in the formulation of the proposition. First define ε_P on P by ε_P(qcb) = ε_Q(q)ϕ(cS(b)) for b, c, q in B, C, Q respectively. Observe that we use a different order of the elements in this definition. Then we get (ι_P ⊗ ε_P)∆_P(cqb) = cqb for all b, c, q. If, on the other hand, we define ε_P on P by the formula ε_P(cbq) = ψ(S′(c)b)ε_Q(q), a similar calculation gives that (ε_P ⊗ ι_P)∆_P(cqb) = cqb for all b, c, q.

Let us now verify that these two definitions are the same and in agreement with the formulas in the formulation of the proposition. If we apply ϕ to the second leg of the equation (3.2), we find that ϕ(q ⊲ c) = ε(q)ϕ(c) for all c, q. Then, using the first formula above for ε_P, we get ε_P(cqb) = ϕ(c(q ⊲ S(b))) for all b, c, q. On the other hand, if we apply ψ to the first leg of the equation (3.2), we find ψ(b ⊳ q) = ε(q)ψ(b) for all b, q. Then, using the second formula above for ε_P, we get

ε_P(cqb) = Σ_(q) ε_P(c(b ⊳ S_Q^{-1}(q_(2)))q_(1)) = Σ_(q) ψ(S′(c)(b ⊳ S_Q^{-1}(q_(2))))ε_Q(q_(1)) = Σ_(q) ψ(S′(c)((b ⊳ S_Q^{-1}(q_(2))) ⊳ q_(1))) = ψ((S′(c) ⊳ q)b)

for all b, c, q. This already gives the two formulas announced in the proposition. It remains to verify that these two expressions are the same. For all b, c, q we have, using again the Sweedler type notation for E, an equality to which we can apply ψ ⊗ ϕ to find ϕ(c(q ⊲ S(b))) = ψ((S′(c) ⊳ q)b) for all b, c, q, and this is what we had to show. This takes care of the counit.
iv) Let us now look at the source and target maps and the antipode. It is expected that the antipode S_P must coincide with S, S′ and S_Q on B, C and Q respectively. Then we can verify the formulas for the source and target maps. Indeed, for all b, c, q, a direct calculation gives ε_{P_s}(cqb) = (S′(c) ⊳ q)b, in agreement with the given formula for ε_{P_s}. In a similar way we can check the formula for ε_{P_t}. Finally, let us verify e.g. that E(cqb ⊗ 1) = (ι ⊗ ε_{P_t})∆_P(cqb) for all b, c, q. Computing the left hand side, we find precisely (ι ⊗ ε_{P_t})∆_P(cqb). All this shows that we have the correct formulas for these source and target maps. Another immediate consequence is that the source and target algebras P_s and P_t are precisely M(B) and M(C).

v) Finally, as in the previous cases, we need to verify that all conditions of Theorem 2.9 of [VD-W2] are fulfilled when we define the antipode as above. By showing that we have the correct formulas for the source and target maps, we have in fact already shown that T_1R_1 is left multiplication with E_P and that T_2R_2 is right multiplication with E_P. The only thing left is to show that Σ_(p) p_(1)S_P(p_(2))p_(3) = p and Σ_(p) S_P(p_(1))p_(2)S_P(p_(3)) = S_P(p) for all p ∈ P. We do this e.g. for the first one, using that Σ_(p) p_(1)S_P(p_(2))p_(3) = Σ_(p) ε_{P_t}(p_(1))p_(2). Now, if p = cqb, a computation with the Sweedler type notation for E gives the desired result. The other formula is proven in a similar way. This completes the proof.
Of course, the result in Proposition 3.3 is a special case of the above. Just remark that we have to reformulate the formulas in Proposition 3.3 by considering the algebra P , defined as C ⊗ B as the algebra generated by B and C, subject to the commutation of elements of B and elements of C as in i) above. Elements in P are then linear combinations of products cb with b ∈ B and c in C. The coproduct ∆ P is now given as ∆ P (cb) = (c ⊗ 1)E(1 ⊗ b) in M (P ⊗ P ). Also P s and P t are identified with M (B) and M (C), as sitting in M (P ) whereas the source and target maps are ε P s (cb) = S ′ (c)b and ε P t (cb) = cS(b) when b ∈ B and c ∈ C.

Conclusions and further research
In this paper, we have studied the source and target maps, as well as the source and target algebras, associated with a weak multiplier Hopf algebra. Many of the results in Section 2 could be obtained in the general case, while the others are only proven for regular weak multiplier Hopf algebras.
It remains an open question whether all those results that can also be formulated in the non-regular case are still true. We expect that this will not be easy: neither to prove these results if they are true, nor to find counterexamples if they are not. We refer also to the modification procedure, as explained in [VD5], to construct new examples of regular weak multiplier Hopf algebras.

The separability idempotents for non-unital algebras play an important role in Section 3. It is certainly worthwhile to carry out a more thorough study of these separable non-unital algebras and the associated separability idempotents (and to relate our approach with other approaches in the literature). This is partly done already in [VD4]. A new version of that paper is being prepared at the moment and will contain more information.

Finally, as we mentioned already in the introduction, the material studied in this paper relates intimately to other research. On the one hand, there is the study of weak multiplier bialgebras as introduced in [B-G-L]. We also have [K-VD], where a Larson-Sweedler type theorem is proven; roughly, it says that a weak multiplier bialgebra with enough integrals is a weak multiplier Hopf algebra. There, properties of the source and target maps and of the source and target algebras are proven in the context of weak multiplier bialgebras and separability idempotents. The other obvious link with the literature is the theory of multiplier Hopf algebroids. In particular, there is a paper where the relation between weak multiplier Hopf algebras and multiplier Hopf algebroids is studied. It seems interesting to observe that there are various possible reasons why a multiplier Hopf algebroid does not have an underlying weak multiplier Hopf algebra.