# Scalar-on-Function Relative Error Regression for Weak Dependent Case


## Abstract


## 1. Introduction

## 2. The Re-Regression Model and Its Estimation

## 3. The Consistency of the Kernel Estimator

- (D1) For all $d>0$, $\varphi_x(d):=\mathrm{I\!P}\left(X\in B(x,d)\right)>0$, and $\lim_{d\to 0}\varphi_x(d)=0$.
- (D2) For all $(x_1,x_2)\in\mathcal{N}_x^2$ and $\gamma=1,2$,$$|R_\gamma(x_2)-R_\gamma(x_1)|\le C\,d^{k_\gamma}(x_1,x_2)\quad\mathrm{for\ some}\ k_1,\,k_2>0.$$
- (D3) The covariance coefficients $(\lambda_k)_{k\in\mathrm{I\!N}}$ satisfy $\lambda_k\le C e^{-ak}$, with $a>0$ and $C>0$.
- (D4) $K$ is a Lipschitz kernel function with support $(0,1)$ that satisfies$$0<C_2\le K(\cdot)\le C_3<\infty.$$
- (D5) The response variable $Y$ satisfies$$\mathrm{I\!E}\left[\exp\left(|Y|^{-\gamma_i}\right)\right]\le C'<\infty\quad\mathrm{for}\ i=1,2,3.$$
- (D6) For all $i\ne j$,$$0<\sup_{i\ne j}\mathrm{I\!P}\left[(X_j,X_i)\in B(x,d)\times B(x,d)\right]\le C\,\varphi_x^{(a+1)/a}(d).$$
- (D7) There exist $\xi\in(0,1)$, $\xi_1\in(0,1-\xi)$, and $\xi_2\in(0,a-1)$ such that$$\frac{\log^5 n}{n^{1-\xi-\xi_1}}\le\varphi_x(h_n)\le\frac{1}{\log^{1+\xi_2}n}.$$

- Brief comment on the conditions: The conditions stated above are standard in the context of Hilbertian time series analysis, and they reflect the main axes of this contribution. The functional nature of the data is captured by condition (D1), the nonparametric nature of the model by (D2), and the degree of correlation of the Hilbertian time series by (D3) and (D6). The principal ingredients of the estimator, namely the kernel and the bandwidth parameter, are controlled through conditions (D4), (D5), and (D7). These conditions are of a technical nature and allow us to retain the usual convergence rate of nonparametric Hilbertian time series analysis.
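
For concreteness, relative-error regression targets the operator $R(x)=\mathrm{I\!E}[Y^{-1}\mid X=x]/\mathrm{I\!E}[Y^{-2}\mid X=x]$, the minimizer of the mean squared relative error, and its kernel estimator replaces the two conditional inverse moments by kernel-weighted averages. The Python sketch below is only an illustration under stated assumptions, not the authors' implementation: the function names are hypothetical, the semi-metric is taken to be a discretized $L^2$ distance, and the kernel is one concrete choice compatible with (D4).

```python
import numpy as np

def kernel_D4(u):
    # One kernel compatible with (D4): supported on [0, 1) and bounded
    # between C2 = 0.25 and C3 = 1.0 on its support.
    return np.where((u >= 0) & (u < 1), 0.25 + 0.75 * (1.0 - u ** 2), 0.0)

def rel_error_regression(X_curves, Y, x_curve, h):
    # Kernel estimator of the relative-error regression operator
    #   R(x) = E[Y^{-1} | X = x] / E[Y^{-2} | X = x].
    # X_curves: (n, p) array of discretized curves; Y: (n,) responses;
    # x_curve: (p,) target curve; h: bandwidth.
    # The distance d(., .) is a discretized L2 distance (an assumption;
    # any semi-metric on the function space could be used instead).
    d = np.sqrt(((X_curves - x_curve) ** 2).mean(axis=1))
    w = kernel_D4(d / h)
    # h must be large enough that the ball B(x, h) contains observations,
    # otherwise both sums below vanish.
    return np.sum(w / Y) / np.sum(w / Y ** 2)
```

The ratio form makes the estimator robust to large responses: observations with large $|Y|$ are automatically downweighted through the inverse moments.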

**Theorem 1.**

**Proof of Theorem 1.**

**Lemma 1.**

**Lemma 2.**

**Corollary 1.**

- (K1) $K(\cdot)$ has a bounded derivative on $[0,1]$;
- (K2) The function $\varphi_x(\cdot)$ is such that$$\varphi_x(a)=\varphi(a)L(x)+O\left(a^{\alpha}\varphi(a)\right)\quad\mathrm{and}\quad\lim_{a\to 0}\frac{\varphi(ua)}{\varphi(a)}=\zeta(u);$$
- (K3) There exist $\xi\in(0,1)$ and $\xi_1,\,\xi_2>0$ such that$$n^{\xi+\xi_1}\log^5 n\le k\le n\log^{-1-\xi_2}n.$$

**Theorem 2.**

**Proof of Theorem 2.**

**Lemma 3.**

**Lemma 4.**

**Corollary 2.**

## 4. Smoothing Parameter Selection

#### 4.1. Leave-One-Out Cross-Validation Principle

#### 4.2. Bootstrap Approach

**Step 1.** We choose an arbitrary bandwidth $h_0$ (resp. $k_0$) and compute $\tilde{R}_{h_0}(x)$ (resp. $\widehat{R}_{k_0}(x)$).
**Step 2.** We estimate the residuals $\tilde{\epsilon}=Y-\tilde{R}_{h_0}(x)$ (resp. $\widehat{\epsilon}=Y-\widehat{R}_{k_0}(x)$).
**Step 3.** We draw a sample of residuals $\epsilon^{\ast}$ (resp. $\epsilon^{\ast\ast}$) from the two-point distribution$$G^{\ast}=\frac{\sqrt{5}+1}{2\sqrt{5}}\,\delta_{\tilde{\epsilon}(1-\sqrt{5})/2}-\frac{1-\sqrt{5}}{2\sqrt{5}}\,\delta_{\tilde{\epsilon}(\sqrt{5}+1)/2},$$$$\left(\mathrm{resp.}\ G^{\ast}=\frac{\sqrt{5}+1}{2\sqrt{5}}\,\delta_{\widehat{\epsilon}(1-\sqrt{5})/2}-\frac{1-\sqrt{5}}{2\sqrt{5}}\,\delta_{\widehat{\epsilon}(\sqrt{5}+1)/2}\right).$$
**Step 4.** We reconstruct the sample $(Y_i^{\ast},X_i^{\ast})_i=(\tilde{R}_{h_0}(X_i)+\epsilon_i^{\ast},X_i)$ (resp. $(Y_i^{\ast\ast},X_i^{\ast\ast})_i=(\widehat{R}_{k_0}(X_i)+\epsilon_i^{\ast\ast},X_i)$).
**Step 5.** We use the sample $(Y_i^{\ast},X_i^{\ast})_i$ to calculate $\tilde{R}_h(X_i)$, $h\in H_n$, and $(Y_i^{\ast\ast},X_i^{\ast\ast})_i$ to calculate $\widehat{R}_k(X_i)$, $k\in K_n$.
**Step 6.** We repeat the previous steps $N_B$ times and denote by $\tilde{R}_h^r(X_i)$ (resp. $\widehat{R}_k^r(X_i)$) the estimators obtained at replication $r$.
**Step 7.** We select $h$ (resp. $k$) according to the criteria$$h_{opt}^{Boot}=\arg\min_{h\in H_n}\sum_{r=1}^{N_B}\sum_{i=1}^{n}\left(\tilde{R}_h^r(X_i)-\tilde{R}_{h_0}^r(X_i)\right)^2,$$$$k_{opt}^{Boot}=\arg\min_{k\in K_n}\sum_{r=1}^{N_B}\sum_{i=1}^{n}\left(\widehat{R}_k^r(X_i)-\widehat{R}_{k_0}^r(X_i)\right)^2.$$
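
The seven steps above can be sketched in code. The following is a minimal illustration, not the authors' implementation: `estimator(X, Y, x, h)` is a hypothetical placeholder for either kernel regressor (bandwidth-based or kNN-based), the semi-metric is left to the estimator, and the residual resampling follows the two-point law $G^{\ast}$ of Step 3, whose weights sum to one and give the bootstrap residuals mean zero and the empirical residual's second moment.

```python
import numpy as np

def golden_section_residuals(eps, rng):
    # Step 3: draw eps* from the two-point law G*:
    #   eps* = eps * (1 - sqrt5)/2  with probability (sqrt5 + 1)/(2 sqrt5),
    #   eps* = eps * (1 + sqrt5)/2  with probability (sqrt5 - 1)/(2 sqrt5).
    s5 = np.sqrt(5.0)
    pick = rng.random(len(eps)) < (s5 + 1.0) / (2.0 * s5)
    return eps * np.where(pick, (1.0 - s5) / 2.0, (1.0 + s5) / 2.0)

def bootstrap_bandwidth(X, Y, estimator, H, h0, n_boot=50, seed=0):
    # Steps 1-2: pilot fit at h0 and its residuals.
    rng = np.random.default_rng(seed)
    n = len(Y)
    fit0 = np.array([estimator(X, Y, X[i], h0) for i in range(n)])
    eps = Y - fit0
    crit = {h: 0.0 for h in H}
    for _ in range(n_boot):  # Step 6: N_B replications.
        # Steps 3-4: bootstrap residuals and reconstructed responses.
        Yb = fit0 + golden_section_residuals(eps, rng)
        # Step 5: re-estimate on the bootstrap sample, at the pilot
        # bandwidth h0 and at every candidate bandwidth h.
        fit0_b = np.array([estimator(X, Yb, X[i], h0) for i in range(n)])
        for h in H:
            fit_h = np.array([estimator(X, Yb, X[i], h) for i in range(n)])
            crit[h] += np.sum((fit_h - fit0_b) ** 2)
    # Step 7: minimize the bootstrap criterion over the grid H.
    return min(crit, key=crit.get)
```

The same routine selects $k$ for the kNN estimator by passing a grid of neighbor counts $K_n$ in place of $H_n$.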

## 5. Computational Study

#### 5.1. Empirical Analysis

#### 5.2. A Real Data Application

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Appendix A

**Proof of Lemma 1.**

The proof is based on an asymptotic evaluation of the terms $\Upsilon_i$. It requires evaluating asymptotically two quantities: $Var\left(\sum_{i=1}^{n}\Upsilon_i\right)$ and $Cov(\Upsilon_{s_1}\cdots\Upsilon_{s_u},\,\Upsilon_{t_1}\cdots\Upsilon_{t_v})$, for all $(s_1,\dots,s_u)\in\mathrm{I\!N}^u$ and $(t_1,\dots,t_v)\in\mathrm{I\!N}^v$.

- The first case is $t_1>s_u$; based on the definition of quasi-association, we obtain$$|Cov(\Upsilon_{s_1}\cdots\Upsilon_{s_u},\,\Upsilon_{t_1}\cdots\Upsilon_{t_v})|\le\left(\frac{C\,\|K\|_{\infty}}{n\mu_n\,\mathrm{I\!E}\left[K_1(x)\right]}\right)^{u+v-2}\left(\left|\mathrm{I\!E}\left[\Upsilon_{s_u}\Upsilon_{t_1}\right]\right|+\mathrm{I\!E}\left|\Upsilon_{s_u}\right|\,\mathrm{I\!E}\left|\Upsilon_{t_1}\right|\right),$$which leads to$$|Cov(\Upsilon_{s_1}\cdots\Upsilon_{s_u},\,\Upsilon_{t_1}\cdots\Upsilon_{t_v})|\le\varphi_x(h_n)\left(\frac{C}{n\varphi_x(h_n)}\right)^{u+v}v\,e^{-a(t_1-s_u)/\left(2(a+1)\right)}.$$
- The second case is $t_1=s_u$. In this case, we have$$|Cov(\Upsilon_{s_1}\cdots\Upsilon_{s_u},\,\Upsilon_{t_1}\cdots\Upsilon_{t_v})|\le\left(\frac{C\,\|K\|_{\infty}}{n\mu_n\,\mathrm{I\!E}\left[K_1(x)\right]}\right)^{u+v}\mathrm{I\!E}\left[K_1^2(x)\right].$$

**Proof of Lemma 2.**

**Proof of Corollary 1.**


| Months | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Outliers | 15 | 26 | 13 | 5 | 24 | 25 | 7 | 9 | 11 | 8 | 9 | 15 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Chikr Elmezouar, Z.; Alshahrani, F.; Almanjahie, I.M.; Kaid, Z.; Laksaci, A.; Rachdi, M.
Scalar-on-Function Relative Error Regression for Weak Dependent Case. *Axioms* **2023**, *12*, 613.
https://doi.org/10.3390/axioms12070613
