Open Access Article
Entropy 2018, 20(2), 108; https://doi.org/10.3390/e20020108

Sequential Change-Point Detection via Online Convex Optimization

H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
* Author to whom correspondence should be addressed.
Received: 1 September 2017 / Revised: 2 December 2017 / Accepted: 5 February 2018 / Published: 7 February 2018
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)

Abstract

Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal: the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor as the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical and real data examples validate our theory.
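To make the idea in the abstract concrete, the sketch below illustrates a likelihood-ratio (CUSUM-type) recursion in which the unknown post-change parameter is replaced by a non-anticipating estimate updated by an online convex optimization step. This is a minimal illustration, not the authors' exact procedure: it assumes a unit-variance Gaussian model with pre-change mean 0 and an unknown post-change mean, and it uses plain online gradient descent (the Euclidean special case of online mirror descent). The function name detect_change and its parameters are hypothetical.

import numpy as np

def detect_change(x, threshold=10.0, step=0.5):
    """Return the first alarm time at which the accumulated
    log-likelihood ratio exceeds `threshold`, or None."""
    theta = 0.0   # non-anticipating estimate of the post-change mean
    stat = 0.0    # CUSUM-style accumulated log-likelihood ratio
    for t, xt in enumerate(x, start=1):
        # Log-likelihood ratio of N(theta, 1) vs. N(0, 1), evaluated with
        # the estimate formed *before* observing x_t (non-anticipating).
        llr = theta * xt - 0.5 * theta ** 2
        stat = max(0.0, stat + llr)   # reset-to-zero CUSUM recursion
        if stat > threshold:
            return t                  # raise an alarm
        # Online gradient step on the negative log-likelihood -log f_theta(x_t);
        # for the Gaussian model the gradient in theta is (theta - x_t).
        theta = theta - step / np.sqrt(t) * (theta - xt)
    return None

# Example: the mean shifts from 0 to 1 at time 200.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 300)])
print(detect_change(data))

The decaying step size step / sqrt(t) is the standard choice that yields the logarithmic-regret-type guarantees referred to in the abstract; raising the threshold trades a longer average run length (fewer false alarms) for a larger detection delay.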
Keywords: sequential methods; change-point detection; online algorithms

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

Cao, Y.; Xie, L.; Xie, Y.; Xu, H. Sequential Change-Point Detection via Online Convex Optimization. Entropy 2018, 20, 108.


