Article

A Novel Approach for Classification and Forecasting of Time Series in Particle Accelerators

by Sichen Li 1,†, Mélissa Zacharias 1,†, Jochem Snuverink 1, Jaime Coello de Portugal 1, Fernando Perez-Cruz 2, Davide Reggiani 1 and Andreas Adelmann 1,*
1 Paul Scherrer Institut, 5232 Villigen, Switzerland
2 Swiss Data Science Center, ETH Zürich and EPFL, Universitätstrasse 25, 8092 Zürich, Switzerland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Information 2021, 12(3), 121; https://doi.org/10.3390/info12030121
Submission received: 5 February 2021 / Revised: 5 March 2021 / Accepted: 10 March 2021 / Published: 12 March 2021
(This article belongs to the Special Issue Machine Learning and Accelerator Technology)

Abstract:
The beam interruptions (interlocks) of particle accelerators, despite being necessary safety measures, lead to abrupt operational changes and a substantial loss of beam time. A novel time series classification approach is applied to decrease beam time loss in the High-Intensity Proton Accelerator complex by forecasting interlock events. The forecasting is performed through binary classification of windows of multivariate time series. The time series are transformed into Recurrence Plots, which are then classified by a Convolutional Neural Network; this approach not only captures the inner structure of the time series but also leverages the advances of image classification techniques. Our best-performing interlock-to-stable classifier reaches an Area under the ROC Curve value of 0.71 ± 0.01, compared to 0.65 ± 0.01 for a Random Forest model, and it can potentially reduce the beam time loss by 0.5 ± 0.2 s per interlock.

1. Introduction

Recent years have seen a boost in the development of machine learning (ML) algorithms and their applications [1]. With the rapid growth of data volume and processing capabilities, the value of data has been increasingly recognized both in academia and in everyday life, and the prospects of ML have been pushed to an unprecedented level.
Particle accelerators, as large facilities operating with a steady stream of structured data, are naturally suitable for ML applications. Their highly complicated operating conditions and precise control objectives fall squarely within ML’s scope [2]. Over the past few years, interest and engagement in ML applications in the field of particle accelerators have started to grow [1,3], along with the trend of data-driven approaches in other disciplines. Fast and accurate ML-based beam dynamics modelling can serve as a guide for future accelerator design and commissioning [4]. An ML surrogate model can achieve the same precision using only a few runs of the original high-fidelity simulation [5]. In addition, various types of problems in accelerator control and operation, such as fast and safe parameter tuning to achieve optimal beam energy in the SwissFEL [6], optics correction [7] and collimator alignment in the LHC [8], and model-free stabilization of source size in the ALS [9], are intrinsically suitable for ML solutions due to the complexity of the input space and a target function that is rather straightforward to implement. Applications in beam diagnostics have also attracted much interest due to ML’s strengths in handling non-linear behavior and multi-objective optimization [10].
Apart from the above-mentioned applications, anomaly detection and prevention has always been an inescapable and significant issue for accelerators seeking high efficiency and reliability. There is also an increasing number of studies on data-driven approaches to this topic [11,12]. However, most existing research focuses on diagnostic techniques that distinguish anomalies from normal circumstances, rather than prognostic approaches that predict potential anomalies in advance. Several recent studies have attempted to address the issue of prediction. Donon et al. present different approaches, including signal processing and statistical feature extraction, to identify and predict jitters in the RF sources of Linac4 [13,14]. All approaches manage to detect the example jitters when they occur, and some even detect first symptoms ahead of time, which implies predictive power. Contrary to our beam interruptions, however, most jitters appear progressively rather than abruptly, which makes it easier to locate an early precursor; their studies also lack a quantified validation of model performance across all jitters. Rescic et al. applied various binary classification algorithms to beam pulses from the SNS to distinguish the last pulse before a failure from normal pulses [15]. This approach has predictive power, but it depends strongly on the discrete pulse structure and cannot be directly adapted to continuous cases. It is also worth noting that both studies deal with univariate time series, i.e., RF power for Donon et al. and beam current for Rescic et al. Our work presents a novel and more ambitious approach: classifying a continuous stream of multivariate time series data by cutting out windows from different regions, and triggering an alarm for a potential beam interruption before it happens. Model performance on the validation set is evaluated with two metrics and illustrated with examples from mimicked live prediction.
Outside the accelerator community, the prediction and prevention of failures has long attracted researchers’ interest, especially in the field of predictive maintenance [16,17], where a typical study deals with lifetime prediction of an engine or a bearing. However, in contrast to the gradual degradation of machinery, accelerator interruptions may appear suddenly due to momentary causes; thus, traditional signal processing methods such as noise reduction or vibration analysis [18,19] are largely ineffective in this rare-event scenario. Short-scale structures deserve more attention than long-term evolution in time. By generating Recurrence Plots (RPs) in close adjacency to the failures as well as in the middle of stable operation periods, we extract finer information from the original time series, which is better suited to such abrupt cases.
This work aims at capturing the precursors of beam interruptions, or “interlocks” for short, of the High-Intensity Proton Accelerator (HIPA) facility by classifying whether the accelerator is in a stable or unstable operation mode. A novel approach, a Recurrence-Plot-based Convolutional Neural Network (RPCNN), is introduced and adapted to the problem setting. The method transforms multivariate time series into images, which allows the extraction of rich structure and exploits mature ML techniques for image classification. The RPCNN model achieves an Area under the ROC Curve (AUC) value of 0.71 ± 0.01 and reduces beam time loss by 0.5 ± 0.2 s per interlock, compared to an AUC value of 0.65 ± 0.01 and 0.7 ± 0.1 s for a Random Forest (RF) model. The RPCNN model correctly predicts 4.9% of interlock samples and would potentially have saved 7.45 min of additional beam time during the HIPA run from September to December 2019.
The paper is structured as follows: Section 2 introduces the dataset, the underlying algorithms and the evaluation metrics for presenting the results. Section 3 presents the results and their interpretation. Finally, Section 4 discusses the ambiguities and limitations as well as possible extensions of the work.

2. Materials and Methods

2.1. Dataset and Preprocessing

The HIPA at the Paul Scherrer Institut (PSI) is one of the most powerful proton cyclotron facilities in the world, with nearly 1.4 MW of beam power [20]. The HIPA Archiver is in-house-developed archiver software that stores values of EPICS [21] Process Variables, i.e., channels, according to tailored configuration settings [22]. A data API is available as a Python library to access the archiver and export the data into Pandas Dataframes [23].
The dataset was taken from the HIPA Archiver and covers 14 weeks in 2019, starting in September. It comprises 376 operating channels of the RING and PKANAL sections (see Figure 1) as model input (the SINQ beamline was not operating during this period) and the interlock records as model output. Figure 1 shows the different sections of the channels, and Table 1 lists some example channels with detailed information. Figure 2 shows some examples of typical interlock records, which are text messages reporting the timestamp and all possible causes. Each interlock has at least one cause, though multiple causes can be attributed to one interlock if no single cause could be identified. For instance, an interlock of “MRI GR.3” indicates that at least one loss monitor at the end of the Ring Cyclotron exceeds its limit, as shown in the second line of Figure 2. The detailed description “MRI14>MAX-H” gives the exact cause: the beam loss monitor “MRI14” is over its maximal allowed value.
To ease the classification problem, we grouped the original causes of the interlocks into four general types. Since, as discussed, interlocks can have multiple causes, an interlock can be assigned to multiple types. Table 2 and Figure 3 show the four types and their statistics in the considered dataset, which contains 2027 interlocks in total.
The archiver records channel values only when they change, as shown by the blue circles in Figure 4, with a certain maximum frequency per channel. Thus, until the next point is recorded, the channel value stays constant at the last recorded value. To obtain input samples at a fixed frequency, the data of all channels are synchronized onto a predefined 5 Hz time grid (Figure 4). The interlock timestamps given by the archiver were found to be up to a second off with respect to the real time of the beam interruption. Therefore, each interlock timestamp is chosen as the closest point on the 5 Hz grid at which the beam current drops below a certain threshold.
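This synchronization step can be expressed compactly with Pandas. The following is a minimal sketch, assuming the archived records are held in a DataFrame with one column per channel and a DatetimeIndex of the (irregular) change timestamps; the function name and arguments are illustrative, not part of the published code.

```python
import pandas as pd

def synchronize(raw: pd.DataFrame, start: str, end: str) -> pd.DataFrame:
    """Resample change-based archiver records onto a fixed 5 Hz grid."""
    grid = pd.date_range(start, end, freq="200ms")  # 0.2 s spacing, i.e., 5 Hz
    # Between recorded changes a channel keeps its last value, so
    # forward-filling along the union index reproduces the archiver semantics.
    return raw.reindex(raw.index.union(grid)).ffill().reindex(grid)
```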

2.2. Problem Formulation

The problem of interlock prediction is formulated as a classification problem with two types of samples taken from the channel time series: interlock windows, labeled as class 1, and stable windows, labeled as class 0. The model prediction for a sample is a value in [0, 1], and a sample is predicted to be class 1 once its output exceeds a custom classification threshold determined from performance.
An interlock window is taken by setting its endpoint closely before, but not exactly at, an interlock event. This allows the model to identify the precursors of the interlock rather than the interlock event itself, in case the signal synchronization is slightly off. Thus, if the last timestamp of the window is taken 1 s before the interlock event, classifying a sample as interlock means that an interlock is expected to happen in 1 s. To increase the number of interlock samples, an interval of 5 overlapping windows is assigned the label “interlock”, rather than only one window per interlock, as shown in the first two columns of Table 3. The stable windows are taken as a series of non-overlapping windows in the periods between two interlocks. To stay away from unstable situations, two buffer regions of 10 min before and after the interlocks are ignored, as displayed in Figure 5.
Taking sliding windows as interlock samples mimics the real-time application scenario with a constant incoming data flow. Since the machine is mostly in stable operation mode, stable samples are abundant and similar; it is therefore neither necessary nor economical to cut sliding stable windows. All windows have a length of 12.8 s (64 time steps of 0.2 s each), chosen based on model performance. A sketch of the interlock window extraction is given below.
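As a concrete illustration of the window definition above, the following NumPy sketch cuts the five overlapping interlock windows for one event. The one-time-step stride between successive windows is an assumption made for illustration; the text does not fix the exact stride.

```python
import numpy as np

WINDOW = 64      # 12.8 s at 5 Hz (0.2 s per time step)
N_WINDOWS = 5    # overlapping interlock windows per event
GAP = 5          # 1 s (5 time steps) between window end and interlock

def interlock_windows(data: np.ndarray, interlock_idx: int) -> np.ndarray:
    """Cut N_WINDOWS overlapping windows ending shortly before one
    interlock. `data` has shape (time_steps, channels)."""
    # Shift each successive window back by one time step (assumed stride).
    ends = [interlock_idx - GAP - k for k in range(N_WINDOWS)]
    return np.stack([data[end - WINDOW:end] for end in ends])
```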
In addition, samples where the beam current (channel “MHC1:IST:2”) is below 1000 μA at the first timestamp of the sample are excluded from the dataset. This measure removes samples where the HIPA machine was not in a stable state; such samples may not be true representatives of their respective class and are thus not suitable as training data.
As shown in Figure 3, interlocks of the type “Losses” make up a majority (60.8%) of the available samples. Since the vast differences between types may be problematic for a binary classifier, and since not enough samples of each type are available to perform a multi-class classification, only interlocks of the type “Losses” are considered in the current study. The number of considered interlocks is reduced from 2027 to 894 after the above cleaning measures.
The model was trained on the first 80% of the available data and validated on the remaining 20% in time order to assess the predictive power of the model in future samples.
On average, there is about one interlock event per hour in the dataset (the interlock distribution and counts during our data collection period are given in Appendix A). Due to the nature of the data, the two classes of windows are highly imbalanced, as shown in Table 3. To compensate for the class imbalance, bootstrapping is employed on the training set: interlock samples are drawn with replacement from the training set until their number equals the number of stable samples, as sketched below. The numbers of interlock and stable samples in the training and validation sets after all preprocessing steps, with and without bootstrapping, are listed in Table 3.
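A minimal sketch of this bootstrapping step, assuming the interlock samples are stacked in a NumPy array; the function name and fixed seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def bootstrap_interlocks(x_interlock: np.ndarray, n_stable: int) -> np.ndarray:
    """Draw interlock samples with replacement until their number equals
    the number of stable samples (applied to the training set only)."""
    idx = rng.integers(0, len(x_interlock), size=n_stable)
    return x_interlock[idx]
```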

2.3. The RPCNN Model

The HIPA channels present behaviors on various time scales, such as slow signal drifts or sudden jumps, which are very hard to clean up in the manner necessary for usual time series prediction methods. Recurrence plots are intrinsically suited to deal with these issues, as they are capable of extracting time-dependent correlations on different time scales and of different complexity via tunable parameters.
Starting from the two classes of multivariate time series (the interlock samples and stable samples listed in Table 3), we first run a feature reduction algorithm as preprocessing, then feed the simplified time series into the RPCNN model as input. The model consists of an RP generation part that transforms the input time series into recurrence plots; the subsequent feature extraction and classification parts process the data further and eventually produce a binary output.

2.3.1. Preprocessing

To decrease the number of model parameters and thus reduce over-fitting, only 97 of the available 376 channels were used as model input. Since there are groups of input channels that measure approximately the same quantity and are highly correlated with each other (for instance, beam position monitors), considerable redundancy exists among the input channels, which enables an automated random feature selection procedure based on model performance. The selection was done by a combination of random search and evaluation of past experimental results, with details presented in Algorithm 1 and a simplified sketch below. Please note that the resulting set of channels is only one of many possible channel combinations.
Algorithm 1: Random feature selection
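Since Algorithm 1 is reproduced as a figure in the original publication, the following is only a hedged sketch of its random-search core. The trial count and the `train_and_score` callback are assumptions, and the published algorithm's incorporation of past experimental results is omitted here.

```python
import random

def random_feature_search(channels, train_and_score, n_trials=50, subset_size=97):
    """Simplified sketch of Algorithm 1: repeatedly draw a random channel
    subset, train and score a model on it, and keep the best subset.
    `train_and_score` is a placeholder returning, e.g., validation AUC."""
    best_subset, best_score = None, float("-inf")
    for _ in range(n_trials):
        subset = random.sample(channels, subset_size)
        score = train_and_score(subset)
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset
```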
The channel signals used as input features differ vastly in scale. As this might impact the convolution operations, the channel signals are standardized to zero mean and unit standard deviation.

2.3.2. Implementation of the RPCNN Model

The RPCNN model is constructed in TensorFlow using the Keras API [24]. The architecture of the RPCNN model is outlined in Figure 6. Details are presented in Appendix C. Table 4 lists the training parameters and a selection of the layer settings.

(a) RP generation

Recurrence plots were developed as a method to analyze dynamical systems and detect hidden dynamical patterns and nonlinearities [25]. The structures in a recurrence plot contain information about the time evolution of the system it portrays. In this study, recurrence plots are used to transform the time series into images, which are then classified by a subsequent Convolutional Neural Network.
The original recurrence plot described in [25] is defined as
R_{i,j} = \theta\left(\epsilon_i - \lVert \vec{x}_i - \vec{x}_j \rVert\right), \qquad \vec{x}_i \in \mathbb{R}^m, \quad i, j = 1, \ldots, N,
with θ the Heaviside function, i, j the indices of time steps inside a time window taken from the signal, and N the length of the window. Here the radius ϵ_i is chosen for each i such that the neighborhood it defines contains a fixed number of states x_j. As a consequence, the recurrence plot is not symmetric, but all columns have the same recurrence density. The most common definitions use a formulation with a fixed radius ϵ_i = ϵ for all i, which was first introduced by Zbilut et al. in [26]. The variation we use here is a so-called global recurrence plot [27] with a fixed ϵ, as defined in Equation (1):
D_{i,j} = \begin{cases} \lVert \vec{x}_i - \vec{x}_j \rVert, & \lVert \vec{x}_i - \vec{x}_j \rVert \le \epsilon \\ \epsilon, & \lVert \vec{x}_i - \vec{x}_j \rVert > \epsilon \end{cases} \tag{1}
where D is symmetric. Figure 7 illustrates, in a diagrammatic way, the transformation of an original signal into a recurrence plot with fixed ϵ = 2; a minimal NumPy version of the same construction is sketched below.
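A minimal sketch of Equation (1) for a univariate window, where the norm reduces to an absolute difference in one dimension; the function name is illustrative.

```python
import numpy as np

def global_recurrence_plot(x: np.ndarray, eps: float) -> np.ndarray:
    """Global recurrence plot of Equation (1) for a univariate window x
    of length N: pairwise distances, clipped at epsilon."""
    d = np.abs(x[:, None] - x[None, :])  # |x_i - x_j| for all pairs (i, j)
    return np.minimum(d, eps)            # distance if <= eps, else eps; symmetric
```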
The patterns of recurrence plots convey a wealth of information not directly available from the underlying time series; for example, white areas or band structures, as in Figure 8, indicate abrupt changes [28,29]. Typical methods to extract such information include Recurrence Quantification Analysis (RQA) [30], which deals with the analysis of the small-scale structures in recurrence plots. RQA provides several metrics to quantify the recurrences, such as the recurrence rate, trapping time or divergence; furthermore, RQA can be applied to non-stationary or very short time series such as those present in this study. Instead, we choose to feed the recurrence plots into a CNN on account of its great success in image classification: CNNs can construct novel and optimal features beyond the limits of RQA’s predefined metrics.
In the model implementation, the RP generation part starts with an L2-regularized fully connected layer that reduces the number of available features from 97 to 20. Gaussian noise with a standard deviation of 0.3 is added as a regularization and data augmentation measure before the signals are converted into recurrence plots. The process is illustrated in Figure 9.
The recurrence plots are produced internally by a custom “Recurrence Plot layer” in the model. This approach allows setting trainable ϵ parameters and generating recurrence plots on the fly, which eases deployment of the trained models since the recurrence plots do not have to be generated and stored explicitly during preprocessing. For each feature a recurrence plot is drawn, and the respective ϵ parameters are initialized with random numbers in the interval [0, 1]. Examples of recurrence plots of an interlock sample and a stable sample are given in Appendix B, where each sample comprises 20 plots. A sketch of such a layer is given below.
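The exact implementation of this layer is not given in the text; the following Keras sketch shows one way such a layer could look, using the clipped-distance form of Equation (1) with one trainable ϵ per feature. The initialization follows the stated uniform [0, 1] scheme; everything else is an assumption.

```python
import tensorflow as tf

class RecurrencePlotLayer(tf.keras.layers.Layer):
    """Sketch of a custom layer that generates one recurrence plot per
    feature on the fly, with one trainable epsilon per feature."""

    def build(self, input_shape):
        n_features = int(input_shape[-1])
        # One epsilon per feature, initialized uniformly in [0, 1].
        self.eps = self.add_weight(
            name="eps", shape=(n_features,),
            initializer=tf.keras.initializers.RandomUniform(0.0, 1.0),
            trainable=True)

    def call(self, inputs):
        # inputs: (batch, time, features); broadcast to pairwise distances.
        d = tf.abs(inputs[:, :, None, :] - inputs[:, None, :, :])
        # Clip at epsilon as in Equation (1); the output has shape
        # (batch, time, time, features), i.e., one plot per feature.
        return tf.minimum(d, self.eps)
```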

(b) Feature extraction

The structure of the feature extraction section was adapted from [31] and consists of a depthwise separable convolution performed in parallel with a pointwise convolution. Both are followed by a ReLU non-linearity. Batch normalization layers are added after both convolutional layers.

(c) Classification

The classification part of the network consists of three fully connected layers separated by dropout layers with a ratio of 0.1. L2 regularization was applied to all fully connected layers. A sketch combining the feature extraction and classification parts is given below.
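A hedged Keras sketch of sections (b) and (c) together. The filter counts, hidden layer sizes and regularization strength are illustrative assumptions; the text fixes only the overall structure, the dropout ratio of 0.1 and the use of L2 regularization.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def extraction_and_classification(rp: tf.Tensor) -> tf.Tensor:
    """Depthwise separable and pointwise convolutions in parallel, each
    followed by batch normalization and a ReLU, then three L2-regularized
    dense layers separated by dropout (sizes are assumptions)."""
    a = layers.SeparableConv2D(16, 3, padding="same")(rp)  # depthwise separable
    a = layers.BatchNormalization()(a)
    a = layers.ReLU()(a)
    b = layers.Conv2D(16, 1)(rp)                           # pointwise (1x1)
    b = layers.BatchNormalization()(b)
    b = layers.ReLU()(b)
    x = layers.Flatten()(layers.Concatenate()([a, b]))
    x = layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-3))(x)
    x = layers.Dropout(0.1)(x)
    x = layers.Dense(32, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-3))(x)
    x = layers.Dropout(0.1)(x)
    return layers.Dense(1, activation="sigmoid",
                        kernel_regularizer=regularizers.l2(1e-3))(x)
```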

2.4. The RF Model

As a feasibility study of the dataset and a baseline model for the RPCNN, we trained a Random Forest [32] model on all 376 input channels. The Random Forest classifier is an ensemble learning method built upon Decision Trees. Each tree decides its optimal splits based on a random sub-sample of the training data and features, while the ensemble takes the average classification score over all trees as its output. It is widely applied due to its robustness against over-fitting, straightforward implementation and relatively simple hyper-parameter tuning. It also does not require the input features to be standardized or re-scaled, and indeed no difference was noticed in trial runs.
For the RPCNN model, information along the time dimension is embedded implicitly in the recurrence plots. A window with both time and channel dimensions is therefore not directly usable as input for the RF model; instead, only the last time step of each of the same interlock and stable windows is fed into the RF model. The RF model is implemented using the RandomForestClassifier of the Scikit-learn [33] ensemble module with the parameters listed in Table 5, as sketched below. The RF trained on the 97 channels selected for the RPCNN shows the same performance as with all 376 channels.
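A minimal sketch of the baseline, assuming feature matrices built from the last time step of each window; variable and function names are placeholders.

```python
from sklearn.ensemble import RandomForestClassifier

def train_rf_baseline(x_last_step, y):
    """Baseline RF with the parameters of Table 5; `x_last_step` holds
    the last time step of each window (n_samples x 376 channels)."""
    rf = RandomForestClassifier(n_estimators=90, max_depth=9,
                                max_leaf_nodes=70, criterion="gini")
    return rf.fit(x_last_step, y)
```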

2.5. Evaluation Metric

Typical binary classification metrics of a confusion matrix are defined and applied in our setting. A True Positive (TP) means an interlock sample (a sample less than 15 s before an interlock) being classified as an interlock. A False Positive (FP) means a stable sample (a sample at least 10 min away from an interlock) being mistaken for an interlock. A True Negative (TN) means a stable sample being reported as stable, and a False Negative (FN) means that the model fails to identify an interlock sample. All metrics are counted with respect to samples, namely TP + FN = 815 for the validation set according to Table 3.
For an imbalanced dataset such as the present one, evaluating the model performance through the classification accuracy, (TP + TN)/(TP + FN + FP + TN), is not sufficient. The performance of the model is instead evaluated with a Receiver Operating Characteristic (ROC) curve as well as a target defined to maximize the beam time saved.

2.5.1. Receiver Operating Characteristic Curve

The ROC curve shows the true positive rate, TPR = TP/(TP + FN), against the false positive rate, FPR = FP/(FP + TN), of the predictions of a binary classification model as a function of a varying classification threshold [34]. The AUC measures the area under the ROC curve bounded by the axes. It represents the probability that a random positive (i.e., interlock) sample receives a higher value than a random negative (i.e., stable) sample; it ranges from 0 to 1, with 0.5 corresponding to random guessing. Aggregated over all possible classification thresholds, it indicates the general capacity of a model to distinguish between the two classes.

2.5.2. Beam Time Saved

We also propose a more practical metric for this particular use case. The beam is shut off when an interlock occurs, and after each interlock the beam current is automatically ramped up again. Each interlock event causes a beam time loss of about 25 s. The average uptime of the HIPA facility is about 90% [35]. Short interruptions, mostly from beam losses and electrostatic elements, are responsible for about 2% of beam time lost at HIPA, according to the records of beam downtime over 15 years (2004–2018). Hence an “actually negative” state (i.e., FP + TN) is about 45 times (90% over 2%) more likely to occur than an “actually positive” state (i.e., TP + FN). This ratio is a constant indicating machine availability, but it can also be understood through the probability of beam interruptions happening during the run. Denoting by p the probability of “actually positive”, i.e., the probability that an interlock occurs, we have p = 2%/(2% + 90%) = 1/46.
For the target definition it is assumed that each correctly predicted interlock can be prevented by a 10% beam current reduction for 60 s. Each prevented interlock thus costs the equivalent of six seconds of lost beam time (10% of 60 s). Please note that this is only for performance evaluation purposes; the 10% reduction is not necessarily able to prevent interlocks in real life, and the actual mitigation measure is still under investigation.
Under this assumption, the evaluation of beam time saved is illustrated in Figure 10 and shown in detail in Table 6.
Please note that, since we take 5 consecutive interlock samples right before each interlock and an arbitrary number of stable samples in between, the sample-based classification results do not exactly correspond to the classification of interlocks. Therefore, we put forward the metric “average beam time saved per interlock”, $\bar{T}_s$, defined in Equation (2) based on the above analysis, with $N_{int}$ the number of interlocks. The best classification threshold is chosen as the one yielding the largest $\bar{T}_s$ while still maintaining a considerably large AUC value; a sketch of this threshold scan follows Equation (2).
\bar{T}_s = 19 \cdot \frac{TP}{N_{int}} - 6 \cdot \frac{FP}{N_{int}} = 19 \cdot TPR - 6 \cdot FPR \cdot \frac{1-p}{p} = 19 \cdot TPR - 270 \cdot FPR \quad \mathrm{(s/interlock)} \tag{2}
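The threshold scan can be made concrete with a short sketch, assuming validation labels and model scores are available; `roc_curve` is the standard scikit-learn routine, and the function name is illustrative.

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_threshold(y_true, y_score):
    """Scan the ROC curve and return the classification threshold that
    maximizes the average beam time saved per interlock, Equation (2)."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    t_s = 19.0 * tpr - 270.0 * fpr   # s / interlock, per Equation (2)
    best = np.argmax(t_s)
    return thresholds[best], t_s[best]
```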

3. Results

3.1. Model Performance in Terms of Evaluation Metrics

Figure 11 shows the ROC curves of the best-performing RF and RPCNN models, as well as the ROC curve over varying initializations of the RPCNN model.
The confidence intervals are a measure of the variability of the model, which may result from different selections of the validation dataset or from randomness in model and parameter initialization. The variability over configurations of the validation dataset was calculated for both the RF and RPCNN models in Figure 11a,b: the models are trained on the same training set, then validated 20 times over randomly sampled subsets of the validation set. Initialization of the RPCNN model can also introduce uncertainty depending on the convergence behavior of the model; therefore, a corresponding confidence interval of the RPCNN model is calculated. For Figure 11c, 25 RPCNN models were trained and evaluated using identical training and validation sets. The mean results with the confidence intervals, as well as the same best-performing model shown in Figure 11b, are displayed.
The black dashed line marks the boundary $\bar{T}_s = 19 \cdot TPR - 270 \cdot FPR = 0$. A model can only save beam time ($\bar{T}_s > 0$) if its ROC curve reaches the left side of this dashed line, and it saves more beam time the further away it is from this line. The thresholds are chosen at the points where the dashed line is tangent to the ROC curve (as shown in the zoomed-in panels of Figure 11). Since the metric imposes strong requirements on the number of FPs, the resulting threshold corresponds to a low TPR and a very low FPR.
The AUC and beam time saved for the models displayed in Figure 11 can be found in Table 7. The TP, FP and FN counts for the best-performing RF and RPCNN models are shown in Table 8. The RF model saves 0.7 s per interlock, better than the 0.5 s of the RPCNN, but the RPCNN achieves a higher AUC value. The RPCNN successfully predicts 4.9% of interlock samples (40 out of 815). Although it correctly identifies more interlock samples than the RF (40 vs. 25), it also triggers more false alarms (75 vs. 28).
During the HIPA run from September to December 2019, there were 894 interlocks of the “Losses” type, ignoring secondary ones. Taking the result of the best RPCNN model, 0.5 s would be saved for each interlock, which means we would have delivered 7.45 min more beam altogether.

3.2. Model Performance in a Simulated Live Setting

Although the AUC and beam time saved metrics allow an evaluation of the expected performance of the model, the final test is the behavior of the model during live, i.e., continuous, predictions. Figure 12 presents a selection of snapshots of the RPCNN and RF models during mimicked live predictions on the extended validation set: to examine the model behavior in unseen regions, the buffer regions are reduced from 10 min to 14 s and predictions are generated until 0.2 s before the interlock event.
Figure 12 shows that the RPCNN model achieves an overall better performance than the RF model in this mimicked live setting. In the successful cases (a) and (b), the RPCNN not only reports imminent interlocks on the time scale of seconds (green circle), as defined by the training set, but can also generate alarms farther in advance (purple circle), even several minutes ahead of time. However, there is much subtlety in the distinction between an early alarm and an FP, depending on the time from the earliest threshold crossing until the interlock. The three crossings (red circle) in (c) could either be taken as FPs, if the machine is believed to remain stable and only start deviating shortly before interlocks, or be recognized as early precursors, if the machine keeps operating in an unstable mode. As for the prediction failure in (d), the RPCNN still outperforms the RF in the sense of a growing trend in the output values (red arrow), which indicates some underlying progression of anomalies.
The behavior of the RPCNN model as seen in Figure 12 suggests that some interlocks are detected well before the 1 s range defined in the problem formulation. This observation is supported by the findings shown in Figure 13. The time before the interlock event at which the RPCNN model prediction first crossed and stayed above the threshold is obtained for the interlocks in the validation set and compiled into a reverse cumulative histogram. Detection times vary between interlock events, with some events being detected only a second before their occurrence and others close to 5 min beforehand.

4. Discussion

4.1. Achievements

As shown in Table 7, all model variations achieve an AUC value larger than 0.6, indicating that our models possess the ability to distinguish between potential interlocks and stable operation. The customized target $T_s$, the “beam time saved”, poses strict requirements on the model performance since it has little tolerance for FPs, as described by Equation (2). In spite of such constraints, our model can increase the amount of beam delivered to the experiments.
It is observed from Figure 12 that the RPCNN model, while showing comparable performance with the RF on binary classification metrics in Table 7, is more suitable for real-time predictions.

4.2. Limitations

Despite the mentioned achievements and potential, limitations remain in multiple aspects. As shown in Figure 11c, the RPCNN model does not converge well under random initialization, which is expected to be resolved by the acquisition of more training data. Moreover, the available channels might change for each data run: for example, the 2019 model is invalid for 2020, since the SINQ beamline was restarted and several old channels were no longer available.
Another limitation lies in the ambiguity in manipulating the data and model to achieve better training, from sample selection (number and size of windows, buffer region etc.), extent of preprocessing and data augmentation, feature engineering considering the redundancy in channels, to the selection of model architecture and hyper-parameter values.
Although a thorough cross-validation would be a more convincing proof of the model performance, the confidence intervals in Figure 11a,b are computed solely by constructing different subsets of the validation dataset. A cross-validation was done for the RF model, yielding the same level of performance. For the RPCNN model, a complete cross-validation over the full dataset has not yet been performed, since the model still diverges for some initializations. Future studies should address these issues with new data.

4.3. Outlook

The results obtained in a mimicked live setting, as displayed in Figure 12 and Figure 13, indicate that the problem formulation as well as the performance metrics need to be reevaluated. As the detection times of the interlock events are inconsistent, labeling a fixed number of 5 samples per event as interlock might not be optimal. A next step could be a closer investigation of whether the detection time can be linked to some property of the interlocks, such as their type, length or location, and an adjustment of the sample labeling accordingly.
Following the discussion in Section 3.2 about the FPs in Figure 12c, another adaptation could be changing the currently sample-based counting of TP, FP and FN to bring the metrics closer to the intended use case of live predictions. For instance, TPs could be counted per interlock rather than per sample, i.e., if at least one sample is positive in some range before an interlock, one TP is counted. It would also be reasonable to ignore consecutive FPs within a certain range, or to ignore FPs and FNs occurring in a certain range before the interlock. Our customized target $T_s$ (beam time saved) is built upon the definitions of TP and FP; with the above adaptations, $T_s$ could also be brought closer to reality and become more instructive for real practice.
To enhance the model performance, one possible method is to tailor the training set according to the model output. By excluding the interlocks that receive a low detection score from the model, it might be possible to improve the detection of the remaining interlocks and reduce the number of FPs, thus increasing the overall model performance. Given the high correlations among channels, another possibility lies in dimensionality reduction techniques other than the current random feature selection of Algorithm 1, such as removing channels with high mutual information, or extracting a low-dimensional representation from an autoencoder.
A Graphical User Interface (GUI) for real-time use of the model has already been developed and tested on archived data. The next steps include applying transfer learning to the model, training and updating it continuously with new data, and eventually developing a true prognostic tool with a GUI to be operated by the machine operators.

Author Contributions

Conceptualization, S.L., F.P.-C. and A.A.; methodology, M.Z., F.P.-C. and A.A.; software, M.Z. and S.L.; validation, J.S., J.C.d.P. and A.A.; formal analysis, M.Z. and S.L.; investigation, J.S., J.C.d.P. and D.R.; resources, J.S. and D.R.; data curation, J.S., J.C.d.P. and D.R.; writing—original draft preparation, S.L. and M.Z.; writing—review and editing, J.S., J.C.d.P., F.P.-C., D.R. and A.A.; visualization, S.L. and M.Z.; funding acquisition, J.S. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by the Swiss Data Science Center.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We would like to express our special thanks to Anastasia Pentina and other colleagues from the Swiss Data Science Center for their insightful collaboration and generous support throughout the research. We also thank Hubert Lutz and Simon Gregor Ebner from PSI for their expert knowledge of the HIPA Archiver and their help in data collection. We acknowledge the assistance of Derek Feichtinger and Marc Chaubet with the Merlin cluster, which enabled the computational work of this research.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Interlock Statistics

Figure A1. Interlock occurrences per day from 18-9-2019 to 25-12-2019. Regular maintenance periods can be observed.
Figure A2. Number of interlocks per day from 18-9-2019 to 25-12-2019.

Appendix B. Example Recurrence Plots

Figure A3. The 20 Recurrence Plots of an interlock sample.
Figure A4. The 20 Recurrence Plots of a stable sample.

Appendix C. Architecture of the RPCNN Model

Figure A5. The RPCNN architecture.

References

  1. Edelen, A.; Mayes, C.; Bowring, D.; Ratner, D.; Adelmann, A.; Ischebeck, R.; Snuverink, J.; Agapov, I.; Kammering, R.; Edelen, J.; et al. Opportunities in machine learning for particle accelerators. arXiv 2018, arXiv:1811.03172.
  2. Edelen, A.L.; Biedron, S.; Chase, B.; Edstrom, D.; Milton, S.; Stabile, P. Neural networks for modeling and control of particle accelerators. IEEE Trans. Nucl. Sci. 2016, 63, 878–897.
  3. Arpaia, P.; Azzopardi, G.; Blanc, F.; Bregliozzi, G.; Buffat, X.; Coyle, L.; Fol, E.; Giordano, F.; Giovannozzi, M.; Pieloni, T.; et al. Machine learning for beam dynamics studies at the CERN Large Hadron Collider. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2020, 985, 164652.
  4. Zhao, W.; Patil, I.; Han, B.; Yang, Y.; Xing, L.; Schüler, E. Beam data modeling of linear accelerators (linacs) through machine learning and its potential applications in fast and robust linac commissioning and quality assurance. Radiother. Oncol. 2020, 153, 122–129.
  5. Adelmann, A. On nonintrusive uncertainty quantification and surrogate model construction in particle accelerator modeling. SIAM/ASA J. Uncertain. Quantif. 2019, 7, 383–416.
  6. Kirschner, J.; Nonnenmacher, M.; Mutnỳ, M.; Krause, A.; Hiller, N.; Ischebeck, R.; Adelmann, A. Bayesian Optimisation for Fast and Safe Parameter Tuning of SwissFEL. In Proceedings of the 39th International Free-Electron Laser Conference, FEL2019, Hamburg, Germany, 26–30 August 2019; JACoW Publishing: Geneva, Switzerland, 2019; pp. 707–710.
  7. Fol, E.; Coello de Portugal, J.; Franchetti, G.; Tomás, R. Optics corrections using Machine Learning in the LHC. In Proceedings of the 2019 International Particle Accelerator Conference, Melbourne, VIC, Australia, 19–24 May 2019.
  8. Azzopardi, G.; Muscat, A.; Redaelli, S.; Salvachua, B.; Valentino, G. Operational Results of LHC Collimator Alignment Using Machine Learning. In Proceedings of the 10th International Particle Accelerator Conference (IPAC’19), Melbourne, VIC, Australia, 19–24 May 2019; JACoW Publishing: Geneva, Switzerland, 2019; pp. 1208–1211.
  9. Leemann, S.; Liu, S.; Hexemer, A.; Marcus, M.; Melton, C.; Nishimura, H.; Sun, C. Demonstration of Machine Learning-Based Model-Independent Stabilization of Source Properties in Synchrotron Light Sources. Phys. Rev. Lett. 2019, 123, 194801.
  10. Fol, E.; Coello de Portugal, J.; Franchetti, G.; Tomás, R. Application of Machine Learning to Beam Diagnostics. In Proceedings of the FEL’19, Hamburg, Germany, 26–30 August 2019; JACoW Publishing: Geneva, Switzerland, 2019; pp. 311–317.
  11. Tilaro, F.; Bradu, B.; Gonzalez-Berges, M.; Roshchin, M.; Varela, F. Model Learning Algorithms for Anomaly Detection in CERN Control Systems. In Proceedings of the 16th International Conference on Accelerator and Large Experimental Control Systems (ICALEPCS’17), Barcelona, Spain, 8–13 October 2017; JACoW: Geneva, Switzerland, 2018; pp. 265–271.
  12. Piekarski, M.; Jaworek-Korjakowska, J.; Wawrzyniak, A.I.; Gorgon, M. Convolutional neural network architecture for beam instabilities identification in Synchrotron Radiation Systems as an anomaly detection problem. Measurement 2020, 165, 108116.
  13. Donon, Y.; Kupriyanov, A.; Kirsh, D.; Di Meglio, A.; Paringer, R.; Serafimovich, P.; Syomic, S. Anomaly Detection and Breakdown Prediction in RF Power Source Output: A Review of Approaches. In Proceedings of the NEC 2019 27th Symposium on Nuclear Electronics and Computing, Budva, Montenegro, 30 September–4 October 2019.
  14. Donon, Y.; Kupriyanov, A.; Kirsh, D.; Di Meglio, A.; Paringer, R.; Rytsarev, I.; Serafimovich, P.; Syomic, S. Extended anomaly detection and breakdown prediction in LINAC 4’s RF power source output. In Proceedings of the 2020 International Conference on Information Technology and Nanotechnology (ITNT), Samara, Russia, 26–29 May 2020; IEEE: New York, NY, USA, 2020; pp. 1–7.
  15. Rescic, M.; Seviour, R.; Blokland, W. Predicting particle accelerator failures using binary classifiers. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2020, 955, 163240.
  16. Hashemian, H.M. State-of-the-art predictive maintenance techniques. IEEE Trans. Instrum. Meas. 2010, 60, 226–236.
  17. Selcuk, S. Predictive maintenance, its implementation and latest trends. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2017, 231, 1670–1679.
  18. Scheffer, C.; Girdhar, P. Practical Machinery Vibration Analysis and Predictive Maintenance; Elsevier: Amsterdam, The Netherlands, 2004.
  19. Kovalev, D.; Shanin, I.; Stupnikov, S.; Zakharov, V. Data mining methods and techniques for fault detection and predictive maintenance in housing and utility infrastructure. In Proceedings of the 2018 International Conference on Engineering Technologies and Computer Science (EnT), Moscow, Russia, 20–21 March 2018; IEEE: New York, NY, USA, 2018; pp. 47–52.
  20. Reggiani, D.; Blau, B.; Dölling, R.; Duperrex, P.A.; Kiselev, D.; Talanov, V.; Welte, J.; Wohlmuther, M. Improving beam simulations as well as machine and target protection in the SINQ beam line at PSI-HIPA. J. Neutron Res. 2020, 22, 1–11.
  21. Dalesio, L.R.; Kozubal, A.; Kraimer, M. EPICS Architecture; Technical Report; Los Alamos National Lab.: Los Alamos, NM, USA, 1991.
  22. Lutz, H.; Anicic, D. Database driven control system configuration for the PSI proton accelerator facilities. In Proceedings of the ICALEPCS’11, Grenoble, France, 10–15 October 2011; JACoW Publishing: Geneva, Switzerland, 2011; pp. 289–291.
  23. Ebner, S. Data API for PSI SwissFEL DataBuffer and EPICS Archiver. Available online: https://github.com/paulscherrerinstitute/data_api_python (accessed on 10 March 2021).
  24. Chollet, F. Keras. Available online: https://keras.io (accessed on 10 March 2021).
  25. Eckmann, J.P.; Kamphorst, S.O.; Ruelle, D. Recurrence Plots of Dynamical Systems. EPL Europhys. Lett. 1987, 4, 973.
  26. Zbilut, J.P.; Koebbe, M.; Loeb, H.; Mayer-Kress, G. Use of recurrence plots in the analysis of heart beat intervals. In Proceedings of the Computers in Cardiology, Chicago, IL, USA, 23–26 September 1990; IEEE: New York, NY, USA, 1990; pp. 263–266.
  27. Webber, C.L., Jr.; Zbilut, J.P. Recurrence quantification analysis of nonlinear dynamical systems. Tutor. Contemp. Nonlinear Methods Behav. Sci. 2005, 94, 26–94.
  28. Recurrence Plots and Cross Recurrence Plots. Available online: www.recurrence-plot.tk (accessed on 15 January 2021).
  29. Marwan, N.; Romano, M.C.; Thiel, M.; Kurths, J. Recurrence plots for the analysis of complex systems. Phys. Rep. 2007, 438, 237–329.
  30. Webber, C.L., Jr.; Marwan, N. Recurrence Quantification Analysis: Theory and Best Practices; Springer: Berlin/Heidelberg, Germany, 2015.
  31. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
  32. Ho, T.K. Random decision forests. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, 14–16 August 1995; IEEE: New York, NY, USA, 1995; Volume 1, pp. 278–282.
  33. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  34. Fawcett, T. An introduction to ROC analysis. Pattern Recognit. Lett. 2006, 27, 861–874.
  35. Grillenberger, J.; Humbel, J.M.; Mezger, A.; Seidel, M.; Tron, W. Status and Further Development of the PSI High Intensity Proton Facility. In Proceedings of the 20th International Conference on Cyclotrons and Their Applications (Cyclotrons’13), Vancouver, BC, Canada, 16–20 September 2013; pp. 37–39.
Figure 1. Different HIPA sections of the channels.
Figure 2. Example interlock record messages.
Figure 3. Distribution of the interlock events by type. “Other Types” denotes all interlock causes that are not prevalent enough to have a separate category. Please note that an interlock may be labeled as more than one type.
Figure 4. Synchronization of the beam current as an example channel and the interlocks.
Figure 5. Definition of interlock windows (orange) and stable windows (green). Five overlapping interlock windows are cut at 1 s before the interlocks, and non-overlapping stable windows are cut between two interlocks with a 10-min buffer region. Each window has a length of 12.8 s (64 time steps).
Figure 6. Schematic representation of the RPCNN model architecture.
Figure 7. Generation of a recurrence plot from a signal with fixed ϵ = 2.
Figure 8. Examples of recurrence plots generated from the RPCNN model. From left to right: uncorrelated stochastic data, data starting to grow, and stochastic data with a linear trend. The top row shows the signals before the recurrence plot generation, and the bottom row shows the corresponding recurrence plots.
Figure 9. Illustration of section (a) RP generation of the RPCNN model (cf. Figure 6).
Figure 10. Expected lost beam time without (top) and with (bottom) the proposed 10% current reduction. An interlock leads to the equivalent of 25 s of lost beam time, i.e., −25 s of beam time saved. With the current reduction, the interlock is expected to be avoided at a cost of 6 s of lost beam time, i.e., −6 s of beam time saved.
Figure 11. ROC curves of the best-performing models in terms of beam time saved and AUC: (a) RF; (b) RPCNN. The mean and 95% confidence interval are calculated from re-sampling of the validation sets. (c) Varying initialization of the same RPCNN model as in (b); the mean and 95% confidence interval are calculated from the validation results of 25 model instances.
Figure 12. Screenshots from simulated live predictions over the validation data. The blue line is the prediction value from the model, with higher values indicating a higher probability that an interlock is approaching. The orange line is the readout of the beam current channel “MHC1:IST:2”, where a drop to zero indicates an interlock. The green line is the binary classification threshold, here 0.65 for the RPCNN and 0.7 for the RF. (a,b) show two examples of successful prediction of interlocks by the RPCNN compared to the result of the RF on the same time periods; the positive predictions closer to interlocks are regarded as TPs, marked by green circles, and the earliest times at which the model output crosses the threshold are enclosed in purple circles. (c) shows the FPs from the RPCNN in red circles, compared to the result of the RF. (d) shows an example of an FN from the RPCNN compared to the result of the RF; a clear trend increasing with time is present in the output of the RPCNN, shown by the red arrow.
Figure 13. Reverse cumulative histogram of the first instance the alarm threshold was crossed before an interlock event and the model prediction values remained above the threshold. From evaluation of the ROC curve, a threshold value of 0.65 is chosen. Displayed are the results for the best-performing RPCNN model. Please note that the interlock event is at 0 s, i.e., time runs from right to left.
Table 1. List of some example channels.

Channel | Section | Description (Unit)
AHA:IST:2 | PKANAL | Bending magnet (A)
CR1IN:IST:2 | RING | Voltage of Ring RF Cavity 1 (kVp)
MHC1:IST:2 | PKANAL | Beam current measurement at the ring exit (μA)
Table 2. The different types of interlocks in the dataset.

Type | Description
Electrostatic | Interlock related to electrostatic elements
Transmission | Interlock related to transmission through target
Losses | Interlock related to beam losses
Other type | Interlock related to another type or unknown
Table 3. Numbers of samples in the training and validation sets, before and after bootstrapping, where interlock samples are randomly drawn to have the same number as stable samples.

 | # Interlock Events | # Interlock Samples | # Stable Samples | # Interlock Samples after Bootstrapping
Training set | 731 | 3655 | 176,046 | 176,046
Validation set | 163 | 815 | 44,110 | not applied
Table 4. Overview of the RPCNN model training parameters.

Parameter | Value
Learning rate | 0.001
Batch size | 512
Optimizer | Adam
Window size | 12.8 s
Number of channels | 97
Std of the Gaussian noise | 0.3
Dropout ratio | 0.1
Table 5. Overview of the RF model training parameters.

Parameter | Value
Number of estimators | 90
Maximum depth | 9
Number of channels | 376
Maximum leaf nodes | 70
Criterion | Gini
Table 6. Detail of beam time saved T_s according to the different classification results and actions, for one sample.

 | T_s per TP (s) | T_s per FN (s) | T_s per FP (s) | T_s per TN (s)
Without current reduction | −25 | −25 | 0 | 0
With current reduction | −6 | −25 | −6 | 0
Incentive | 19 | 0 | −6 | 0
Table 7. Beam time saved (T_s) and AUC results of the best-performing RF and RPCNN models over re-sampling of the validation sets.

Model | T_s [s/interlock] | AUC
RF best | 0.7 ± 0.1 | 0.65 ± 0.01
RPCNN best | 0.5 ± 0.2 | 0.71 ± 0.01
RPCNN mean over initialization | 0.1 ± 0.1 | 0.61 ± 0.04
Table 8. Classification results in terms of True Positive (TP), False Positive (FP), True Negative (TN), False Negative (FN), True Positive Rate (TPR) and True Negative Rate (TNR) of the best-performing RF and RPCNN models. The thresholds for both models were obtained from the ROC curves so that the beam time saved is maximized: 0.70 for the RF and 0.65 for the RPCNN.

Model | TP | FP | TN | FN | TPR or Recall | TNR or Specificity
RF best | 25 | 28 | 44,082 | 790 | 3.1% | 99.9%
RPCNN best | 40 | 75 | 44,035 | 775 | 4.9% | 99.8%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
