Article

SensorAI: A Machine Learning Framework for Sensor Data

The Center for Cyber-Physical Systems, University of Georgia, Athens, GA 30602, USA
* Author to whom correspondence should be addressed.
Sensors 2025, 25(19), 6223; https://doi.org/10.3390/s25196223
Submission received: 3 September 2025 / Revised: 22 September 2025 / Accepted: 28 September 2025 / Published: 8 October 2025
(This article belongs to the Collection Machine Learning and AI for Sensors)

Abstract

As machine learning and artificial intelligence are integrated into cyber-physical systems, it is becoming important for engineers to understand these topics. In particular, sensor data is increasingly common in these systems, so engineers need to understand which models are appropriate for time-series sensor data and how signal processing can be used with them. The Center for Cyber-Physical Systems (CCPS) at the University of Georgia (UGA) is addressing these issues, and its student researchers require skills in these areas. This paper demonstrates a machine learning framework for time-series sensor data that can be used to quickly build, train, and test multiple models on CCPS testbed data. The framework also serves as a tutorial to help student researchers understand the concepts required to be successful in the CCPS.

1. Introduction

Artificial intelligence and machine learning are becoming ubiquitous in modern industries [1], a trend often referred to as Industry 4.0 [2,3,4]. In particular, this includes the use of sensors in many fields, such as the Internet of Things, manufacturing, power, medical systems, digital twins, and other smart systems. Sensor data is rapidly becoming the most common form of big data [5].
Industry has seen increasing numbers of electronic control units (ECUs), programmable logic controllers (PLCs), and other programmable electronics deployed in cyber-physical systems. While this progress has increased productivity and product quality, it has also introduced vulnerabilities in both hardware and software.
In response to this issue, the Center for Cyber-Physical Systems (CCPS) is developing a testbed containing motors in order to collect realistic data for analysis and development of artificial intelligence and machine learning algorithms to solve fault and attack detection and diagnosis problems. The CCPS has also developed a smart plug, called ElectricDot, that can be used to easily integrate new devices into the testbed. Any device that uses a standard wall outlet may plug into an ElectricDot and be monitored by the testbed.
CCPS research requires skills in Machine Learning, Deep Learning, and Digital Signal Processing (DSP). CCPS students are generally electrical or computer engineers, computer scientists, and occasionally statisticians. Most have some background in either signal processing or machine and deep learning; few have expertise in both. Thus, CCPS students must learn these topics, generally via classes, while performing research. However, these topics are typically taught in courses that are independent of each other, and their interrelation is not formally discussed; instead, it has been covered during lab hours or through independent studies. Therefore, the CCPS has started to formalize training and education on these topics.
The CCPS primarily deals with sensor data gathered from cyber-physical systems and uses it with artificial intelligence and machine learning to perform research. While there are a multitude of definitions for cyber-physical systems (CPS) [6], the Sensorweb Lab defines them as any networked system that affects the physical world. Thus, the lab also primarily works with streaming data. Streaming data is a unique type of time-series data in that it is continuously generated, can only be examined in one pass, and is prone to concept drift [7].
A need has arisen for CCPS researchers to have a framework that allows easy integration of data collection, algorithm training, and testing. Additionally, the CCPS has developed smart plugs, called ElectricDots (eDots), that easily connect to electronic devices to monitor their electric power waveform data. Thus, this phase also includes adding the capability to easily connect models to eDots and other sensor devices. This paper presents a solution to these issues.

2. Materials and Methods

2.1. Background

Development of the CCPS testbed has proceeded in phases. Initially, a small prototype testbed involving toy motors was built as a proof of concept; it is detailed in [8]. In this phase, a small information technology (IT) and operational technology (OT) network was created, with the motors at the OT level. A multilevel cyber attack was then developed. In short, an attacker was able to access the network via WiFi, scan the IT network for hosts, use a brute-force password attack to break into Raspberry Pis with weak passwords, and load an attack script. The script could then send commands to the motors on the OT network.
For defense, there were anomaly detection algorithms monitoring the network traffic, the system statistics of the Raspberry Pis, and the power usage of the motors via sensors. The goal was to demonstrate a multilevel attack on an OT/IT network and demonstrate multilevel detection.
The next phase involved moving from toy motors to industrial motors in a testbed. Details of this phase can be found in [9]. In this phase, the same basic attacks were implemented at the network level. However, a more sophisticated attack was developed for manipulating the industrial motors, and care was taken to ensure the attacks did not damage the motors. In lieu of the basic attacks used on the toy motors, compromised firmware and a compromised control script (a modified version of the control system vendor's open-source software) were created.
In both of these phases, live data was streamed to InfluxDB, and Grafana was used for visualization. However, in these phases, data for training anomaly detection algorithms were simulated offline, and models were developed, trained, and tested offline as well.
This paper discusses the current phase of the CCPS testbed: the development of a framework. The framework is intended to help CCPS student researchers learn and understand relevant concepts and to aid them in quickly training and testing models for comparison in CCPS research. The development of the framework was divided into two parts: the creation of its core functionalities along with a tutorial, and the addition of a GUI with testbed integration via ElectricDots.

2.2. Previous Efforts with Tutorials

The Sensorweb Laboratory has made previous efforts to create tutorials to aid student researchers. These began as assignments in which lab members would each create a small tutorial on a specific model. These tutorials were presented in lab meetings and later compiled and placed on a small website. However, the tutorials were not standardized in format, and the code from one tutorial was not easily reusable by another. Thus, it was decided to consolidate the code into the framework and to standardize the tutorial information within it as well.

2.3. CCPS Testbed Data Collection

The CCPS uses Message Queuing Telemetry Transport (MQTT) to send data and commands between sensors and systems. MQTT uses a publish/subscribe approach to messaging: devices and applications subscribe to a topic to receive messages and publish to a topic to send them, and all messages and subscriptions are handled by an MQTT broker. InfluxDB, a free time-series database, is used to store data, and Grafana, a free tool, is used to visualize live streaming data. Applications and smart devices used by the CCPS send data to the MQTT broker, which in turn forwards the data to the InfluxDB servers. The testbed is located in a lab on the University of Georgia's (UGA) campus and is connected to the university's secure network. The Grafana and InfluxDB servers reside in this network and are not accessible from outside it.
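To make the messaging pattern concrete, the sketch below shows how a sensor reading might be published and received with the paho-mqtt package. The broker address, topic name, and JSON payload format are illustrative assumptions, not the CCPS testbed's actual configuration.

```python
# Minimal MQTT publish/subscribe sketch using paho-mqtt (1.x-style constructor;
# paho-mqtt 2.x additionally requires a CallbackAPIVersion argument).
# Broker address, topic, and payload format are illustrative only.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.local"        # hypothetical broker on the secure network
TOPIC = "ccps/testbed/edot1/current"   # hypothetical topic for one sensor stream

def on_message(client, userdata, msg):
    # A subscriber (e.g., an InfluxDB ingestion service) would decode and store this.
    reading = json.loads(msg.payload)
    print(f"{msg.topic}: {reading}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_start()

# Publish a few simulated current readings.
for i in range(3):
    payload = json.dumps({"t": time.time(), "current_a": 1.2 + 0.01 * i})
    client.publish(TOPIC, payload)
    time.sleep(1)

client.loop_stop()
client.disconnect()
```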

2.4. ElectricDot

ElectricDot is a smart device designed to monitor the electric power waveform data of devices plugged into it. An image of an ElectricDot is shown in Figure 1. The ElectricDot is plugged into a wall socket, and the device to be monitored is then plugged into the ElectricDot. Data is sent over WiFi via MQTT messages. The ElectricDot can be programmed to sample at rates up to 10 kHz. Additionally, various types of feature extraction, such as amplitude, frequency, phase, and angle, can be programmed as well. The ElectricDot was designed by the CCPS in a separate project; its purpose is to easily provide anomaly detection and diagnosis for electric devices and networks. ElectricDots allow CCPS researchers to quickly add a power sensor to any device, including devices within the CCPS testbed. Thus, the SensorAI Framework is required to have the capability to interface with them.

2.5. Design Requirements

The projects and research discussed in this paper are the efforts of the Sensorweb Laboratory and the Intelligent Power Electronics and Electric Machine Laboratory, both of which are members of the CCPS. In discussions with lab leaders and members, the following high-level requirements were determined for the framework:
  • Must use Python.
  • Must focus on time-series/sensor data.
  • Minimize coding.
  • Easy to use.
  • Must assist in training and testing.
  • Must include metrics.
  • Visualization of data and results.
  • Must have an accompanying tutorial.
  • Must include digital signal processing, classification, clustering, regression, and anomaly detection.
  • Must have a Graphical User Interface.
  • Core functionality must be directly accessible.
  • Must enable smart device and historical data connectivity to models.

2.6. Comparable Frameworks

There are many frameworks and automated machine learning products available. To narrow down the review, the requirements listed above were used. In addition, only free, open-source products were considered, due to limited funds and to avoid extra costs for student researchers. Thus, products such as Microsoft Azure ML Studio [10] were not considered, nor were products not written in Python, such as the WEKA Workbench [11]. H2O AutoML [12], TPOT [13], AutoGluon [14], and Auto-sklearn [15] are four comparable products that meet some of the requirements above. TPOT, AutoGluon, and Auto-sklearn do not include digital signal processing capabilities. H2O AutoML has DSP capabilities; however, the free version does not. Thus, the CCPS was motivated to create its own framework.

2.7. Design Part 1: Core Functionality and Tutorial

The framework is built with Python. The primary interface is a Streamlit web application accessed through a browser. Streamlit is an open-source Python library for creating web applications. Note that the underlying code for the framework can also be executed directly on a workstation with an Integrated Development Environment (IDE) or in a Google Colab notebook. The framework primarily uses the scikit-learn [16] and SciPy [17] packages, although others are used as well. Most models incorporated into the framework come from scikit-learn; any model with "Time Series" in its name comes from tslearn [18], an extension of scikit-learn designed specifically for time-series data. Users may upload their own data or generate data within the framework. Visualizations are created with Matplotlib [19] and Plotly v6.3.0 [20].
A complete list of dependencies is contained in the requirements.txt file on the SensorAI GitHub, version 9/29/2025 [21]. The code is divided into its core tasks: Digital Signal Processing, Classification, Clustering, Regression, Anomaly Detection, and Utilities (refer to Table 1). Each machine learning module contains code for the easy creation of pipelines and grid searches so that researchers can quickly train, test, and compare multiple models with minimal coding. Utilities contains the visualization and other miscellaneous functions that the core modules need. In general, a user will not need to access Utilities directly, but may do so if they choose.
For the core modules, there is very little coding required to build a model pipeline and place it in a grid search. Figure 2 shows the code for setting up a grid search for three models: Extra Trees, Random Forest, and an AdaBoost decision tree.
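Since the framework's own helper functions are not reproduced in this paper, the following sketch shows the same pattern in plain scikit-learn: each model is wrapped in a pipeline, hyperparameter values are placed in lists, and a grid search selects the best configuration. The dataset, the scaler step, and the parameter choices are illustrative assumptions rather than the framework's actual code.

```python
# Sketch of the pipeline/grid-search pattern with plain scikit-learn;
# the framework wraps this boilerplate, so its helper names differ.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Hyperparameter values are placed in lists, as in Figure 2.
searches = {
    "extra_trees": (ExtraTreesClassifier(), {"model__max_depth": [3, 5, 25, 50],
                                             "model__n_estimators": [10, 50, 100]}),
    "random_forest": (RandomForestClassifier(), {"model__max_depth": [3, 5, 25, 50],
                                                 "model__n_estimators": [10, 50, 100]}),
    # 'estimator' is the scikit-learn >= 1.2 keyword (older releases use base_estimator).
    "adaboost_tree": (AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3)),
                      {"model__n_estimators": [10, 50, 100]}),
}

for name, (model, grid) in searches.items():
    pipe = Pipeline([("scale", StandardScaler()), ("model", model)])
    search = GridSearchCV(pipe, grid, cv=5, scoring="f1_weighted")
    search.fit(X_train, y_train)
    print(name, search.best_params_, round(search.score(X_test, y_test), 4))
```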
The rationale for this organization was based on the CCPS’s need to ensure that student researchers understand the core concepts. It also made integration of various algorithms easier, as models of each category generally rely on the same underlying utilities, such as metrics and visualization.
Most functions in the DSP module have a Show option, set to True or False, that lets the user decide whether or not to plot the results of the operation. Thus, users only need to set a single flag instead of writing multiple lines of plotting code.

2.7.1. Digital Signal Processing

The DSP module contains functions for all of the signal processing used in the framework. The groupings discussed in this section follow the general outline of the associated tutorial.
Wave generation: Wave generation allows users to create waves, such as sine, square, triangle, as well as pulses and chirps. Various types of noise can be added to these waves. Noise types include, but are not limited to, white, flicker, impulse, and echo. There are also functions that allow users to automatically generate sets of waveforms, with or without noise, to create synthetic datasets that may be used for model training and testing, as well as with tutorials. It also provides functions to generate the seismocardiography (SCG) signal of a heartbeat, with or without respiration.
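As a rough illustration of what such a generator looks like, the sketch below builds a noisy sine wave and exposes a show flag in the spirit of the Show option described above; the function name and arguments are assumptions, not the framework's actual API.

```python
# Illustrative wave generator with added white noise and a show flag;
# names and defaults are not those of the framework's dsp module.
import matplotlib.pyplot as plt
import numpy as np

def generate_sine(freq_hz=60.0, fs=10_000, duration_s=0.1, noise_std=0.05, show=False):
    """Return a noisy sine wave sampled at fs Hz."""
    t = np.arange(0, duration_s, 1.0 / fs)
    clean = np.sin(2 * np.pi * freq_hz * t)
    noisy = clean + np.random.normal(0.0, noise_std, size=t.shape)  # white noise
    if show:  # plot only when requested, mirroring the Show option
        plt.plot(t, noisy, label="noisy")
        plt.plot(t, clean, label="clean")
        plt.xlabel("time (s)")
        plt.legend()
        plt.show()
    return t, noisy

t, x = generate_sine(show=False)
```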
Filters and Signal Averaging: The module includes basic linear filters such as high-pass, low-pass, band-pass, and band-stop. It also includes more complex filters such as adaptive, moving average, and Kalman. Signal averaging contains several methods of dynamic-time-warping averaging techniques.
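A minimal filtering sketch using SciPy is shown below; the framework's own filter functions are assumed to wrap similar calls, and the cut-off frequencies here are arbitrary.

```python
# Sketch of a Butterworth band-pass filter using SciPy; the framework's
# filter functions likely wrap similar calls under different names.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000  # Hz, matching the ElectricDot's maximum sample rate
t = np.arange(0, 0.2, 1 / fs)
x = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)  # 60 Hz + interference

# 4th-order band-pass around the 60 Hz fundamental (40-80 Hz).
b, a = butter(N=4, Wn=[40, 80], btype="bandpass", fs=fs)
x_filt = filtfilt(b, a, x)  # zero-phase filtering
```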
Time and Frequency Domains: The DSP module includes statistical moments, peak detection, envelope extraction, and waveform complexity measures. It also contains the Fast-Fourier Transform (FFT), the short-time Fourier transform (STFT), and power spectral density functions. There are also functions to extract the average amplitude, frequency, and phase of a waveform, as well as a function to calculate the Total Harmonic Distortion (THD) of a waveform.
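The following sketch illustrates two of these features, an FFT amplitude spectrum and a simple THD estimate computed as the ratio of harmonic energy to the fundamental; it is a simplified stand-in for the framework's functions, not their actual implementation.

```python
# Sketch of spectral feature extraction: FFT amplitude spectrum and a simple
# THD estimate (ratio of harmonic energy to the fundamental). Illustrative only.
import numpy as np

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 60 * t) + 0.05 * np.sin(2 * np.pi * 180 * t)  # 60 Hz + 3rd harmonic

spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

fund_bin = np.argmax(spectrum[1:]) + 1          # fundamental peak (skip DC)
harmonics = [np.argmin(np.abs(freqs - k * freqs[fund_bin])) for k in range(2, 6)]
thd = np.sqrt(np.sum(spectrum[harmonics] ** 2)) / spectrum[fund_bin]
print(f"fundamental: {freqs[fund_bin]:.1f} Hz, THD: {100 * thd:.2f}%")
```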
Signal Decomposition: Within the module, there are functions to allow the user to break down (decompose) a signal into its fundamental parts and plot them. It includes Empirical Mode Decomposition (EMD), EMD variations, Singular Spectrum Analysis (SSA), and various Blind Source Separation techniques.
Wavelet Analysis and Transforms: Multiple types of wavelets and chirplets are available in the module. Additionally, the following wavelet-based transforms are in this module: Continuous Wavelet Transform (CWT), Polynomial Chirplet Transform (PCT), Wigner–Ville Distribution (WVD), and SynchroSqueezing Transform (SST).
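A brief continuous wavelet transform sketch using the PyWavelets package is given below; the framework may rely on different packages or wavelet families, so this is only indicative.

```python
# Sketch of a continuous wavelet transform with PyWavelets; wavelet choice and
# scales are illustrative, not the framework's actual settings.
import numpy as np
import pywt

fs = 10_000
t = np.arange(0, 0.5, 1 / fs)
# Chirp-like test signal: instantaneous frequency rises from 60 Hz to 300 Hz.
x = np.sin(2 * np.pi * (60 + 240 * t) * t)

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)
print(coeffs.shape, freqs[:3])  # (scales, samples) time-frequency map
```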

2.7.2. Classification

This module contains all the classification models available in the framework. It also contains the functions for streamlining pipeline building and grid searches. The groupings discussed in this section follow the basic outline of the associated tutorial. These models can also be used for anomaly detection (see Section 2.7.4 below).
Decision Trees, Bagging, and Boosting: In addition to basic decision trees, the framework includes tree-based bagging methods, Random Forest and Extra Trees, and two tree-specific boosting methods, Gradient Boosting and Histogram Gradient Boosting. It also includes a generic bagging model, called Bagging, that can create ensembles of other model types, and a generic boosting model, AdaBoost, that can boost other model types.
Nearest Neighbors: This module contains several nearest neighbor-based models. They are K Nearest Neighbors (KNNs), Nearest Centroid, Radius Nearest Neighbors, and Time-Series K Nearest Neighbors (TS KNNs).
Support Vector Classifiers: The framework contains three support vector-based models: Support Vector, Nu Support Vector, and Time-Series Support Vector.
Other Classifiers: There are multiple other classifier models available, which include but are not limited to Discriminant Analysis, Early Classifiers, and Naive Bayes classifiers.

2.7.3. Clustering

This module contains all the clustering models available in the framework. It also contains the functions for streamlining pipeline building and grid searches. The groupings discussed in this section follow the basic outline of the associated tutorial.
Hierarchical Clustering: In the tutorial, Hierarchical Clustering is used as an introduction to clustering, as these models are simple. Agglomeration and Feature Agglomeration are available in the framework.
K-Means: K-Means is another easy clustering model to understand. The framework includes the following K-Means-based variants: K-Means, Bisecting K-Means, Mini-Batch K-Means, Time-Series K-Means, and K-Shape.
Density-Based Clustering: The density-based clustering algorithms available in the framework are Mean Shift, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and Ordering Points To Identify the Clustering Structure (OPTICS).
Spectral Clustering: Spectral Clustering is the only other clustering algorithm currently available in the framework.

2.7.4. Detection

Unsupervised anomaly detection is divided into outlier detection and novelty detection. In outlier detection, the goal is to identify anomalous data within the existing dataset, usually with the intent to remove them. In novelty detection, the goal is to build a model that can label future inputs as either normal or anomalous.
In this framework, supervised anomaly detection is part of classification. Any classification method can be an anomaly detection method if the normal condition is one class and all other conditions are treated as anomalous.
Outlier Detection: Unsupervised outlier detection is contained in the Detection module. The detectors are Local Outlier Factor and Elliptic Envelope.
Novelty Detection: Unsupervised novelty detection is contained in the Detection module. The detectors are One-Class Support Vector Machine, One-Class Support Vector Machine with Stochastic Gradient Descent, Isolation Forests, and Local Outlier Factor for Novelty Detection.
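The split between the two unsupervised settings can be illustrated with scikit-learn detectors, as in the sketch below; the data and parameter values are placeholders rather than the framework's defaults.

```python
# Sketch of the outlier vs. novelty split using scikit-learn detectors; the
# framework's Detection module wraps similar estimators.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 5))
anomalous = rng.normal(4, 1, size=(10, 5))

# Outlier detection: flag anomalies inside a contaminated dataset.
lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(np.vstack([normal, anomalous]))  # -1 marks outliers

# Novelty detection: fit on clean data, then score new inputs.
novelty = IsolationForest(random_state=0).fit(normal)
print(novelty.predict(anomalous))  # mostly -1 (anomalous)
```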

2.7.5. Regression

This module contains all the regression models available in the framework. It also contains the functions for streamlining pipeline building and grid searches. The groupings discussed in this section follow the basic outline of the associated tutorial. When using the tutorial, regression is generally covered first, since it includes discussions of probability, distributions, loss functions, cross-validation, and regularization.
Generalized Linear Models: The generalized linear models available in the framework are Linear, Gamma, Poisson, and Tweedie.
LARS and LASSO: Least Angle Regression (LARS) and the Least Absolute Shrinkage and Selection Operator (LASSO) are included together. There are several variants of these models in the framework: LARS, LARS with Cross Validation, LASSO, LASSO with Cross Validation, LassoLars, and LassoLars with Cross Validation.
Ridge: The Ridge-based models available are Ridge, Ridge with Cross Validation, and Bayesian Ridge.
Elastic Net Regularization: Elastic-Net Regularization (Elastic-Net) combines the L1 (LASSO) and L2 (Ridge) penalties. There are several variants of Elastic-Net available: Elastic-Net, Elastic-Net with Cross Validation, Multitask Elastic-Net, and Multitask Elastic-Net with Cross Validation.
Support Vector Regression: The support vector regressor models included are Linear Support Vector, Nu Support Vector, and Time-Series Support Vector.
Other Regression Methods: The list of other regressors available includes, but is not limited to, Huber, TheilSen, Random Sample Consensus (RANSAC), Time-Series KNN regressor, and Quantile.
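As a small illustration of the module's scope, the sketch below compares a few of the listed regressors with cross-validation using plain scikit-learn; it does not use the framework's helper functions, and the synthetic data is only a placeholder.

```python
# Sketch comparing a few of the listed regressors with cross-validation;
# illustrative of the Regression module's scope, not its actual helpers.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV, LassoCV, RidgeCV
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=20, noise=5.0, random_state=0)

for name, model in [("LASSO (CV)", LassoCV(cv=5)),
                    ("Ridge (CV)", RidgeCV()),
                    ("Elastic-Net (CV)", ElasticNetCV(cv=5))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f}")
```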

2.7.6. Utilities

This module mostly contains the various plotting functions and some metric calculations. It has a few miscellaneous functions that do not fall into any of the other categories in the framework. The plotting functions primarily used are for confusion matrices and waveform data display.

2.7.7. Framework Tutorial

Each of the core machine learning modules of the framework has a companion tutorial on how to use it; refer to Table 2. Clustering and Unsupervised Anomaly Detection are combined into the Unsupervised tutorials. Supervised Anomaly Detection is covered under the Classification tutorials. The tutorial includes overviews of the concepts and algorithms of machine learning. There is a set of slides and a Jupyter notebook with executable framework code for each module.
The general outline of each tutorial follows the topics grouped in the discussion of each module in the code design section. The outline below highlights the main topics on which each tutorial is focused.
  • DSP: wave generation, noise, filters, transforms, decomposition, power spectral density, wavelet analysis, and transforms.
  • Classification: decision trees, nearest neighbors, support vector machines, bagging, boosting, others.
  • Clustering: hierarchical, k-means, k-shape, density-based, spectral, others.
  • Anomaly Detection: outlier vs. novelty detection, isolation forests, local outlier factor, others.
  • Regression: generalized linear models, lars, lasso, ridge, elastic nets, nearest neighbors, support vector machines, others.

2.8. Design Part 2: Graphical User Interface and Testbed Integration

A free Python package, Streamlit, was chosen for the GUI. Streamlit builds graphical interfaces to Python code that are accessed via a web browser. It was chosen for its ease of use and to minimize coding. Figure 3 illustrates the overall flow of the framework.
The graphical user interface for the framework follows the basic architecture of the core framework code. There are pages for digital signal processing, classification, clustering, detection, and regression. Additionally, there are pages for loading and generating data, for downloading data from InfluxDB and running it through an existing model, and for connecting an active ElectricDot to a trained model and sending the model outputs to the MQTT broker for forwarding to the appropriate InfluxDB server. The framework also allows a model to be connected to historical data from the InfluxDB server. Lastly, there is a page for the tutorials. The SensorAI Framework is integrated with the motor testbed as shown in Figure 4 below.
The framework GUI may be run via the Homepage.py file, which sets the basic layout and color schemes. Each page, except for the Device Connector and Historical Data Connector, has an associated Python file that contains most of the related GUI functionality. This is done to avoid overly large files, for ease of maintenance and understanding. The code design for the framework is illustrated in Figure 5. Within the testbed, the framework is hosted on a server running the Streamlit code.
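For readers unfamiliar with Streamlit, the sketch below shows the general shape of such a page; the titles, widgets, and fallback data are illustrative assumptions and do not reproduce the framework's actual pages.

```python
# Minimal Streamlit page sketch in the spirit of the framework's GUI layout;
# page title, widgets, and demo data are illustrative, not the actual code.
import numpy as np
import pandas as pd
import streamlit as st

st.title("SensorAI: Data")  # each page is a thin GUI layer over the core modules

uploaded = st.sidebar.file_uploader("Load a .csv dataset", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
else:
    # Fall back to a generated demo waveform if nothing is uploaded.
    t = np.arange(0, 1, 1 / 1000)
    df = pd.DataFrame({"t": t, "signal": np.sin(2 * np.pi * 5 * t)})

st.write(df.head())                          # preview of the first rows
st.line_chart(df.set_index(df.columns[0]))   # quick plot of the loaded data
```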

2.8.1. Data

The Data page is the second page in the framework. Here, users can choose to load data from NumPy (.npy) files or comma-separated values (.csv) files. Alternatively, they may generate a single waveform or a set of multiple waveforms. Lastly, users may generate seismocardiography data, either a single example or a dataset of randomly generated examples. The Data page can be seen in Figure 6.

2.8.2. Digital Signal Processing

The DSP page contains all the functions for digital signal processing. It is organized into submenus: Noise, Filters, Decomposition, Time Domain Features, Transforms, and Misc. The DSP menu can be seen in Figure 7.

2.8.3. Classification

The Classification page offers several models that users may train and test. They may use their own data or data generated by the framework. Related models are grouped in columns. The Classification page is shown in Figure 8. Note that any classification model may be used for anomaly detection when training data with both normal and abnormal labels is available.

2.8.4. Clustering

The Clustering page offers several models that users may train and test. They may use their own data or data generated by the framework. Related models are grouped in columns. The Clustering page is shown in Figure 9.

2.8.5. Detection

The Detection page has two sub-menus, Novelty and Outlier. These menus contain a few models that are specific to these tasks. The Detection page is shown in Figure 10.

2.8.6. Regression

The Regression page offers several models that users may train and test. They may use their own data or data generated by the framework. Related models are grouped in columns. The Regression page is shown in Figure 11.

2.8.7. Device Connector

The Device Connector page allows the user to connect a trained model to an online ElectricDot. Models trained and saved by the framework are pickled, and an accompanying YAML file with model information is generated. The user may connect any of their own models, so long as the model is in pickled format and the user has created an appropriately structured accompanying YAML file. The page is shown in Figure 12. When connecting a model to an ElectricDot, a separate process is opened and the main_ai_mqtt.py script is automatically run with the appropriate settings. A separate terminal window opens and displays any text output, so the connection can be monitored from this window.
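A simplified sketch of this kind of connector is shown below; the YAML keys, topic names, and payload format are hypothetical and do not reproduce main_ai_mqtt.py.

```python
# Sketch of connecting a pickled model to a live MQTT data stream, guided by a
# YAML description; names, topics, and YAML keys are hypothetical.
import json
import pickle

import numpy as np
import paho.mqtt.client as mqtt
import yaml

with open("model.yaml") as f:          # hypothetical accompanying YAML file
    cfg = yaml.safe_load(f)            # e.g., topics, window size, label names
with open(cfg["model_path"], "rb") as f:
    model = pickle.load(f)             # model previously trained and pickled

def on_message(client, userdata, msg):
    window = np.asarray(json.loads(msg.payload)["samples"]).reshape(1, -1)
    pred = model.predict(window)[0]
    client.publish(cfg["output_topic"], json.dumps({"label": int(pred)}))

client = mqtt.Client()                 # paho-mqtt 1.x-style constructor
client.on_message = on_message
client.connect(cfg["broker"], cfg.get("port", 1883))
client.subscribe(cfg["input_topic"])
client.loop_forever()
```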

2.8.8. Historical Download

The Historical page allows users to download and pass historical data from the InfluxDB server to a model of their choosing. As with the previous section, the models must be pickled and have their accompanying yaml file. This page is shown in Figure 13. When connecting a model to historical InfluxDB data, a separate process is opened and the main_ai_influx.py script is automatically run with the appropriate settings. A separate terminal window opens and displays any text output. The connection can be monitored from this terminal window.
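The sketch below outlines this flow with the influxdb-client package; the bucket, measurement, field names, and windowing are assumptions rather than the framework's actual main_ai_influx.py.

```python
# Sketch of pulling historical data from InfluxDB and passing it to a pickled
# model; bucket, measurement, and field names are hypothetical.
import pickle

from influxdb_client import InfluxDBClient

flux = '''
from(bucket: "testbed")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "edot" and r._field == "current")
'''

client = InfluxDBClient(url="http://influx.example.local:8086", token="TOKEN", org="ccps")
df = client.query_api().query_data_frame(flux)   # returns a pandas DataFrame
client.close()

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

# Window the stored samples the same way the model was trained, then predict.
values = df["_value"].to_numpy()
windows = values[: len(values) // 5 * 5].reshape(-1, 5)
print(model.predict(windows)[:10])
```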

2.8.9. Tutorials

The Tutorial page includes menus for the slides, which may be viewed on the website, and an option to open the Jupyter notebook tutorials on GitHub in a separate browser tab. All tutorial notebooks are set up to open in Google Colab. See Figure 14.

3. Results

The framework prototype was first used as a teaching aid in the Spring 2024 offering of the UGA class Principles of Cyber-Physical Systems. Using feedback from students and the instructor, the framework design and the tutorials were refined and improved. Some students used the framework to complete homework and project assignments. The framework was used again during the Spring 2025 semester of the same class to further refine and improve the framework and the associated tutorials.
To demonstrate the effectiveness of the framework, a comparison of machine learning models within the framework was performed. Training data was taken from previous experiments on the testbed.

3.1. Data

The dataset used in this study was generated using a real-world cyber-physical security testbed specifically designed for networked electric drive systems. This testbed, referred to as the CCPS testbed, integrates four electric machine drives—comprising two induction machines and two permanent magnet synchronous machines—each controlled by a TI C2000 TMS320F28335 micro-controller operating under a field-oriented control (FOC) strategy. These drives are embedded within a hybrid IT/OT network environment to emulate realistic control and communication scenarios.
To create high-fidelity cyber-attack data, predefined false data injection attacks (FDIAs) and step-stone attacks through software back doors embedded in the digital signal controllers were created and implemented. The FDIAs target current feedback and speed reference variables, introducing precise distortions in the control loop. These attacks were activated under controlled conditions to maintain system safety while ensuring authentic physical responses. Data reflecting system behavior under normal operation and various attack conditions were collected in real time using NI cDAQ-9132 hardware and streamed to an InfluxDB time-series database.
The sample rate was 10 kHz, which is the maximum sample rate of the ElectricDots. Data was collected at varying motor speeds: 400, 800, 1200, 1300, and 1600 rpm. Data was collected with and without loads (1 V, 2 V, 3 V) at each speed, and labels were generated along with each case. The FDIA attacks injected a constant into one or two of the three-phase current readings of the motor's control unit. FDIA type "1" injected a constant into Phase A; FDIA type "2" injected a constant into both Phase A and Phase B. The injected constant was a percentage of the actual amplitude, added to the signal, and varied as follows: 0.5, 1.0, 2.0, 2.5, 3.0, 3.5, 4, 4.5, and 5 percent.
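For illustration, the sketch below simulates the described injection on synthetic three-phase currents; the amplitude and attack percentage are arbitrary, and this is not the testbed's attack code.

```python
# Illustrative simulation of the described FDIA: a constant offset, equal to a
# small percentage of the amplitude, is added to one or two phase currents.
import numpy as np

fs, f0, amp = 10_000, 60.0, 10.0          # sample rate, fundamental, amplitude (A)
t = np.arange(0, 0.1, 1 / fs)
phase_a = amp * np.sin(2 * np.pi * f0 * t)
phase_b = amp * np.sin(2 * np.pi * f0 * t - 2 * np.pi / 3)
phase_c = amp * np.sin(2 * np.pi * f0 * t + 2 * np.pi / 3)

def inject_fdia(current, percent):
    """Add a constant equal to `percent` % of the amplitude to a phase current."""
    return current + (percent / 100.0) * amp

attacked_a = inject_fdia(phase_a, 2.5)    # FDIA type 1: Phase A only
attacked_b = inject_fdia(phase_b, 2.5)    # FDIA type 2 also alters Phase B
```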
Sensor readings from the motors, DC bus, and point of common coupling (PCC) were integrated via a dedicated sensor board, enabling the collection of diverse electrical signatures across all scenarios. The dataset was further enriched with synchronized waveform snapshots and labeled attack scenarios, facilitating both machine learning-based fault classification and cyber-attack diagnostics.
The current operating version of the ElectricDot sends readings to InfluxDB once every second. Each ElectricDot sends the following readings: voltage; current; the PMU data (amplitude, frequency, and phase angle) of the voltage and current; the power, reactive power, apparent power, and power factor; the total harmonic distortion of the current and voltage; and the temperature of the device. This is done so that various features can be used for CCPS studies and so that optimal broadcast rates can be determined. In preparation for the demonstration of the framework, the dataset discussed above was downsampled to 1 sample per second to match the current configuration. Only the current from the PCC was used in this demonstration.

3.2. Framework Model Comparisons

The data was divided into windows of five samples, with each window containing the raw single-phase current waveform data. There were 63 samples in the original data; after breaking the data into windows, there were 378 cases: 78 normal cases, 120 FDIA 1 cases, and 180 FDIA 2 cases.
For the model comparisons, this data was used to train and test a version of each of the models shown in Table 3. Each model was trained and tested ten times with a random train-test split of 0.25, and five-fold cross-validation was used for each run. The Accuracy, weighted Precision, weighted Recall, and weighted F1 Score of each run were recorded; the averages are shown in Table 3, and the standard deviations are shown in Table 4.
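The sketch below outlines this evaluation protocol for a single model type, using placeholder data with the same class proportions; it is illustrative only and not the framework's comparison script.

```python
# Sketch of the evaluation protocol: 5-sample windows, ten random 75/25 splits,
# 5-fold cross-validation within each run, and weighted metrics.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split

def make_windows(signal, labels, width=5):
    n = len(signal) // width
    X = signal[: n * width].reshape(n, width)
    y = labels[: n * width : width]          # one label per window
    return X, y

# Placeholder data standing in for the PCC current and its attack labels.
rng = np.random.default_rng(0)
signal = rng.normal(size=1890)
labels = np.repeat([0, 1, 2], 630)
X, y = make_windows(signal, labels)          # yields 378 windows of width 5

scores = []
for run in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=run)
    search = GridSearchCV(ExtraTreesClassifier(), {"max_depth": [3, 5, 25, 50]}, cv=5)
    search.fit(X_tr, y_tr)
    scores.append(f1_score(y_te, search.predict(X_te), average="weighted"))
print(f"weighted F1: mean={np.mean(scores):.4f}, std={np.std(scores):.4f}")
```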
The best-performing models (scoring 95% or higher) were the tree-based ensembles (Bagging, Random Forest, Extra Trees, AdaBoost, Gradient Boosting, Histogram Gradient Boosting), the Decision Tree, and K Nearest Neighbors (and its time-series variant). An ensemble is a collection of models; each model within an ensemble is called an estimator. Bagging models are composed of multiple estimators working in parallel on the same input; voting, which can be weighted or unweighted, is performed on the outputs of all estimators. Boosting models are serialized ensembles: each estimator makes decisions based on the output of the prior estimator, with the last estimator making the final decision. The bagged tree models are Bagging, Extra Trees, and Random Forest. The boosted models are AdaBoost, Gradient Boosting, and Histogram Gradient Boosting.
For all of the tree-based models, the grid search was performed over maximum tree depths of 3, 5, 25, and 50. The maximum depth of a decision tree is the number of nodes passed through before reaching the deepest leaf in the tree. The numbers of estimators in the ensembles chosen for the grid search were 10, 50, and 100. A sample output for an Extra Trees model is shown in Figure 15.
Nearest neighbor models assume that similar things exist in close proximity to each other. The KNN model compares a new input to all previously assigned inputs and takes the mode of the k closest inputs' assigned labels, where closeness is a distance measure, usually Euclidean distance. The KNN and TS KNN are essentially the same model; however, the TS KNN uses a similarity measure called the Global Alignment Kernel (GAK) that is specific to time-series data [22]. For these two models, a grid search for k was performed over the following values: 3, 5, and 10.

4. Discussion

The primary novelty of the framework is its incorporation of digital signal processing, a feature distinctly lacking in the other free Python-based frameworks. Secondly, the SensorAI Framework allows models to be connected directly to live smart devices and databases. Aside from H2O AutoML, the SensorAI Framework is the only reviewed product with a graphical user interface, and its underlying code can also be accessed directly if a user prefers. Python libraries not currently incorporated into the framework can be added; sktime [23] is a good candidate for later additions, as it focuses on time-series data and uses a scikit-learn-style interface. While there are products that could be purchased with more capabilities, the SensorAI Framework is a well-suited, free, open-source resource for quickly learning and implementing machine learning in Python.
The framework was successfully demonstrated using testbed data for model comparison and evaluation. Using the framework, models were created that could correctly classify normal operation and FDIA attacks on the motor operating at different speeds. The best-performing models were Bagging with decision trees and Random Forest.
Additionally, its tutorial component was successfully used as a teaching tool in a UGA graduate class. Some students used the framework for relevant projects. Feedback was positive from both classes in which the framework was used.
However, the framework is currently limited to the machine learning models that have been integrated into it. Other frameworks include deep learning, which this framework lacks, so it is not suitable for those wishing to use deep learning. The speed at which the framework can train and test models is limited by the computing resources accessible to student researchers.

Author Contributions

Conceptualization, S.C. and H.Y.; methodology, S.C.; software, S.C.; validation, S.C. and W.S.; formal analysis, S.C.; investigation, S.C., H.Y., and S.W.; resources, S.C., J.Y., and W.S.; data curation, H.Y. and S.W.; writing—original draft preparation, S.C.; writing—review and editing, S.C., H.Y., S.W., and W.S.; visualization, S.C.; supervision, J.Y., P.M., and W.S.; project administration, J.Y. and W.S.; funding acquisition, J.Y., P.M., and W.S. All authors have read and agreed to the published version of the manuscript.

Funding

Our research is partially supported by NSF-1940864, NSF-2019311, NSF-2102032, NSF-2318809, NSF-2324389, NSF-2312974, NSF-2414706, Georgia Research Alliance, DOE-EE0009026, DOD-FA8571-21-C-0020, NIH-1R01HL172291-01.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in the SensorWebEdu/SensorAI repository at https://github.com/SensorWebEdu/SensorAI.git (accessed on 27 September 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CCPS    Center for Cyber-Physical Systems
CWT    Continuous Wavelet Transform
DBSCAN    Density-Based Spatial Clustering of Applications with Noise
DSP    Digital Signal Processing
ECU    Electronic Control Unit
eDot    ElectricDot
Elastic-Net    Elastic-Net Regularization
EMD    Empirical Mode Decomposition
FDIA    False Data Injection Attack
FFT    Fast Fourier Transform
FOC    Field-Oriented Control
GAK    Global Alignment Kernel
GPU    Graphics Processing Unit
GUI    Graphical User Interface
IT    Information Technology
KNN    K Nearest Neighbor
LARS    Least Angle Regression
LASSO    Least Absolute Shrinkage and Selection Operator
MQTT    Message Queuing Telemetry Transport
OPTICS    Ordering Points To Identify the Clustering Structure
OT    Operational Technology
PCC    Point of Common Coupling
PCT    Polynomial Chirplet Transform
RANSAC    Random Sample Consensus
SCG    Seismocardiography
SSA    Singular Spectrum Analysis
SST    SynchroSqueezing Transform
STFT    Short-Time Fourier Transform
THD    Total Harmonic Distortion
TS KNN    Time-Series K Nearest Neighbor
UGA    University of Georgia
WVD    Wigner–Ville Distribution

References

  1. Weerts, H.J.P.; Pechenizkiy, M. Teaching responsible machine learning to engineers. In The Second Teaching Machine Learning and Artificial Intelligence Workshop; PMLR: New York, NY, USA, 2022; pp. 40–45. [Google Scholar]
  2. Lasi, H.; Fettke, P.; Kemper, H.G.; Feld, T.; Hoffmann, M. Industry 4.0. Bus. Inf. Syst. Eng. 2014, 6, 239–242. [Google Scholar] [CrossRef]
  3. Lu, Y. The current status and developing trends of industry 4.0: A review. Inf. Syst. Front. 2025, 27, 215–234. [Google Scholar]
  4. Ambadekar, P.K.; Ambadekar, S.; Choudhari, C.; Patil, S.A.; Gawande, S. Artificial intelligence and its relevance in mechanical engineering from Industry 4.0 perspective. Aust. J. Mech. Eng. 2025, 23, 110–130. [Google Scholar] [CrossRef]
  5. Selmy, H.A.; Mohamed, H.K.; Medhat, W. A predictive analytics framework for sensor data using time series and deep learning techniques. Neural Comput. Appl. 2024, 36, 6119–6132. [Google Scholar] [CrossRef]
  6. Putnik, G.D.; Ferreira, L.; Lopes, N.; Putnik, Z. What is a Cyber-Physical System: Definitions and Models Spectrum. FME Trans. 2019, 47, 663–674. [Google Scholar] [CrossRef]
  7. Chen, Y.; Tu, L. Density-based clustering for real-time stream data. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, CA, USA, 12–15 August 2007; pp. 133–142. [Google Scholar]
  8. Coshatt, S.J.; Li, Q.; Yang, B.; Wu, S.; Shrivastava, D.; Ye, J.; Song, W.; Zahiri, F. Design of cyber-physical security testbed for multi-stage manufacturing system. In Proceedings of the GLOBECOM 2022-2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 1978–1983. [Google Scholar]
  9. Yang, H.; Yang, B.; Coshatt, S.; Li, Q.; Hu, K.; Hammond, B.C.; Ye, J.; Parasuraman, R.; Song, W. Real-world Cyber Security Demonstration for Networked Electric Drives. IEEE J. Emerg. Sel. Top. Power Electron. 2025, 13, 4659–4668. [Google Scholar] [CrossRef]
  10. Microsoft Corporation. Microsoft Azure Machine Learning Studio. 2024. Available online: https://azure.microsoft.com/en-us/products/machine-learning (accessed on 22 September 2025).
  11. Frank, E.; Hall, M.A.; Witten, I.H. The WEKA Workbench. Online Appendix for “Data Mining: Practical Machine Learning Tools and Techniques”; Morgan Kaufmann: San Francisco, CA, USA, 2016. [Google Scholar]
  12. LeDell, E.; Poirier, S. H2O AutoML: Scalable Automatic Machine Learning. In Proceedings of the 7th ICML Workshop on Automated Machine Learning (AutoML), Online, 17–18 July 2020. [Google Scholar]
  13. Hernandez, J.G.; Saini, A.K.; Ghosh, A.; Moore, J.H. The tree-based pipeline optimization tool: Tackling biomedical research problems with genetic programming and automated machine learning. Patterns 2025, 6, 101314. [Google Scholar] [CrossRef] [PubMed]
  14. Erickson, N.; Mueller, J.; Smola, A.J.; Weil, S.; Chan, J.C.W.; Shmakov, A.; Shchur, O.; Shi, X.; Huang, E.W.; Lorraine, J.; et al. AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data. arXiv 2020, arXiv:2003.06505. [Google Scholar]
  15. Feurer, M.; Klein, A.; Eggensperger, K.; Springenberg, J.; Blum, M.; Hutter, F. Efficient and robust automated machine learning. Adv. Neural Inf. Process. Syst. 2015, 28, 2962–2970. [Google Scholar]
  16. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  17. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef]
  18. Tavenard, R.; Faouzi, J.; Vandewiele, G.; Divo, F.; Androz, G.; Holtz, C.; Payne, M.; Yurchak, R.; Rußwurm, M.; Kolar, K.; et al. Tslearn, A Machine Learning Toolkit for Time Series Data. J. Mach. Learn. Res. 2020, 21, 1–6. [Google Scholar]
  19. Hunter, J.D. Matplotlib: A 2D graphics environment. Comput. Sci. Eng. 2007, 9, 90–95. [Google Scholar] [CrossRef]
  20. Plotly Technologies Inc. Collaborative Data Science; Plotly Technologies Inc.: Montréal, QC, Canada, 2015. [Google Scholar]
  21. Song, W.; Coshatt, S.; Zhang, Y.; Chen, J. Sensor Data Science and AI Tutorial. 2024. Available online: https://github.com/SensorWebEdu/SensorAI.git (accessed on 27 September 2025).
  22. Cuturi, M. Fast global alignment kernels. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), Bellevue, WA, USA, 28 June–2 July 2011; pp. 929–936. [Google Scholar]
  23. Löning, M.; Bagnall, A.; Ganesh, S.; Kazakov, V.; Lines, J.; Király, F.J. sktime: A unified interface for machine learning with time series. arXiv 2019, arXiv:1909.07872. [Google Scholar] [CrossRef]
Figure 1. Picture of an ElectricDot smart plug.
Figure 2. An example of framework code. Extra Trees, Random Forest, and AdaBoost decision tree classifiers are created and then placed in a grid search. Note that hyperparameter values for the grid search are placed in lists.
Figure 3. The overall flow of the framework using the GUI.
Figure 4. SensorAI Framework and testbed integration: The SensorAI Framework is hosted on a server that resides in UGA's secure network. All PCs, servers, and brokers are also hosted on UGA's secure network. The motors, the PCs used for motor control, and the ElectricDot are all physically located in the same lab on campus. Note that the ElectricDot smart plug is connected to the PCC to which all four motors are connected.
Figure 5. Framework Code Design: Homepage.py is the main Streamlit page. The functionality is divided into nine subpages. Each subpage is titled with a number followed by an underscore. Streamlit uses this numbering for page ordering. Each page is a graphical interface to the underlying functional code or to the tutorials.
Figure 6. The image is of the Data page after a demo dataset has been loaded. Note that options are selected in the left sidebar. Once data is selected, a preview of the first five entries is displayed. A data summary is displayed below it. While not shown in the screenshot, the first row of data is also plotted.
Figure 7. The Signal Processing page is where digital signal processing techniques may be applied to a loaded dataset. Here, basic Gaussian white noise is added to the demonstration dataset. Note that the first entry of the dataset is plotted. Both the original signal and the signal with added noise are shown.
Figure 8. The Classification page is where classification algorithms may be created and queued to run on a loaded dataset. Multiple inputs for each hyperparameter of multiple models may be created and run through a grid search. The best model of each type placed in the queue will have results displayed. Here, a confusion matrix and the first three samples of each class are displayed. Note that plots in black are correctly classified, and those in red are incorrectly classified.
Figure 9. The Clustering page is where clustering algorithms may be created and queued to run on a loaded dataset. Multiple inputs for each hyperparameter of multiple models may be created and run through a grid search. The best model of each type placed in the queue will have results displayed. Here, the first three samples of each cluster are displayed. Note that plots in black are correctly clustered, and those in red are incorrectly clustered.
Figure 10. The Detection page is where outlier and novelty detection algorithms may be created and queued to run on a loaded dataset. Multiple inputs for each hyperparameter of multiple models may be created and run through a grid search. The best model of each type placed in the queue will have results displayed. Here, a confusion matrix and the first three samples of each class are displayed, where the first column is normal (sine waves) and the second column is abnormal (square waves). Note that plots in black are correctly identified, and those in red are incorrectly identified.
Figure 11. The Regression page is where regression algorithms may be created and queued to run on a loaded dataset. Multiple inputs for each hyperparameter of multiple models may be created and run through a grid search. The best model of each type placed in the queue will have results displayed. Here, a scatterplot is displayed. Note that predicted values are blue and the labels (true values) are red.
Figure 12. The Device Connector page is where a trained model may be connected to a live ElectricDot (or other smart device) data stream. The model and the smart device each have an associated .yaml file that contains the information that the framework needs to make the connection. Based on the information in the .yaml file, the model output is automatically routed to the appropriate InfluxDB server and table. From there, Grafana may be used to visualize the smart device and model outputs in real time.
Figure 13. The Historical Data page is where a trained model may be connected to historical data stored in the testbed's InfluxDB server. The historical data is queried and sent to the model. It otherwise functions the same as the Device Connector discussed above.
Figure 14. The Tutorial page allows users to view the framework's associated tutorial slides. Here, the classification tutorial title slide is shown.
Figure 15. This image is an example of the classification output for an Extra Trees model. The parameter settings and the metrics are shown as text. A confusion matrix is also shown.
Table 1. Core modules.

Module File Name | Module Information
classification.py | Classification, Supervised Anomaly Detection
clustering.py | Clustering
detection.py | Unsupervised Anomaly Detection
dsp.py | Digital Signal Processing
regression.py | Regression
utils.py | Plotting and other Miscellaneous functions

All module files are in the “lib” folder in GitHub.
Table 2. Tutorials.

Module | Tutorial Files
Classification | classification_slides.pdf, classification_tutorial.ipynb
Clustering | unsupervised_slides.pdf, unsupervised_tutorial.ipynb
DSP | dsp_slides.pdf, dsp_tutorial.ipynb
Regression | regression_slides.pdf, regression_tutorial.ipynb
Supervised Detection | classification_slides.pdf, classification_tutorial.ipynb
Unsupervised Detection | unsupervised_slides.pdf, unsupervised_tutorial.ipynb

All tutorial files are in the “tutorial” folder in GitHub.
Table 3. The average scores for each model over ten runs are shown in the table below. Each model was scored on Accuracy, Precision, Recall, and F1 Score. Note that the weighted Precision, Recall, and F1 Score are presented here due to imbalances in the number of cases of each class in the data.

Model Type | Accuracy | Precision | Recall | F1 Score
Bagging | 0.9989474 | 0.9989864 | 0.9989474 | 0.9989519
Random Forest | 0.9968421 | 0.9970276 | 0.9968421 | 0.9968620
Extra Trees | 0.9957895 | 0.9959658 | 0.9957895 | 0.9958041
AdaBoost | 0.9936842 | 0.9937881 | 0.9936842 | 0.9936572
Decision Tree | 0.9926316 | 0.9927936 | 0.9926316 | 0.9926280
K Nearest Neighbors (KNNs) | 0.9905264 | 0.9908982 | 0.9905264 | 0.9904392
Time-Series KNN | 0.9905264 | 0.9908982 | 0.9905264 | 0.9904392
Gradient Boost | 0.9884211 | 0.9899116 | 0.9884210 | 0.9884201
Histogram Grad. Boost | 0.9747369 | 0.9759421 | 0.9747369 | 0.9746843
Non-Myopic Early | 0.9347368 | 0.9418987 | 0.9347368 | 0.9350837
Radius NN | 0.8663157 | 0.8777572 | 0.8663157 | 0.8675915
Support Vector Classifier (SVC) | 0.7389473 | 0.7742732 | 0.7389473 | 0.7320654
Time-Series SVC | 0.7031580 | 0.7500177 | 0.7031580 | 0.6967196
Multilayer Perceptron | 0.6557896 | 0.6283973 | 0.6565679 | 0.6264954
Linear Discriminant Analysis | 0.6147367 | 0.4935034 | 0.6147367 | 0.5459568
Gaussian Naive-Bayes | 0.6063158 | 0.6692445 | 0.6063158 | 0.5770321
Quadratic Discriminant Analysis | 0.5989473 | 0.5934619 | 0.5989473 | 0.5924451
Gaussian Process | 0.5852632 | 0.5035906 | 0.5852632 | 0.5129866
Nearest Centroid | 0.5421051 | 0.6064616 | 0.5421051 | 0.5019513
Passive-Aggressive | 0.5010525 | 0.3211840 | 0.5010525 | 0.3677192
Bernoulli Naive-Bayes | 0.4842105 | 0.2359225 | 0.4842105 | 0.3168296
Table 4. The standard deviation for each model of ten runs is shown in the table below. Each model was scored on Accuracy, Precision, Recall, and the F1 Score.

Model Type | Accuracy | Precision | Recall | F1 Score
Bagging (trees) | 0.0033286 | 0.0032053 | 0.0033286 | 0.0033144
Random Forest | 0.0071048 | 0.0066445 | 0.0071048 | 0.0070537
Extra Trees | 0.0101694 | 0.0097522 | 0.0101694 | 0.0101381
AdaBoost (trees) | 0.0113155 | 0.0111929 | 0.0113155 | 0.0113950
Decision Tree | 0.0131754 | 0.0129107 | 0.0131754 | 0.0131879
KNN | 0.0195045 | 0.0186991 | 0.0195045 | 0.0198083
Time-Series KNN | 0.0195045 | 0.0186991 | 0.0195045 | 0.0198083
Gradient Boost | 0.0207285 | 0.0174075 | 0.0207286 | 0.0208220
Histogram Grad. Boost | 0.0281578 | 0.0265375 | 0.0281578 | 0.0282982
Non-Myopic Early | 0.0429161 | 0.0366547 | 0.0429161 | 0.0423254
Radius NN | 0.0732739 | 0.0661957 | 0.0732739 | 0.0717121
SVC | 0.0396352 | 0.0312425 | 0.0396352 | 0.0435080
Time-Series SVC | 0.0640867 | 0.0721340 | 0.0640867 | 0.0626575
Multilayer Perceptron | 0.0543689 | 0.1234110 | 0.0538698 | 0.0796094
Linear Discriminant Analysis | 0.0358514 | 0.0298841 | 0.0358514 | 0.0334393
Gaussian Naive-Bayes | 0.0484160 | 0.0671592 | 0.0484160 | 0.0472427
Quadratic Discriminant Analysis | 0.0395264 | 0.0389216 | 0.0395264 | 0.0387197
Gaussian Process | 0.0394486 | 0.0481147 | 0.0394486 | 0.0474366
Nearest Centroid | 0.0228744 | 0.0277980 | 0.0228744 | 0.0323248
Passive-Aggressive | 0.1406311 | 0.2279550 | 0.1406311 | 0.1838261
Bernoulli Naive-Bayes | 0.0403126 | 0.0394275 | 0.0403126 | 0.0441936