Atmosphere
  • Article
  • Open Access

8 November 2025

Air Sensor Network Analysis Tool: R-Shiny Application

1 United States Environmental Protection Agency, Office of Research and Development, Research Triangle Park, NC 27711, USA
2 General Dynamics Information Technology, Falls Church, VA 22042, USA
3 Institute for the Environment, University of North Carolina, Chapel Hill, NC 27516, USA
4 Applied Research Associates, Raleigh, NC 27615, USA
This article belongs to the Special Issue Emerging Technologies for Observation of Air Pollution (2nd Edition)

Abstract

Poor air quality can harm human health and the environment. Air quality data are needed to understand and reduce exposure to air pollution. Air sensor data can supplement national air monitoring data, allowing for a better understanding of localized air quality and trends. However, these sensors can have limitations, biases, and inaccuracies that must first be controlled to generate data of adequate quality, and analyzing sensor data often requires extensive analysis effort. To address these issues, an R-Shiny application has been developed to assist air quality professionals in (1) understanding air sensor data quality through comparison with nearby ambient air reference monitors, (2) applying basic quality assurance and quality control, and (3) understanding local air quality conditions. This tool provides agencies with the ability to more quickly analyze and utilize air sensor data for a variety of purposes while increasing the reproducibility of analyses. While more in-depth custom analysis may still be needed for some sensor types (e.g., advanced correction methods), this tool provides an easy starting place for analysis. This paper highlights two case studies using the tool: one exploring PM2.5 sensor performance during wildfire smoke impacts in the Midwestern United States, and one exploring the performance of O3 sensors over a full year.

1. Introduction

Poor air quality is associated with a variety of negative health effects [1]. Air quality data are needed to understand local conditions and reduce exposure to air pollution [2]. In the United States (U.S.), air quality is measured by ambient air monitors operated by state, local, and tribal air agencies [3]. Regulatory air monitoring includes federal reference method (FRM) and federal equivalent method (FEM) measurements [3]. Measurement technologies differ by pollutant, but FRM measurements are considered the gold standard for accuracy, with FEM measurements being equivalent in accuracy. These measurements provide information on the attainment of air quality standards in the U.S. along with a variety of other uses including real-time air quality communication to the public.
Air monitors do not capture all variation in air quality, especially during events like wildfire smoke impacts, due to significant spatial and temporal variation [4,5]. Recent efforts aim to supplement the national monitoring network with localized air sensor data to investigate variations in air quality at neighborhood scales [6,7]. Air sensors typically cost one to two orders of magnitude less than conventional air monitors (e.g., USD 10 to USD 10,000) and are designed to be compact, enabling additional measurements at more locations. However, air sensor data may be noisy [8], biased, or inaccurate [9]. Furthermore, sensor performance may vary over time [10,11], with concentration range (e.g., nonlinear response), or with the operating environment (e.g., high relative humidity (RH) or temperature) [12,13,14].
Limitations may be pollutant-, sensor technology- (e.g., optical, electrochemical, metal oxide), or manufacturer-specific. Quality assurance (QA) and quality control (QC) steps must be tailored to the specific sensor’s needs [15]. Particulate matter (PM) sensors are widely used, and fine particulate matter (PM2.5) is often highly correlated with reference measurements [9,16,17]. However, many of these sensors (e.g., Plantower, Sensirion) have limitations and only correctly measure particles from roughly 0.1 or 0.3 µm up to 1 µm, with varying success depending on the sensor design [18,19,20,21,22]. While many of these sensors report particle counts and particles with diameters of 10 µm or less (PM10) in addition to PM2.5, these outputs are often inaccurate [18,20,21,23]. Some PM sensors using optical particle counter (OPC) technology take PM10 and coarse particle (PM10–2.5) measurements that are strongly correlated with those of reference monitors [19,20,24]. Users must carefully consider specific sensor limitations.
The U.S. Environmental Protection Agency (EPA) has released air sensor performance targets and testing protocols for ozone (O3) [25], nitrogen dioxide (NO2), carbon monoxide (CO), sulfur dioxide (SO2) [26], PM2.5 [27], and PM10 [28]. These testing protocols recommend evaluating air sensors collocated with (i.e., running alongside) FEM and FRM measurements. Collocation studies over the past decade or more have provided extensive performance information about a variety of air sensor types, allowing manufacturers, researchers, and other sensor users to better understand and address sensor limitations [16,29,30,31,32]. In addition, correction equations are often required to generate accurate air pollutant concentration estimates [33]. Collocations have been used to develop local corrections for sensors [34,35,36,37,38,39] and to develop larger-scale (i.e., nationwide) air sensor corrections when multiple collocations are considered in aggregate [40]. In recent years, sensor evaluations and corrections have increasingly been performed by opportunistically comparing sensors and monitors within a certain distance for local [41,42] or larger-scale (e.g., multi-state, regional) corrections [43,44,45,46], with some more complicated methods considering longer distances if sensor–monitor pairs are in areas with the same land use [47,48,49].
Recent work has explored the greatest needs related to air sensors. The U.S. EPA engaged in dialog with EPA Regions and state, local, and tribal air monitoring agencies and found that many data users face challenges related to data management, analysis, and visualization [50]. The proposed solutions to these challenges included data analysis and visualization tools [51]. In a recent review, authors highlighted the need to bring together multiple sources of air quality information for comparison and the importance of data scientists partnering with data users when developing new tools [52]. Users without extensive coding experience face challenges in analyzing air sensor data due to the complexities of accounting for sensor limitations, and many agencies are struggling with increased data volumes and more community questions, all while staff time holds steady or decreases.
Numerous tools exist for processing and analyzing air quality data, including R (v 4.5.1) [53,54,55,56,57,58,59], Python (v 3.13.7) [60,61,62], web-based tools [63,64], and Excel [65] tools in addition to commercial paid solutions. Many existing air data analysis tools require coding experience for operation in an open-source environment [54,55,56,57]; target specific manufacturers’ sensors [53,55]; or focus on evaluation based on a single collocation site [62,65]. An analysis tool is needed to more easily aggregate data from multiple air quality data sources, perform standard QC, identify nearby sensors spatially for comparison, and create comprehensive visualizations.
In response to these needs, the Air Sensor Network Analysis Tool (ASNAT) has been developed as an R-Shiny application designed to empower air quality professionals. This tool facilitates the evaluation of air sensor data quality through comparison with nearby measurements, supports the application of basic QA and QC, and enhances the understanding of local air quality conditions. By streamlining the analysis process, ASNAT enables agencies to quickly harness air sensor data for diverse applications, fostering reproducibility and reliability in air quality analysis. This tool was developed alongside the Air Sensor Data Unifier (ASDU) [66] which helps users to reformat, average, and apply basic quality control to air sensor data for later use in ASNAT and other tools.
This paper presents a detailed examination of ASNAT’s capabilities, illustrated through two case studies: (1) exploring sensor performance during the Canadian wildfire smoke impacts in the Midwestern U.S. in June 2023 and (2) exploring the performance of O3 sensors at several sites within the U.S. from August 2019 to July 2020. Through these case studies, we demonstrate how ASNAT can enhance air quality monitoring and response strategies, potentially contributing to reduced exposure to air pollution, improved public health outcomes, and environmental protection.

2. Materials and Methods

2.1. Overview of ASNAT Functionality

ASNAT is an R-Shiny [67,68] application integrating data from multiple air quality networks (e.g., the national monitoring network, sensor networks, meteorological stations) to provide a broader representation of local air quality. The tool was developed mostly in R, with some code in C++ to enhance performance. It was designed to support air quality professionals with a base knowledge of air quality analysis in developing sensor data corrections, applying data flagging, understanding the performance of air sensor networks, improving sensor data quality, and improving comparability between networks so that users can better understand local air quality, including during extreme events like wildfires, dust storms, and fireworks.
Currently, ASNAT has six tabs providing different ways to interact with the data: map, flagging, tables, plots, corrections, and network summary. After selecting data, users can move either to the flagging tab or to the tables tab. In many cases, users may need to generate tables and plots before deciding what kind of flagging and removal of problematic data is needed.
The user manual, which can be downloaded along with the code, provides examples and walks through how to proceed with data analysis on each tab. Virtual user training has been provided to many users, and the training materials are available from the ASNAT team by request (i.e., email). Additional materials are under development.

2.2. Data for ASNAT

Users can load data from a variety of sources into ASNAT (version released 27 October 2025), including data from EPA’s Remote Sensing Information Gateway (RSIG) (https://www.epa.gov/hesc/web-access-rsig-data, accessed on 28 February 2025) [69], from PurpleAir (Draper, UT, USA) offline csv files, or from standard format files (as documented in the user manual), making it possible to use this tool for a wide variety of data types. Data can be loaded as hourly or 24 h averages, and NowCast averaging can be applied to visualizations and statistics within ASNAT (https://usepa.servicenowservices.com/airnow/en/how-is-the-nowcast-algorithm-used-to-report-current-air-quality?id=kb_article&sys_id=798ba26c1b1a5ed079ab0f67624bcb6d, accessed on 20 June 2025). The tool was designed and tested to work with a variety of sensor and monitoring data, with a focus on National Ambient Air Quality Standards (NAAQS) pollutants (i.e., PM2.5, PM10, O3, NO2, CO, and SO2).
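For readers unfamiliar with NowCast, the PM version is a weighted average of the 12 most recent hourly values, with the weight factor set by how variable those hours are (floored at 0.5) so that recent hours dominate when concentrations change quickly. The sketch below is a minimal R illustration of that standard weighting, not ASNAT’s internal code; the function name and input conventions are ours.

```r
# Minimal sketch of the PM NowCast weighting (assumes the standard algorithm):
# conc holds the 12 most recent hourly values, most recent first (NA = missing).
nowcast_pm <- function(conc) {
  stopifnot(length(conc) == 12)
  # NowCast is only valid if at least 2 of the 3 most recent hours are present
  if (sum(!is.na(conc[1:3])) < 2) return(NA_real_)
  rng <- range(conc, na.rm = TRUE)               # assumes a nonzero maximum
  w <- max(1 - (rng[2] - rng[1]) / rng[2], 0.5)  # weight factor, floored at 0.5
  i <- which(!is.na(conc))                       # skip missing hours
  sum(w^(i - 1) * conc[i]) / sum(w^(i - 1))      # weighted average, recent first
}

# Example: a recent spike is weighted more heavily than the older, cleaner hours
nowcast_pm(c(35, 40, 12, 10, 11, 9, 10, 12, 11, 10, 9, 10))
```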
Standard format files can be generated using ASDU [66] or through other means. ASDU allows the user to reformat text or csv data files from air sensors and air monitors, which currently come in a wide variety of formats for a variety of pollutants, into the standard ASNAT format by specifying column contents, time stamp formatting, sensor locations (i.e., latitude and longitude), and other formatting features. Users can save reformatting information to simplify loading data of the same type in the future [66]. During ASDU design and testing, 20 text-based file formats from 17 different manufacturers were used to ensure ASNAT and ASDU could be used for a wide variety of sensor types [66]. If users can convert data into the standard ASNAT format using ASDU or other means, then they can analyze it in ASNAT, making ASNAT a flexible tool for analyzing data from various manufacturers, sensor models, and pollutants.
Users can select data from RSIG (under “Load Web” in Figure 1), which efficiently retrieves data based on a geographic bounding box. Users can retrieve AirNow (https://www.airnow.gov/, accessed on 20 October 2025), EPA’s Air Quality System (AQS) (https://www.epa.gov/aqs, accessed on 20 October 2025), Meteorological Aerodrome Report (METAR) (https://aviationweather.gov/data/metar/, accessed on 20 October 2025), and public crowdsourced PurpleAir data (https://www2.purpleair.com/, accessed on 20 October 2025). A full list of variables currently available in ASNAT through RSIG is included in the SI (Table S1). Users select the time period and pollutant or other variables of interest. After users load data, they can further refine their dataset of interest on the flagging tab (e.g., remove specific time periods, remove specific sensor or monitor IDs).
Figure 1. A screenshot of ASNAT, showing data selection on the left and loaded data displayed on the map on the right. The display includes AQS and corrected PurpleAir sensor data across the Midwestern U.S. and has multiple tabs to further quality-assure, summarize, and visualize the data.
Data from AirNow and AQS available in ASNAT from RSIG include PM2.5, PM10, RH, temperature, pressure, O3, NO2, CO, and SO2. AirNow data are used to report the air quality index (AQI) to the public in near real time with data delivered at the end of every hour. In contrast, AQS data undergo more extensive quality control and are certified every quarter. This rigorous process ensures that AQS data can be used for regulatory purposes, such as determining the attainment of the NAAQS (https://www.airnow.gov/about-the-data/, accessed on 17 September 2025).
While the AQS and AirNow datasets are similar, there are some differences in availability and content. AQS data typically become available a few months after collection due to the data certification process. When a user selects data from AirNow or AQS through the “Load Web” selection in ASNAT, RSIG requests the most recent data from AQS and AirNow, enabling access to the latest information. In some cases, more data may be included in the AirNow dataset, as not all monitors also report to AQS. Depending on the application, AirNow or AQS data may be more appropriate, and users can easily run analysis with either and compare. In addition, users can specify which of the AirNow/AQS parameter codes for PM2.5 they would like, including 88101 (i.e., federal equivalent methods used for determining compliance with the PM2.5 National Ambient Air Quality Standards), 88500 (i.e., total atmospheric PM2.5), 88501 (i.e., PM2.5 raw data), or 88502 (i.e., Acceptable PM2.5 AQI and Speciation Mass).
PurpleAir sensor owners can make the data public or keep it private. Public sensor data are taken from PurpleAir’s application programming interface (API) to EPA’s RSIG and are accessible in ASNAT. All PurpleAir variables are available in RSIG and ASNAT, although some are not accurate (e.g., PM10) [15,18,20,21,23].
Sensor owners include individuals; community groups; state, local, and tribal air agencies; schools; and others. PurpleAir sensors connected to Wi-Fi send data to the PurpleAir API every 2 min. These data are then collected from PurpleAir’s API and saved in RSIG every 2 min and are immediately available within ASNAT. PurpleAir hourly and daily aggregate files (i.e., one file per day per variable at 1 h and 24 h averages) are created a few minutes after midnight each day, so they lag real time by up to 24 h. This means that requests for pre-computed pm25_corrected_hourly data (as ASNAT provides) return data only for the previous day and earlier. This delay is due to the considerable processing time needed to create these (and other) archived files, so processing begins at midnight each day.
The PurpleAir dataset includes the corrected PurpleAir PM2.5 data (PurpleAir.pm25_corrected), corrected in a similar way to the PurpleAir data on the AirNow Fire and Smoke Map (fire.airnow.gov, accessed on 28 January 2025), including completeness criteria (75% for daily or hourly averages), exclusion when A and B channel measurements disagree (i.e., data are excluded if both |A − B| > 5 µg/m3 and (|A − B| × 2)/(A + B) > 70%), and the application of the U.S.-wide correction, which accounts for nonlinearity at high smoke concentrations (Equation (1)). Here, PAcfatm is the PurpleAir PM2.5 value reported with the atm correction factor, and RH is the relative humidity. This method may be updated in the future if improvements are made.
PAcfatm < 30:
PM2.5 = [0.524 × PAcfatm] − [0.0862 × RH] + 5.75
30 ≤ PAcfatm < 50:
PM2.5 = [0.786 × (PAcfatm/20 − 3/2) + 0.524 × (1 − (PAcfatm/20 − 3/2))] × PAcfatm − [0.0862 × RH] + 5.75
50 ≤ PAcfatm < 210:
PM2.5 = [0.786 × PAcfatm] − [0.0862 × RH] + 5.75
210 ≤ PAcfatm < 260:
PM2.5 = [0.69 × (PAcfatm/50 − 21/5) + 0.786 × (1 − (PAcfatm/50 − 21/5))] × PAcfatm − [0.0862 × RH × (1 − (PAcfatm/50 − 21/5))] + [2.966 × (PAcfatm/50 − 21/5)] + [5.75 × (1 − (PAcfatm/50 − 21/5))] + [8.84 × 10⁻⁴ × PAcfatm² × (PAcfatm/50 − 21/5)]
260 ≤ PAcfatm:
PM2.5 = 2.966 + [0.69 × PAcfatm] + [8.84 × 10⁻⁴ × PAcfatm²]
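Equation (1) translates directly into a short piecewise function. The following R sketch is based only on the equations printed above; the function and argument names are ours, and this is not ASNAT’s internal implementation.

```r
# Minimal sketch of the U.S.-wide PurpleAir PM2.5 correction (Equation (1)).
# pa = PurpleAir PM2.5 with the atm correction factor (ug/m3); rh = RH (%).
correct_purpleair_pm25 <- function(pa, rh) {
  w1 <- pa / 20 - 3 / 2   # ramps 0 -> 1 across 30 <= pa < 50
  w2 <- pa / 50 - 21 / 5  # ramps 0 -> 1 across 210 <= pa < 260
  ifelse(pa < 30,
         0.524 * pa - 0.0862 * rh + 5.75,
  ifelse(pa < 50,
         (0.786 * w1 + 0.524 * (1 - w1)) * pa - 0.0862 * rh + 5.75,
  ifelse(pa < 210,
         0.786 * pa - 0.0862 * rh + 5.75,
  ifelse(pa < 260,
         (0.69 * w2 + 0.786 * (1 - w2)) * pa - 0.0862 * rh * (1 - w2) +
           2.966 * w2 + 5.75 * (1 - w2) + 8.84e-4 * pa^2 * w2,
         2.966 + 0.69 * pa + 8.84e-4 * pa^2))))
}

correct_purpleair_pm25(100, 40)  # ~80.9 ug/m3 for a 100 ug/m3 reading at 40% RH
```

The two transition ranges blend the neighboring slopes linearly so that the corrected values change smoothly at 30–50 and 210–260 µg/m3.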
Users must supply a valid PurpleAir API read key (available directly from PurpleAir) to access this data. Users can also load data from standard format text files, which can be generated using the ASDU tool (https://www.epa.gov/air-sensor-toolbox/air-sensor-data-tools, accessed on 28 February 2025), or from offline PurpleAir files from the SD card. These alternative methods are especially important for groups interested in analyzing private data.
METAR data available include temperature, RH, and sea level pressure. The hourly METAR files are downloaded from a remote server to maple (an internal EPA computer) via a cron job every 15 min, with no user interaction involved. When a user requests METAR data, such as hourly data for 7 days, the archived hourly METAR files are read, aggregated, and streamed on the fly.
Data are displayed in coordinated universal time (UTC) by default, but users can adjust the UTC offset on the first tab. Data loaded through RSIG are already time-aligned. Users can provide the time zone offset for data processed through ASDU into ASNAT to time-align the datasets. Users cannot adjust the time alignment between different data sources in ASNAT at this time.

2.3. Map Tab

Once users select and load data, monitoring sites appear on the map at the top of the map tab (Figure 1). Each data type is assigned a symbol, and users select a colormap that is then described in the legend. Once hourly or daily averaged data are loaded, users can choose how to view them (e.g., the loaded hourly or daily averages, NowCast averages, or the mean value over all timesteps) and use the timestep slider below the map to animate the information, if applicable.

2.4. Tables Tab

Users can load one to three variables at a time for comparison. If the user has selected only one variable (i.e., X variable), the tables tab summarizes the data by site ID including the count (i.e., number of hours or number of days available), percent of missing data, mean, minimum, quartiles, and maximum.
Typically, selecting two variables involves comparing sensor data with more uncertainty and unknown performance to nearby reference monitor data (e.g., AirNow, AQS) to better understand sensor performance and develop corrections. Usually, the variable with more uncertainty would be the Y variable. If a user selects two variables, they also specify the maximum distance threshold for comparison. Distances are calculated as Euclidean distances between latitude and longitude pairs. If there are no pairs within this distance, the user will see the error “There are no neighboring points within [x] meters” when they try to generate tables on the tables tab. Users must carefully consider the appropriate distance depending on the pollutant, local geography, meteorology, local sources, and other factors.

When two variables are selected and pairs exist within the chosen distance, the tab provides the same summary table for each variable. In addition, it provides a table of the neighboring points at each time stamp, which can be especially helpful if the user would like to save the data for further analysis outside of ASNAT, and a table summarizing each pair, including IDs, locations, the distance between the pair, and the coefficient of determination (R2). Lastly, it provides a summary of the paired data by AQI category: the number of points in each AQI category (i.e., count) and the normalized mean bias error (NMBE), root mean squared error (RMSE), and normalized root mean squared error (NRMSE) in each AQI category. This can help the user understand how the bias and error between the two datasets change over the full range of the datasets.
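The paired-comparison statistics named above are straightforward to reproduce outside the tool. The sketch below is a minimal R illustration, assuming the usual convention of normalizing by the mean reference concentration; ASNAT’s exact definitions may differ in detail, and the names are ours.

```r
# Minimal sketch of paired-comparison statistics: x = reference (monitor)
# values, y = sensor values, matched by time stamp.
pair_stats <- function(x, y) {
  ok <- complete.cases(x, y)        # drop rows missing either value
  x <- x[ok]; y <- y[ok]
  rmse <- sqrt(mean((y - x)^2))
  c(n     = length(x),
    r2    = cor(x, y)^2,                  # coefficient of determination
    nmbe  = 100 * mean(y - x) / mean(x),  # normalized mean bias error (%)
    rmse  = rmse,                         # root mean squared error
    nrmse = 100 * rmse / mean(x))         # normalized RMSE (%)
}
```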
Selecting three variables will allow the user to explore how a third variable (e.g., RH) influences the relationship between two other variables (e.g., sensor data and monitor data). If the user generates these tables and then uses the save data button, these summary tables will be saved along with the raw data.

2.5. Plots Tab

The plots tab includes additional visualizations to understand network performance and local air quality. ASNAT describes the relationship between the X and Y datasets (e.g., sensor–monitor relationship) through a series of performance metrics calculated as outlined in the EPA’s performance targets with different targets for different pollutants [25,26,27,28]. Each point in the boxplot represents a single sensor–monitor pair with at least two data points (e.g., two hourly averages, two daily averages). The target values on the plots are set based on the EPA’s performance targets and are not user-modifiable at this time.
Users can also explore the performance of the sensors by considering how frequently the sensors and monitors report the same AQI category. This type of analysis is important to understand whether sensors can be used to advise various health protective actions suggested in different AQI categories. There is no standard guidance to interpret differences, so users will need to use their best judgment depending on the application and averaging interval. NowCast and 24 h averages are more frequently used for AQI display, and it is typically more challenging for sensors and monitors to agree at shorter averaging intervals (e.g., hourly). This analysis can be performed at 1 h or 24 h or using NowCast averages.
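As a concrete illustration of this kind of category comparison, the sketch below assigns paired hourly PM2.5 values to AQI categories and cross-tabulates them. The breakpoints assume the revised PM2.5 AQI breakpoints adopted in 2024, and the function names are ours, not ASNAT internals.

```r
# Minimal sketch: assign PM2.5 (ug/m3) to AQI categories (assumed 2024
# breakpoints) and cross-tabulate sensor vs. monitor categories.
pm25_aqi_cat <- function(pm) {
  cut(pm, breaks = c(-Inf, 9.0, 35.4, 55.4, 125.4, 225.4, Inf),
      labels = c("Good", "Moderate", "Unhealthy for Sensitive Groups",
                 "Unhealthy", "Very Unhealthy", "Hazardous"))
}

# Row-normalized agreement: given the monitor's category, how often does the
# paired sensor report each category (in %)?
aqi_agreement <- function(monitor_pm, sensor_pm) {
  tab <- table(monitor = pm25_aqi_cat(monitor_pm),
               sensor  = pm25_aqi_cat(sensor_pm))
  round(100 * prop.table(tab, margin = 1), 1)
}
```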

2.6. Corrections Tab

Using the corrections tab, users can develop sensor corrections to adjust for bias between the sensor and the monitor (or whatever two comparison variables the user selects). Sometimes, a simple linear correction allows two datasets to become more comparable. However, much past work with air sensors has shown that RH or temperature can influence sensor measurements [12,13,14], and many gas sensors may have cross-sensitivities to other gases [13], so a third variable may be needed to account for some of these influences or interferences. In ASNAT, currently available corrections include single-variable, multivariable additive, or multivariable interactive corrections in either linear, quadratic, or cubic form. Users can generate corrections for individual sensors and then can export the corrections. The corrections can be applied and are shown on corrected scatterplots and a summary table on the corrections tab. Users can generate the different types of corrections and compare the provided statistics (i.e., R2, RMSE, NMBE) along with a visual assessment of the scatterplot to understand what, if any, correction works best for their dataset. Currently, users cannot apply corrections in ASNAT to data on other tabs and cannot apply user-generated corrections developed outside of the tool. ASNAT applies the following completeness criteria before generating corrections:
  • Spatial validity: Sensor–monitor pairs must be neighbors within the user-specified maximum neighbor distance.
  • Data quality: Only unflagged data (variable Y and variable X with flag status = “0”) are used in correction development.
  • Data availability: All primary variables (X, Y) must have non-missing values (e.g., any paired X/Y row with a missing value for X or Y is removed); for multivariable models, the third variable (Z) is also required with at least 50% data completeness across matched rows (e.g., if three variables are selected, the Z variable is only included in the correction if it has 50% completeness with the non-missing matched X/Y data).
  • Minimum sample requirements: ASNAT requires sufficient data points for correction. For basic model fitting and statistical validation (R2 calculation), a minimum of two records is required for single-variable models and more than 15 records for multivariable models. For coefficient generation, the minimum requirements vary by correction complexity: linear corrections require ≥20 records, quadratic corrections require ≥30 records, and cubic corrections require ≥40 records. These are the minimums for correction generation, but they do not ensure robust correction development. The user must examine the plots, examine correction performance, and consider whether the conditions during this period are representative of the conditions over which they plan to apply the correction.
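The correction families described above map naturally onto base-R model formulas. The sketch below shows plausible forms fit with lm(); it is illustrative only (the column names x, y, and z are ours), and ASNAT’s internal fitting may differ.

```r
# Minimal sketch of the correction model families: x = sensor, y = reference
# monitor, z = optional third variable (e.g., RH or temperature).
fit_corrections <- function(df) {
  list(
    linear        = lm(y ~ x, data = df),
    quadratic     = lm(y ~ x + I(x^2), data = df),
    cubic         = lm(y ~ x + I(x^2) + I(x^3), data = df),
    additive_z    = lm(y ~ x + z, data = df),  # multivariable additive
    interactive_z = lm(y ~ x * z, data = df)   # multivariable interactive
  )
}

# Compare candidate corrections by R2, alongside RMSE/NMBE and a visual check:
# sapply(fit_corrections(df), function(m) summary(m)$r.squared)
```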

2.7. Network Summary Tab

The network summary tab allows users to understand local air quality using all sensors/monitors in the current map view area or within a user-specified geoJSON (e.g., an uploaded state or county boundary). A map can be generated showing the mean, the maximum, or the difference between the mean and the concentration at each site (i.e., to highlight spatial variation). A bar plot of the mean, maximum, or difference values is also created. If PM2.5 or PM10 daily data are selected, a bar plot of daily AQI by category will also be made. The user can also generate a time series plot or two types of calendar plots. The network summary tab also allows users to compare two datasets with each other; these plots show daily patterns by hour of day and day of week, averages by month of year, and a scatterplot comparing the two datasets. Data handling on this tab differs from other tabs: all data within the selected geographic area are compared, not just pairs within the user-specified distance, and flagged data are not removed. The user can then download all current plots and data frames.
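Restricting sites to a user-supplied boundary can be reproduced outside the tool with the sf package; the sketch below is a minimal illustration under stated assumptions (the file, object, and column names are ours), not ASNAT’s implementation.

```r
# Minimal sketch: keep only sites that fall inside a geoJSON boundary.
library(sf)

boundary <- st_read("county_boundary.geojson", quiet = TRUE)  # hypothetical file
sites_sf <- st_as_sf(sites,  # `sites` assumed to have longitude/latitude columns
                     coords = c("longitude", "latitude"),
                     crs = st_crs(boundary))
sites_in_area <- st_filter(sites_sf, boundary)  # sites within the polygon
```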

2.8. Flagging Tab

The flagging tab provides a variety of options to flag and/or remove anomalous data based on several categories, such as the exceedance of a threshold, agreement with the nearest neighbor, repeated values, outlier detection using several methods (e.g., Hampel filter [70,71]), a user-specified period (e.g., a day with a known sensor issue), and more (Table S2). Although this is the second tab after the map tab, in many cases, users will need to return and flag data after identifying problems in the tables, plots, and potentially other tabs. In some cases, the data may not be anomalous, but users may want to remove it to answer specific questions. For example, users may want to remove specific sensors because the sensors were not operated by a data analyst (e.g., public sensors versus those run by a local agency) or the data are incomplete (i.e., large data gaps). PurpleAir data have a count column indicating how many values were used to generate the average; this can be used to flag data for incompleteness if users want to exclude averages below a certain completeness (e.g., 90% completeness = count > 27 for hourly data generated from raw 2 min PurpleAir data). More than one flag can be applied, and users can remove or reset all current flags. No automated recommendations are made, and users will need to use their best judgment to flag and remove data since, in some cases, outliers may be localized events. This is one of the reasons this tool is targeted at users with some air quality experience.
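As one example of the outlier methods mentioned above, a rolling Hampel filter flags values that sit more than a few scaled median absolute deviations from a local median. The sketch below is a minimal base-R version with illustrative defaults; ASNAT’s settings and implementation may differ.

```r
# Minimal sketch of a rolling Hampel filter: flag x[i] when it is more than
# k scaled MADs from the median of its window (half_window points each side).
hampel_flags <- function(x, half_window = 3, k = 3) {
  n <- length(x)
  flagged <- logical(n)
  for (i in seq_len(n)) {
    idx <- max(1, i - half_window):min(n, i + half_window)
    med <- median(x[idx], na.rm = TRUE)
    mad_scaled <- 1.4826 * median(abs(x[idx] - med), na.rm = TRUE)
    flagged[i] <- !is.na(x[i]) && mad_scaled > 0 &&
      abs(x[i] - med) > k * mad_scaled
  }
  flagged
}
```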
Flags impact some plots and tables and can be exported. Performance plots on the plots tab (i.e., the overall scatterplot, AQI agreement plot, and boxplots of performance targets) are shown both with all data and with only unflagged data, and flagged points are indicated in lighter shades on the time series. When data are downloaded using the “Save Data” button, the saved data include a “flagged(-)” column listing all the current flags associated with each measurement. Flag numbers are separated by semi-colons. A value of 0 means unflagged, values of 80 and above are used for built-in conditions, and values of 1–79 are reserved for user-defined Boolean conditions corresponding to the lines in the saved flags.txt file in the specified output directory. Users can press the “save flag conditions” button to export the flag conditions they entered as Boolean expressions for their records, but for other flagging, users may need to document the flags they used for reproducibility.

2.9. Case Study: 2023 Midwestern Smoke

We present a 2-week case study to demonstrate the features and functionality of this tool. During June 2023, the Midwestern U.S. was impacted by smoke from Canadian wildfires [72]. During uncommon events like these, state and local agencies may be interested in better understanding how sensors perform and whether any improved QA or QC is needed to provide accurate data to the public. Further, the synthesis of air quality data from multiple observational networks supports current and future agency response (e.g., publicizing current air quality conditions and suggested response, documenting evidence of exceptional events). For this example, PM2.5 data were loaded from RSIG, including AQS PM2.5 data and the PurpleAir corrected dataset from the bounding box shown in Figure 1 including all of Wisconsin (WI); most of Michigan (MI); parts of Minnesota (MN), Iowa (IA), Illinois (IL), Indiana (IN), and Ohio (OH); and a small piece of Ontario, Canada.
We considered distances between sensor–monitor pairs from 50 to 4000 m, in line with the comparison distances used in past work [42,43,44,45,46]. For each distance, we evaluated the number of AQS sites with sensors within that distance, the number of nearby pairs, and the R2 overall and for each pair. We also considered the R2 overall and for each pair after excluding the top quartile of the data (PurpleAir PM2.5 > 18 µg/m3), since R2 can be driven by one or more outlier points. Ultimately, we selected a distance of 250 m between sensor–monitor pairs for a more in-depth case study analysis.
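The screening described here reduces to computing R2 for the paired data with and without the top quartile of sensor values; a minimal R sketch follows (the data frame and column names are ours).

```r
# Minimal sketch of the R2 screening: pairs_df is assumed to hold matched
# observations with columns `sensor` and `monitor`.
r2 <- function(x, y) cor(x, y, use = "complete.obs")^2

screen_pairs <- function(pairs_df, cutoff = 18) {  # 18 ug/m3 = top-quartile cut
  low <- subset(pairs_df, sensor <= cutoff)
  c(r2_all = r2(pairs_df$sensor, pairs_df$monitor),
    r2_low = r2(low$sensor,     low$monitor))
}
```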
Hourly PM2.5 data were used throughout. We considered the hourly performance of sensor–monitor pairs using the EPA’s performance targets for PM2.5 sensors [27]. In addition, we compared the AQI category provided by the corrected PurpleAir sensors to nearby air monitors.

2.10. Case Study: O3 Sensor Performance

We also present an evaluation of Aeroqual (Auckland, New Zealand) AQY v1 O3 sensors. The Aeroqual AQY is a multi-pollutant sensor measuring temperature, RH, dew point, NO2, O3, and PM2.5. The AQY uses the Aeroqual gas-sensitive semiconductor sensor to measure O3. The results presented here reflect out-of-the-box performance. We did not use Aeroqual’s calibration feature, where users can conduct an initial collocation and input a slope and offset into the online dashboard, which could have further improved the sensor’s performance.
This sensor was evaluated as part of our long-term performance project [73]. During this project, a variety of sensors were collocated alongside air monitors at sites across the U.S. Sensors were within 20 m of the monitors, as specified in the EPA’s performance targets [25], with many sensors being within a few meters. Data were downloaded weekly by site operators, reformatted and hourly averaged using ASDU, and then loaded using the “Load data from a standard-format file” feature in ASNAT. AirNow O3 data were then loaded from RSIG using the “Load Web” feature. AirNow data, rather than AQS data, were used because the North Carolina (NC) collocation site is not a regulatory site and only reports to AirNow. This case study focuses on performance at a subset of sites (5 of the 7 sites included in the full project): NC, Arizona (AZ), Colorado (CO), Delaware (DE), and Oklahoma (OK) (Table 1, Figure S1). These sites represent a variety of environmental conditions potentially contributing to differences in sensor performance. Three sensors operated at the NC site, with single sensors at the other sites. This case study focuses on a 1-year subset of the total evaluation period (i.e., it excludes pre- and post-collocation periods in NC and extended collocations that occurred at some sites before August 2019 and after July 2020). Additional details on this study can be found in a previous publication on the project [73] and the Supplemental Information included with that publication.
Table 1. Summary of O3 performance dataset.
We considered the hourly performance of sensor–monitor pairs using the EPA’s performance targets for O3 sensors [25]. Past work has shown that NO2 is not an interferent for AQY sensors [29] but that temperature and RH may influence measurements [74] and that measurements may drift over time [48]. For this reason, the temperature and RH data measured by the sensors were used to try to better correct the sensor data. In addition, a custom computed variable, days since deployment, was used to understand sensor drift. This was determined by calculating the number of days since 1 August 2019. Sensors had initial collocation periods and started sensing at slightly different times across the sites, but this is a good estimate of how drift may have impacted performance over the 1-year period considered during this analysis. This case study demonstrates the functionality of ASNAT for a sensor dataset that is not part of RSIG.
Some custom R code was written to support this analysis:
(1) Custom code was used to sort files by headers, as some files had minor differences that would not be accepted by ASDU.
(2) Files were processed through ASDU in batches by time zone (i.e., eastern, central, mountain), and then custom code was used to concatenate the outputs and sort them by time stamp to meet the requirements for ASNAT import.
(3) AQY data were loaded from local files, and then AirNow data were loaded from RSIG, with pairs within 1000 m used to match sensor data with the collocated monitoring sites. Because this resulted in a very large dataset of AirNow sites across much of the U.S., custom code was used to remove all sites that were not matched with AQY sensor collocations to speed up loading the data for further analysis. This step would not be required if all analysis were completed in a single ASNAT session or if users did not mind waiting for all data to be reloaded from AirNow.
(4) Custom code was used to add a column of days since deployment started (i.e., since 1 August 2019). A minimal sketch of steps (2) and (4) follows this list.
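The sketch below illustrates steps (2) and (4) in base R under stated assumptions (the directory, file pattern, and column names are ours); it is not the code used for the paper.

```r
# Minimal sketch: concatenate ASDU outputs, sort by time stamp, and add a
# days-since-deployment column (deployment start assumed 1 August 2019 UTC).
files <- list.files("asdu_output", pattern = "\\.csv$", full.names = TRUE)
aqy <- do.call(rbind, lapply(files, read.csv))
aqy$timestamp <- as.POSIXct(aqy$timestamp, tz = "UTC")
aqy <- aqy[order(aqy$timestamp), ]

deploy_start <- as.POSIXct("2019-08-01", tz = "UTC")
aqy$days_deployed <- as.numeric(difftime(aqy$timestamp, deploy_start,
                                         units = "days"))
```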

3. Results

3.1. Comparison Distances Considered for 2023 Midwestern Smoke

For the largest paired dataset (distance = 4000 m), the distribution of the corrected PurpleAir dataset shows a mean of 16 µg/m3, a median of 13 µg/m3, a first quartile of 9 µg/m3, a third quartile of 18 µg/m3, and a maximum of 204 µg/m3. R2 values can be influenced by high-concentration points; therefore, we examined performance both including all data and excluding the highest quartile of the data (>18 µg/m3) (Table 2).
Table 2. Number of AQS sites and neighbors, locations, and R2 for different distances between sensor–monitor pairs. R2 is given for all data and for each sensor–monitor pair, over the full range and excluding higher-concentration points.
When comparing neighbors within 50 m, only five AQS sites have PurpleAir sensors within 50 m, and there are nine neighbor pairs, all in IA. Including all data, there is a strong R2 overall (R2 = 0.88) and for each pair (R2 = 0.80–0.99). When excluding the highest quartile of data, the data are still moderately correlated (R2 = 0.63), and each pair is moderately to strongly correlated (R2 = 0.48–0.95). Expanding the distance to 250 m identifies additional AQS sites (8 total) with additional neighbors (16 total). Correlations are similar, except that over the full range, the R2 by pair ranges from moderate to strong (R2 = 0.62–0.99) instead of the strong correlation across all pairs seen at 50 m. All pairs within 250 m are still located in IA.
Expanding to 500 m reveals additional cities in IL and MN, with 11 AQS sites and 20 neighboring pairs total. The overall R2 values for all data are similar, but the R2 by sensor–monitor pair for the full range of concentrations ranges from weak to strong (0.27–0.99), and for the lower-concentration data, it ranges from no correlation to strong (0.03–0.95). At increased distances (i.e., 1000, 2000, and 4000 m), additional pairs are identified and correlations remain similar, although a weak correlation is seen for all low-concentration data for pairs within 4000 m (R2 = 0.03). We selected a distance of 250 m going forward, as larger distances include some sensors that are not well correlated with the monitor, suggesting localized sources or other issues that lead to differences between sensors and monitors that are further apart.

3.2. Nearby Sensor–Monitor Pairs for 2023 Midwestern Smoke

In this example, sensor–monitor pairs within 250 m were selected, resulting in 16 neighboring pairs. Most sensor–monitor pairs are strongly correlated, meeting the EPA’s performance target for the coefficient of determination (R2), with only a few pairs falling below the 0.7 target (Figure 2). All sensor–monitor pairs meet the targets for slope (1 ± 0.35), intercept (−5 ≤ b ≤ 5 µg/m3), and root mean squared error (RMSE ≤ 7 µg/m3). Some sensors have a normalized root mean squared error outside of the target (>30%); however, the EPA’s performance targets [27] only require meeting either the RMSE or the NRMSE target. Overall, sensors perform well during this time.
Figure 2. Performance of hourly corrected PurpleAir PM2.5 data as compared to nearby monitors (data from AQS) for 16 pairs within 250 m at hourly averages. Black lines indicate target range for each metric (coefficient of determination (R2), slope, intercept, root mean squared error (RMSE), and normalized root mean squared error (NRMSE)).
Using the corrections tab, we examine the performance of individual sensors. Figure 3A shows a sensor with a weak correlation (R2 = 0.62 < 0.7). There are a variety of reasons why this sensor may have weaker performance than the typical sensor during this period (Figure 2): the owner may have improperly reported the latitude and longitude of the sensor, meaning that the sensor may be more than 250 m from the AQS monitor; differences in the federal equivalent method (FEM) used may lead to differing performance; this sensor may have hardware or software differences leading to different performance; or there may be other issues. In addition, R2 depends on concentration range, so R2 may be lower in part because the range of concentrations experienced is smaller than for other sensor–monitor pairs. While a correction is provided for this sensor (y = 4.26 + 0.73x), we typically would not apply it since the R2 is weak. This plot shows the “original data” for the device, but plots can also be generated showing the data after correction. ASNAT does not treat pairs with low correlation differently, but the user can remove these pairs in the flagging tab if desired.
Figure 3. Corrections generated for hourly PM2.5 data from two example sensor–monitor pairs as visualized by ASNAT. The top plot (panel (A)) shows a sensor with a weak correlation. The bottom plot (panel (B)) shows a sensor with strong agreement.
Figure 3B shows that a different sensor has strong agreement and low normalized mean bias error (NMBE) when compared to the nearby reference monitor. Depending on the project objectives, this performance may be adequate. If higher accuracy is needed, the individual sensor correction generated (y = −1.4597 + 1.1643x) could be applied.
Overall, the sensors in this example show strong AQI category agreement with sensors typically reporting the same AQI categories as monitors. Specifically, sensor and monitor hourly average AQI categories match at least 76% of the time across categories (Figure 4). When the AQS monitors report a good AQI, sensors within 250 m report a good AQI almost 80% of the time and moderate AQI just over 20% of the time. When the AQS monitors report a moderate AQI, sensors within 250 m also report a moderate AQI roughly 90% of the time but occasionally indicate a good or unhealthy AQI for sensitive groups (shortened to unhealthy for some in the ASNAT application) instead. Lastly, when the AQS monitors report an unhealthy AQI for sensitive groups, the sensors almost always also report an unhealthy AQI for sensitive groups but occasionally (<10% of the time) show a moderate AQI instead. These results indicate that corrected PurpleAir data can be reliably used to communicate AQI-associated health-protective actions to the public during a smoke event such as this example.
Figure 4. The percentage of hourly PurpleAir PM2.5 measurements in each AQI category compared to the monitor (AQS)-reported AQI category as visualized by ASNAT. Corrected PurpleAir typically reports the same AQI category as nearby monitors over our midwestern area (range of longitudes: −92.3160 to −82.4562; range of latitudes: 41.7711 to 46.8183).

3.3. O3 Sensor Performance

Ozone concentrations varied over the study year with higher concentrations in the summer and lower concentrations over the winter (Figure 5). Similar concentrations are seen across the sites (Table 1, Figure S2) with slightly higher average concentrations in OK (mean = 33 ppb; other sites = 27–28 ppb). AirNow monitors and AQY sensors show similar trends by hour of day and month of year with uncorrected AQY sensors typically overestimating O3 concentrations (Figure 6). Daily average concentrations do not vary largely over the course of a week, and the bias between sensors and monitors is consistent (Figure 6).
Figure 5. Ozone concentrations over the study time period as measured by the AirNow monitors as visualized in ASNAT.
Figure 6. A visualization from ASNAT showing ozone concentrations over the study period as measured by the AirNow monitors (red) and the AQY sensors (teal). Months are abbreviated by the first letter of the month (e.g., J January, F February).
Overall, hourly O3 sensor measurements were correlated with monitor measurements (R2 = 0.70) with a large intercept (Y-intercept = 14.57 ppb) (Figure 7). Most of the sensors (four out of seven) meet the performance target for O3 sensor linearity (R2 > 0.8), with the other sensors still showing strong correlation (R2 ≥ 0.7) (Figure 8). This is stronger linearity than that of past work in Texas, where AQY O3 sensors had an R2 from 0.59 to 0.96 with 3 of 12 sensors having an R2 < 0.7 [74], but weaker performance than that of previous evaluations in the laboratory (R2 = 0.975) and California (R2 = 0.96) [29]. Two-pronged distributions are seen for sensors 51 (DE, R2 = 0.73), 54 (NC, R2 = 0.83), and 55 (NC, R2 = 0.70), suggesting that there may be a difference in the sensor–monitor relationship under distinct conditions in some cases, leading to sensors not meeting the linearity target (Figure 9). For the three sensors collocated in NC (54, 55, 56), intercepts range from 6.13 to 19.7 ppb, and slopes range from 0.52 to 1.22 (Figure 9, Table S3), highlighting that this variability is likely due to differences in individual sensors (e.g., hardware) and not differences in site conditions across the states. Based on the wide range of slopes and intercepts between sensors, individual sensor corrections are needed to improve performance so that all sensors meet the slope and intercept performance targets.
Figure 7. Hourly O3 sensor data compared to AirNow monitor data at 5 sites in AZ, CO, DE, NC, and OK from August 2019 to July 2020. The black line is the linear regression, and the light grey line is the one-to-one line.
Figure 8. Hourly performance of AQY O3 sensors compared to AirNow monitors at 5 sites in AZ, CO, DE, NC, and OK from August 2019 to July 2020. The horizontal lines indicate the performance target ranges for O3 for each metric except NRMSE where there is no target.
Figure 9. Scatterplots of hourly AQY O3 sensor data versus AirNow O3 monitor data by sensor ID as generated in ASNAT. The dotted line is the one-to-one line, and the solid black line is the linear regression for each individual sensor. Numbers refer to sensor IDs (Table 1).
Next, we considered whether a more advanced correction would improve sensor performance. We considered linear regression, multivariable additive, and multivariable interactive corrections using RH and temperature as measured by the sensors and days since deployment. These corrections account for the influence of environmental conditions, which varied across the sites. Temperature was relatively consistent across sites, with higher temperatures in AZ and slightly lower temperatures in CO (Figure S3). RH conditions were more variable, with dryer conditions in AZ and CO (Figure S4). Sensor 54 in NC saw higher RH than the other sensors in NC, suggesting some differences in internal RH sensor performance. Including days since deployment accounts for sensor drift over time.
For temperature and RH corrections, correction improves R2 by ≤0.04, suggesting that the additional variables explain ≤4% of the variation between the sensors and the monitor (Table 3). The sensors that do not meet the R2 target are still unable to meet the target with these corrections. For the additive correction using days since deployment, R2 improved by 0.01 to 0.11, leading to sensors in DE and NC meeting the linearity target when they did not with other corrections. Using the multivariable interactive correction with days since deployment further improves most sensors, allowing all sensors to meet the target for linearity. Considering the terms in the equations generated (Table S3), the results suggest that the output from most sensors decreased over the project year. This is in line with past work where AQY and other metal oxide sensors were shown to drift over similar time periods [48,75]. To generate accurate data from these sensors, users will likely need individual sensor corrections that account for drift over time, or sensors will need periodic collocation (e.g., quarterly) and correction updates to ensure O3 estimates remain accurate.
Table 3. R2 for various corrections applied (additive “+” and multiplicative interaction “*”) with ASNAT and the improvement in R2 that more complex corrections provide from linear correction. Shaded cells meet the performance target (R2 ≥ 0.8). Days represents the days since deployment, and T is temperature.

4. Discussion

The limitations of this tool include data size limits, the need for non-EPA users to install the tool locally, and some limitations in functionality. Dataset size limits depend on local computing resources; users may need to limit longer time analyses to smaller spatial ranges and/or longer averaging intervals. Depending on future resources and priorities, we may try to host a public version of the tool to reduce the installation burden and make the tool more accessible to a wider variety of users. Since the code is publicly available, other organizations may also choose to publicly host the tool. There will always be additional analysis, quality assurance, or plots that could be helpful for understanding local air quality issues; however, this tool gives users both with and without data analysis expertise a quick way to perform some helpful analysis.
In addition, more complicated corrections (e.g., machine learning, additional variables) are sometimes required to produce adequate sensor performance [13,34,76], but these more complicated corrections cannot currently be generated in ASNAT. More complex corrections can increase the risk of overfitting (i.e., generating a correction model that performs inadequately outside of the calibration period) [77,78], are not easy to interpret [77], and require longer calibration periods [34]. For these reasons, we currently exclude more complicated corrections, in part to prevent users from generating unhelpful corrections. In the future, these features could potentially be added with built-in safeguards (e.g., cross-validation), but this is beyond the scope of the current tool.
Some background air quality knowledge is needed to successfully use this tool and make informed decisions based on ASNAT outputs. For example, the nearest neighbor radius will depend on pollutant chemistry, local sources, and geography. Our example focused on pairs within 250 m, but future researchers would need to carefully consider an appropriate distance for comparison. In addition, local knowledge may be required to determine whether outliers are due to sensor malfunctions or real short-term pollutant events.
This project is ongoing, and we hope to add additional features and improvements based on feedback from the initial users. In addition, since the code is open-source and publicly available, we hope users will take it and modify it as needed for their own uses, adding additional functionality and customized displays. So far, more than two hundred air quality professionals have been trained to use this tool including staff from state, local, and tribal agencies; the EPA; other federal agencies; academia; consulting companies; and other organizations. The high engagement in tool training and office hours highlights the strong need for this type of tool.
Our case studies were focused on public PurpleAir data loaded through RSIG and Aeroqual AQY data loaded from local sensor files, but any sensor data that can be put into the ASNAT format can be used in ASNAT in the future, meaning that this tool will be helpful as new sensors enter the market. Currently, many of the large sensor networks in the U.S. are PM2.5 sensor networks, but as sensors for additional pollutants become more ubiquitous, this tool can be used to understand performance and local air quality for a variety of different pollutants in addition to PM2.5 and O3.
The midwestern smoke case study explored the performance of sensors under smoke conditions, but users could consider exploring sensor performance during several other types of localized events (e.g., dust events, fireworks, inversions). Our case study highlights strong performance during regional wildfire smoke impacts. However, seasonal variations in pollutants, pollutant sources, environmental conditions, particle properties, and other factors may lead to seasonal trends in correlation and bias between sensors and monitors [15,79,80], whether considering PM2.5 or other pollutants. One of the values of ASNAT is that it allows users to easily generate statistics for different time periods, making it easy to compare seasonal differences in sensor performance. In addition, infrequent or historic events that impact local emissions (e.g., the COVID-19 pandemic) can be further analyzed.
The O3 sensor performance case study explored the performance of AQY sensors and the influence of temperature, RH, and drift on sensor performance. Other sensors may have other influential or interferent variables (e.g., cross-sensitive pollutants), and if those variables are measured by the sensor, then corrections using them could be considered in ASNAT as well. Some custom code was required for this case study. ASNAT and ASDU are still under development, and we hope that future improvements will limit the need for custom code. However, some custom analysis will likely always be needed for in-depth work. Even so, this analysis was more streamlined, requiring only a few short chunks of custom code rather than performing all analysis, and determining all relevant plots and statistics to generate, without the support of ASDU and ASNAT.
This tool contributes to increasing the utility of air sensor data. For example, state agency staff plan to use the tool to help support a community monitoring project using PurpleAir sensors and to perform regression analysis on multiple PurpleAir sensors collocated with an air monitor. Also, they plan to use the tool to understand how custom-built sensors compare to nearby PurpleAir sensors. Local agency staff plan to use the tool to understand how local sources (e.g., incinerators) impact air quality at schools. By reducing the time it takes to complete analysis, resources are saved that can be put into furthering these projects or other agency priorities. In addition, this data can be used to take informed actions to better protect public health. This may include providing more accurate data to the public or better understanding local sources.

5. Conclusions

The development of the Air Sensor Network Analysis Tool (ASNAT) represents a significant advancement in the field of air quality monitoring and analysis. This R-Shiny application addresses key challenges faced by air quality professionals, particularly those related to the integration and analysis of diverse air sensor data. By offering a streamlined, user-friendly platform, ASNAT empowers users to effectively manage, analyze, and visualize air quality data, thereby enhancing their ability to make informed decisions regarding public health and environmental protection.
The case study on Canadian wildfire smoke impacts in the Midwestern United States illustrates ASNAT’s utility in real-world scenarios: it demonstrated the tool’s capacity to investigate sensor data quality, showed that the sensors met performance targets, and combined comparable data sources so that two discrete sets of air quality measurements could be used together to understand local air quality conditions. Through its comprehensive suite of tools, including data flagging, correction development, and performance evaluation, ASNAT supports the reproducibility and reliability of air quality analyses, ultimately contributing to more accurate public communication and response strategies during air quality events.
The O3 case study further underscores the versatility and applicability of ASNAT in diverse air quality monitoring scenarios. By evaluating the performance of Aeroqual AQY O3 sensors, ASNAT demonstrated its capacity to handle complex datasets and provide insightful analyses that are crucial for effective air quality management. This study highlighted the tool’s ability to identify variables impacting sensor performance over time and the necessity for individual sensor corrections to maintain data accuracy. By facilitating the development of tailored corrections, ASNAT ensures that air quality professionals can maintain high data integrity across various sensor types and environmental conditions. This adaptability is essential for addressing the dynamic nature of air pollution and supports the ongoing efforts to enhance public health protection. As ASNAT continues to evolve, its role in bridging the gap between raw sensor data and insights will be pivotal in advancing air quality research and monitoring.
Air sensors are becoming ubiquitous, and they represent the next frontier in air quality monitoring globally. We need tools like ASNAT to help the rapidly growing user community. ASNAT is a valuable resource for air quality professionals, enabling more efficient data analysis and supporting efforts to mitigate air pollution exposure. As more air quality professionals adopt ASNAT, its impact is expected to grow, fostering collaborations and innovations that will further advance the field. By facilitating a better understanding and management of air sensor data, ASNAT contributes to the broader goal of protecting public health and the environment. Future enhancements and user-driven modifications will likely expand its capabilities, solidifying its role as a cornerstone tool in air quality monitoring and analysis.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/atmos16111270/s1, Table S1. Web data that can be loaded in ASNAT through RSIG. Table S2. Available flagging methods [72,73]. Table S3. Models generated in ASNAT for linear, multivariable additive, and multivariable interaction corrections using days since deployed as the 3rd variable. Figure S1. Ozone sensor performance evaluation sites. Figure S2. AirNow monitor concentrations where 0 represents data from all sites. Figure S3. Temperature as measured by AQY sensors. Figure S4. RH as measured by the AQY sensors.

Author Contributions

Conceptualization, K.K.B.; software, T.P., J.Y., G.P., Y.X., C.S., R.B. and H.N.Q.T.; validation, K.K.B., T.P., J.Y., G.P., Y.X., S.K., C.S., R.B. and H.N.Q.T.; data curation, T.P., A.L.C. and K.K.B.; writing—original draft preparation, K.K.B.; writing—review and editing, T.P., J.Y., G.P., Y.X., S.K., C.S., R.B., H.N.Q.T., S.A. and A.L.C.; visualization, T.P., J.Y. and R.B.; supervision, S.A.; project administration, K.K.B., Y.X., S.A., S.K. and A.L.C.; funding acquisition, K.K.B., S.K. and A.L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by EPA internal funding (Air, Climate, and Energy National Research Program; Regional-ORD Applied Research Program; and Environmental Modeling and Visualization Laboratory).

Disclaimer

The mention of trade names, products, or services does not imply an endorsement by the U.S. Government or the U.S. Environmental Protection Agency. The views expressed in this paper are those of the author(s) and do not necessarily represent the views or policies of the U.S. Environmental Protection Agency.

Data Availability Statement

Code is available on GitHub at https://github.com/USEPA/Air-Sensor-Network-Analysis-Tool-Public-, accessed on 30 September 2025. Operating system-specific zip files are available at the following links (all accessed on 30 September 2025): Windows: https://ofmpub.epa.gov/rsig/rsigserver?asnat/download/Windows/ASNAT.zip; Mac (Apple Silicon): https://ofmpub.epa.gov/rsig/rsigserver?asnat/download/Darwin.arm64/ASNAT.zip; Mac (Intel): https://ofmpub.epa.gov/rsig/rsigserver?asnat/download/Darwin.x86_64/ASNAT.zip; and Linux: https://ofmpub.epa.gov/rsig/rsigserver?asnat/download/Linux.x86_64/ASNAT.zip.

Acknowledgments

Thank you to Heidi Paulsen (EPA), Sedona Ryan (UNC), Stephen Beaulieu (Applied Research Associates), and Eliodora Chamberlain (EPA Region 7) for their project management and other support. Thank you to those who provided input, example datasets, and testing, including the following: U.S. EPA: Amara Holder (ORD), Megan MacDonald (ORD), Ryan Brown (Region 4), Daniel Garver (Region 4), Chelsey Laurencin (Region 4), Rachel Kirpes (Region 5), Dena Vallano (Region 9), Laura Barry (Region 9), Nicole Briggs (Region 10), Elizabeth Good (Office of Air Quality Planning and Standards), and Arjun Thapa (ORD, former); South Coast Air Quality Management District: Wilton Mui, Vasileios Papapostolou, Randy Lam, Namrata Shanmukh Panji, and Ashley Collier-Oxandale (former); Washington Department of Ecology: Nate May; Puget Sound Clean Air Agency: Graeme Carvlin; New Jersey Department of Environmental Protection: Luis Lim; Desert Research Institute: Jonathan Callahan; Asheville-Buncombe Air Quality Agency: Steve Ensley; and Oregon Department of Environmental Quality: Ryan Porter. AI was used to help outline and draft the introduction and conclusion and to make minor editorial changes to improve clarity and ease of reading throughout. For the PurpleAir case study, thank you to PurpleAir for providing data (MTA #1261-19) and to Adrian Dybwad and Amanda Hawkins. For the O3 case study, this work would not have been possible without the state and local agency partners who provided site access, power, and staff time to support data download and troubleshooting efforts as part of the EPA's Long-Term Performance Project, including the Maricopa County Air Quality Department (Ben Davis), the Colorado Department of Public Health and Environment (Gordon Pierce (former), Erick Mattson), the Delaware Division of Air Quality (Charles Sarnoski, Keith Hoffman, Tristan Bostock), and EPA staff who provided reference data for the NC site, including Joann Rice and Colin Barrette. The authors wish to acknowledge support for project coordination, data assembly and analysis, troubleshooting, and field support provided by Jacobs (now Amentum) under contract number EP-C-15-008. Support was provided by Cortina Johnson, Robert Yaga, Brittany Thomas, William Schoppman, Kenneth S. Docherty, Elaine Monbureau, Kierra Johnson, Sam Oyeniran, William Millians, Fiker Desalegan, and Diya Yang. Thank you to Rachelle Duvall (EPA ORD) for serving as an alternate task order contract officer representative, Sam Frederick (former EPA ORD ORAU) for help with data processing, Ian VonWald (former EPA ORD ORISE) for support of the AZ project, and Carry Croghan (retired EPA ORD) for help with dataset quality assurance. Aeroqual sensors were provided under an EPA Cooperative Research and Development Agreement (No. 934-16) with Aeroqual, Ltd. (Auckland, New Zealand).

Conflicts of Interest

Todd Plessel was employed by General Dynamics Information Technology; Yadong Xu was employed by General Dynamics Information Technology and Applied Research Associates. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
API: Application Programming Interface
AQI: Air Quality Index
AQS: Air Quality System
ASDU: Air Sensor Data Unifier
ASNAT: Air Sensor Network Analysis Tool
CFR: Code of Federal Regulations
CO: Carbon Monoxide
csv: Comma-Separated Values
EPA: Environmental Protection Agency
FEM: Federal Equivalent Method
FRM: Federal Reference Method
IA: Iowa
ID: Identification
IL: Illinois
IN: Indiana
MI: Michigan
MN: Minnesota
METAR: Meteorological Aerodrome Report
MTA: Material Transfer Agreement
NO2: Nitrogen Dioxide
NAAQS: National Ambient Air Quality Standards
NMBE: Normalized Mean Bias Error
NRMSE: Normalized Root Mean Squared Error
O3: Ozone
OH: Ohio
ORD: Office of Research and Development
PM10: Particulate Matter 10 µm or Less in Diameter
PM2.5: Fine Particulate Matter 2.5 µm or Less in Diameter
QA: Quality Assurance
QC: Quality Control
R2: Coefficient of Determination
RH: Relative Humidity
RMSE: Root Mean Squared Error
RSIG: Remote Sensing Information Gateway
SD: Secure Digital
SO2: Sulfur Dioxide
UNC: University of North Carolina at Chapel Hill
U.S.: United States
USD: U.S. Dollars
UT: Utah
UTC: Coordinated Universal Time
WI: Wisconsin

References

  1. Manisalidis, I.; Stavropoulou, E.; Stavropoulos, A.; Bezirtzoglou, E. Environmental and Health Impacts of Air Pollution: A Review. Front. Public Health 2020, 8, 14. [Google Scholar] [CrossRef]
  2. McCarron, A.; Semple, S.; Braban, C.F.; Swanson, V.; Gillespie, C.; Price, H.D. Public engagement with air quality data: Using health behaviour change theory to support exposure-minimising behaviours. J. Expo. Sci. Environ. Epidemiol. 2023, 33, 321–331. [Google Scholar] [CrossRef]
  3. Ambient Air Quality Surveillance. 40 CFR Part 58 Appendix E. 2006. Available online: https://www.ecfr.gov/current/title-40/chapter-I/subchapter-C/part-58/appendix-Appendix%20E%20to%20Part%2058 (accessed on 30 September 2025).
  4. Kelp, M.M.; Lin, S.; Kutz, J.N.; Mickley, L.J. A new approach for determining optimal placement of PM2.5 air quality sensors: Case study for the contiguous United States. Environ. Res. Lett. 2022, 17, 034034. [Google Scholar] [CrossRef]
  5. Schollaert, C.; Connolly, R.; Cushing, L.; Jerrett, M.; Liu, T.; Marlier, M. Air Quality Impacts of the January 2025 Los Angeles Wildfires: Insights from Public Data Sources. Environ. Sci. Technol. Lett. 2025, 12, 911–917. [Google Scholar] [CrossRef]
  6. Kelly, K.E.; Xing, W.W.; Sayahi, T.; Mitchell, L.; Becnel, T.; Gaillardon, P.E.; Meyer, M.; Whitaker, R.T. Community-Based Measurements Reveal Unseen Differences during Air Pollution Episodes. Environ. Sci. Technol. 2021, 55, 120–128. [Google Scholar] [CrossRef]
  7. Snyder, E.G.; Watkins, T.H.; Solomon, P.A.; Thoma, E.D.; Williams, R.W.; Hagler, G.S.W.; Shelow, D.; Hindin, D.A.; Kilaru, V.J.; Preuss, P.W. The Changing Paradigm of Air Pollution Monitoring. Environ. Sci. Technol. 2013, 47, 11369–11377. [Google Scholar] [CrossRef] [PubMed]
  8. van Zoest, V.M.; Stein, A.; Hoek, G. Outlier Detection in Urban Air Quality Sensor Networks. Water Air Soil Pollut. 2018, 229, 111. [Google Scholar] [CrossRef]
  9. Karagulian, F.; Barbiere, M.; Kotsev, A.; Spinelle, L.; Gerboles, M.; Lagler, F.; Redon, N.; Crunaire, S.; Borowiak, A. Review of the Performance of Low-Cost Sensors for Air Quality Monitoring. Atmosphere 2019, 10, 506. [Google Scholar] [CrossRef]
  10. deSouza, P.N.; Barkjohn, K.; Clements, A.; Lee, J.; Kahn, R.; Crawford, B.; Kinney, P. An analysis of degradation in low-cost particulate matter sensors. Environ. Sci. Atmos. 2023, 3, 521–536. [Google Scholar] [CrossRef]
  11. Li, J.; Hauryliuk, A.; Malings, C.; Eilenberg, S.R.; Subramanian, R.; Presto, A.A. Characterizing the Aging of Alphasense NO2 Sensors in Long-Term Field Deployments. ACS Sens. 2021, 6, 2952–2959. [Google Scholar] [CrossRef] [PubMed]
  12. Giordano, M.R.; Malings, C.; Pandis, S.N.; Presto, A.A.; McNeill, V.F.; Westervelt, D.M.; Beekmann, M.; Subramanian, R. From low-cost sensors to high-quality data: A summary of challenges and best practices for effectively calibrating low-cost particulate matter mass sensors. J. Aerosol Sci. 2021, 158, 105833. [Google Scholar] [CrossRef]
  13. Levy Zamora, M.; Buehler, C.; Lei, H.; Datta, A.; Xiong, F.; Gentner, D.R.; Koehler, K. Evaluating the Performance of Using Low-Cost Sensors to Calibrate for Cross-Sensitivities in a Multipollutant Network. ACS EST Eng. 2022, 2, 780–793. [Google Scholar] [CrossRef] [PubMed]
  14. Popoola, O.A.M.; Stewart, G.B.; Mead, M.I.; Jones, R.L. Development of a baseline-temperature correction methodology for electrochemical sensors and its implications for long-term stability. Atmos. Environ. 2016, 147, 330–343. [Google Scholar] [CrossRef]
  15. Barkjohn, K.K.; Clements, A.; Mocka, C.; Barrette, C.; Bittner, A.; Champion, W.; Gantt, B.; Good, E.; Holder, A.; Hillis, B.; et al. Air Quality Sensor Experts Convene: Current Quality Assurance Considerations for Credible Data. ACS ES&T Air 2024, 1, 1203–1214. [Google Scholar] [CrossRef]
  16. Feenstra, B.; Papapostolou, V.; Hasheminassab, S.; Zhang, H.; Boghossian, B.D.; Cocker, D.; Polidori, A. Performance evaluation of twelve low-cost PM2.5 sensors at an ambient air monitoring site. Atmos. Environ. 2019, 216, 116946. [Google Scholar] [CrossRef]
  17. Kang, Y.; Aye, L.; Ngo, T.D.; Zhou, J. Performance evaluation of low-cost air quality sensors: A review. Sci. Total Environ. 2022, 818, 151769. [Google Scholar] [CrossRef]
  18. Tryner, J.; Mehaffy, J.; Miller-Lionberg, D.; Volckens, J. Effects of aerosol type and simulated aging on performance of low-cost PM sensors. J. Aerosol Sci. 2020, 150, 105654. [Google Scholar] [CrossRef]
  19. Kaur, K.; Kelly, K.E. Laboratory evaluation of the Alphasense OPC-N3, and the Plantower PMS5003 and PMS6003 sensors. J. Aerosol Sci. 2023, 171, 106181. [Google Scholar] [CrossRef]
  20. Molina Rueda, E.; Carter, E.; L’Orange, C.; Quinn, C.; Volckens, J. Size-Resolved Field Performance of Low-Cost Sensors for Particulate Matter Air Pollution. Environ. Sci. Technol. Lett. 2023, 10, 247–253. [Google Scholar] [CrossRef]
  21. Ouimette, J.; Arnott, W.P.; Laven, P.; Whitwell, R.; Radhakrishnan, N.; Dhaniyala, S.; Sandink, M.; Tryner, J.; Volckens, J. Fundamentals of low-cost aerosol sensor design and operation. Aerosol Sci. Technol. 2023, 58, 1–15. [Google Scholar] [CrossRef]
  22. Ouimette, J.R.; Malm, W.C.; Schichtel, B.A.; Sheridan, P.J.; Andrews, E.; Ogren, J.A.; Arnott, W.P. Evaluating the PurpleAir monitor as an aerosol light scattering instrument. Atmos. Meas. Tech. 2022, 15, 655–676. [Google Scholar] [CrossRef]
  23. Hagan, D.H.; Kroll, J.H. Assessing the accuracy of low-cost optical particle sensors using a physics-based approach. Atmos. Meas. Tech. 2020, 13, 6343–6355. [Google Scholar] [CrossRef] [PubMed]
  24. Kaur, K.; Kelly, K.E. Performance evaluation of the Alphasense OPC-N3 and Plantower PMS5003 sensor in measuring dust events in the Salt Lake Valley, Utah. Atmos. Meas. Tech. 2023, 16, 2455–2470. [Google Scholar] [CrossRef]
  25. Duvall, R.M.; Clements, A.L.; Hagler, G.; Kamal, A.; Kilaru, V.; Goodman, L.; Frederick, S.; Barkjohn, K.K.J.; VonWald, I.; Greene, D.; et al. Performance Testing Protocols, Metrics, and Target Values for Ozone Air Sensors: Use in Ambient, Outdoor, Fixed Site, Non-Regulatory Supplemental and Informational Monitoring Applications; EPA/600/R-20/279; U.S. Environmental Protection Agency, Office of Research and Development: Washington, DC, USA, 2021.
  26. Duvall, R.; Clements, A.; Barkjohn, K.; Kumar, M.; Greene, D.; Dye, T.; Papapostolou, V.; Mui, W.; Kuang, M. NO2, CO, and SO2 Supplement to the 2021 Report on Performance Testing Protocols, Metrics, and Target Values for Ozone Air Sensors; U.S. Environmental Protection Agency: Washington, DC, USA, 2024.
  27. Duvall, R.; Clements, A.; Hagler, G.; Kamal, A.; Kilaru, V.; Goodman, L.; Frederick, S.; Johnson Barkjohn, K.; VonWald, I.; Greene, D.; et al. Performance Testing Protocols, Metrics, and Target Values for Fine Particulate Matter Air Sensors: Use in Ambient, Outdoor, Fixed Site, Non-Regulatory Supplemental and Informational Monitoring Applications; EPA/600/R-20/280; U.S. Environmental Protection Agency, Office of Research and Development: Washington, DC, USA, 2021.
  28. Duvall, R.; Clements, A.; Barkjohn, K.; Kumar, M.; Greene, D.; Dye, T.; Papapostolou, V.; Mui, W.; Kuang, M. PM10 Supplement to the 2021 Report on Performance Testing Protocols, Metrics, and Target Values for Fine Particulate Matter Air Sensors; U.S. Environmental Protection Agency: Washington, DC, USA, 2023.
  29. Collier-Oxandale, A.; Feenstra, B.; Papapostolou, V.; Zhang, H.; Kuang, M.; Der Boghossian, B.; Polidori, A. Field and laboratory performance evaluations of 28 gas-phase air quality sensors by the AQ-SPEC program. Atmos. Environ. 2020, 220, 117092. [Google Scholar] [CrossRef]
  30. Jiao, W.; Hagler, G.; Williams, R.; Sharpe, R.; Brown, R.; Garver, D.; Judge, R.; Caudill, M.; Rickard, J.; Davis, M.; et al. Community Air Sensor Network (CAIRSENSE) project: Evaluation of low-cost sensor performance in a suburban environment in the southeastern United States. Atmos. Meas. Tech. 2016, 9, 5281–5292. [Google Scholar] [CrossRef]
  31. Feinberg, S.; Williams, R.; Hagler, G.S.W.; Rickard, J.; Brown, R.; Garver, D.; Harshfield, G.; Stauffer, P.; Mattson, E.; Judge, R.; et al. Long-term evaluation of air sensor technology under ambient conditions in Denver, Colorado. Atmos. Meas. Tech. 2018, 11, 4605–4615. [Google Scholar] [CrossRef]
  32. Champion, W.M.; MacDonald, M.K.; Thomas, B.; Bantupalli, S.C.; Thoma, E.D. Methane Sensor Characterization Using Colocated Ambient Comparisons and Simulated Emission Challenges. ACS EST Air 2025, 2, 1359–1368. [Google Scholar] [CrossRef]
  33. Zhang, J.; Bai, L.; Li, N.; Wang, Y.; Lv, Y.; Shi, Y.; Yang, C.; Xu, C. Low-Cost Particulate Matter Mass Sensors: Review of the Status, Challenges, and Opportunities for Single-Instrument and Network Calibration. ACS Sens. 2025, 10, 3207–3221. [Google Scholar] [CrossRef] [PubMed]
  34. Levy Zamora, M.; Buehler, C.; Datta, A.; Gentner, D.R.; Koehler, K. Identifying optimal co-location calibration periods for low-cost sensors. Atmos. Meas. Tech. 2023, 16, 169–179. [Google Scholar] [CrossRef]
  35. Malings, C.; Tanzer, R.; Hauryliuk, A.; Saha, P.K.; Robinson, A.L.; Presto, A.A.; Subramanian, R. Fine particle mass monitoring with low-cost sensors: Corrections and long-term performance evaluation. Aerosol Sci. Technol. 2020, 54, 160–174. [Google Scholar] [CrossRef]
  36. Lu, T.; Liu, Y.; Garcia, A.; Wang, M.; Li, Y.; Bravo-Villasenor, G.; Campos, K.; Xu, J.; Han, B. Leveraging Citizen Science and Low-Cost Sensors to Characterize Air Pollution Exposure of Disadvantaged Communities in Southern California. Int. J. Environ. Res. Public Health 2022, 19, 8777. [Google Scholar] [CrossRef]
  37. McFarlane, C.; Raheja, G.; Malings, C.; Appoh, E.K.E.; Hughes, A.F.; Westervelt, D.M. Application of Gaussian Mixture Regression for the Correction of Low Cost PM2.5 Monitoring Data in Accra, Ghana. ACS Earth Space Chem. 2021, 5, 2268–2279. [Google Scholar] [CrossRef]
  38. Byrne, R.; Ryan, K.; Venables, D.S.; Wenger, J.C.; Hellebust, S. Highly local sources and large spatial variations in PM2.5 across a city: Evidence from a city-wide sensor network in Cork, Ireland. Environ. Sci. Atmos. 2023, 3, 919–930. [Google Scholar] [CrossRef]
  39. Hodoli, C.G.; Mead, I.; Coulon, F.; Ivey, C.E.; Tawiah, V.O.; Raheja, G.; Nimo, J.; Hughes, A.; Haug, A.; Krause, A.; et al. Urban Air Quality Management at Low Cost Using Micro Air Sensors: A Case Study from Accra, Ghana. ACS EST Air 2025, 2, 201–214. [Google Scholar] [CrossRef]
  40. Barkjohn, K.K.; Gantt, B.; Clements, A.L. Development and application of a United States-wide correction for PM2.5 data collected with the PurpleAir sensor. Atmos. Meas. Tech. 2021, 14, 4617–4637. [Google Scholar] [CrossRef] [PubMed]
  41. Leenose, J.F.; Vilagi, A.; Pride, D.; Betha, R.; Aggarwal, S. Unveiling the Limits of Existing Correction Factors for a Low-Cost PM2.5 Sensor in Cold Environments and Development of a Tailored Solution. ACS EST Air 2025, 2, 1191–1201. [Google Scholar] [CrossRef]
  42. Psotka, J.; Tracey, E.; Sica, R.J. Retrieval of bulk hygroscopicity from PurpleAir PM2.5 sensor measurements. Atmos. Meas. Tech. 2025, 18, 3135–3146. [Google Scholar] [CrossRef]
  43. Nilson, B.; Jackson, P.L.; Schiller, C.L.; Parsons, M.T. Development and evaluation of correction models for a low-cost fine particulate matter monitor. Atmos. Meas. Tech. 2022, 15, 3315–3328. [Google Scholar] [CrossRef]
  44. Wallace, L.; Zhao, T.; Klepeis, N.E. Calibration of PurpleAir PA-I and PA-II Monitors Using Daily Mean PM2.5 Concentrations Measured in California, Washington, and Oregon from 2017 to 2021. Sensors 2022, 22, 4741. [Google Scholar] [CrossRef] [PubMed]
  45. Ghahremanloo, M.; Choi, Y.; Momeni, M. Deep learning calibration model for PurpleAir PM2.5 measurements: Comprehensive Investigation of the PurpleAir network. Atmos. Environ. 2025, 348, 121118. [Google Scholar] [CrossRef]
  46. Jaffe, D.A.; Miller, C.; Thompson, K.; Finley, B.; Nelson, M.; Ouimette, J.; Andrews, E. An evaluation of the U.S. EPA’s correction equation for PurpleAir sensor data in smoke, dust, and wintertime urban pollution events. Atmos. Meas. Tech. 2023, 16, 1311–1322. [Google Scholar] [CrossRef]
  47. Weissert, L.F.; Henshaw, G.S.; Williams, D.E.; Feenstra, B.; Lam, R.; Collier-Oxandale, A.; Papapostolou, V.; Polidori, A. Performance evaluation of MOMA (MOment MAtching)—A remote network calibration technique for PM2.5 and PM10 sensors. Atmos. Meas. Tech. 2023, 16, 4709–4722. [Google Scholar] [CrossRef]
  48. Miskell, G.; Alberti, K.; Feenstra, B.; Henshaw, G.S.; Papapostolou, V.; Patel, H.; Polidori, A.; Salmond, J.A.; Weissert, L.; Williams, D.E. Reliable data from low cost ozone sensors in a hierarchical network. Atmos. Environ. 2019, 214, 116870. [Google Scholar] [CrossRef]
  49. Miskell, G.; Salmond, J.A.; Williams, D.E. Solution to the Problem of Calibration of Low-Cost Air Quality Measurement Sensors in Networks. ACS Sens. 2018, 3, 832–843. [Google Scholar] [CrossRef] [PubMed]
  50. Hagler, G.; Clements, A. Air sensor data—What are the current technical practices and unmet needs of the EPA, state, local, and tribal air monitoring agencies? National Ambient Air Monitoring Conference, Pittsburgh, PA, USA, 22–25 August 2020. Available online: https://cfpub.epa.gov/si/si_public_record_report.cfm?LAB=CEMM&dirEntryID=349515 (accessed on 30 September 2025).
  51. Hagler, G.; Clements, A.; Brown, R.; Garver, D.; Evans, R.; Barrette, C.; McMahon, E.; Vallano, D.; Judge, R.; Waldo, S.; et al. Understanding the air sensor data management, visualization, and analysis needs of government air quality organizations in the United States. National Ambient Air Monitoring Conference, Pittsburgh, PA, USA, 22–25 August 2022. Available online: https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=355522&Lab=CEMM&simplesearch=0&showcriteria=2&sortby=pubDate&searchall=Understanding+the+air+sensor+data+management,+visualization,+and+analysis+needs+of+government+air+quality+organizations+in+the+United+States&timstype= (accessed on 30 September 2025).
  52. Rosales, C.M.; Bratburd, J.R.; Diez, S.; Duncan, S.; Malings, C.; Pant, P. Open Air Quality Data Platforms for Environmental Health Research and Action. Curr. Environ. Health Rep. 2025, 12, 27. [Google Scholar] [CrossRef] [PubMed]
  53. Feenstra, B.; Collier-Oxandale, A.; Papapostolou, V.; Cocker, D.; Polidori, A. The AirSensor open-source R-package and DataViewer web application for interpreting community data collected by low-cost sensor networks. Environ. Model. Softw. 2020, 134, 104832. [Google Scholar] [CrossRef]
  54. Carslaw, D.C.; Ropkins, K. openair—An R package for air quality data analysis. Environ. Model. Softw. 2012, 27–28, 52–61. [Google Scholar] [CrossRef]
  55. Collier-Oxandale, A.; Feenstra, B.; Papapostolou, V.; Polidori, A. AirSensor v1.0: Enhancements to the open-source R package to enable deep understanding of the long-term performance and reliability of PurpleAir sensors. Environ. Model. Softw. 2022, 148, 105256. [Google Scholar] [CrossRef]
  56. Yang, C.-T.; Chan, Y.-W.; Liu, J.-C.; Lou, B.-S. An implementation of cloud-based platform with R packages for spatiotemporal analysis of air pollution. J. Supercomput. 2020, 76, 1416–1437. [Google Scholar] [CrossRef]
  57. Dai, Y.; Liu, B.; Tong, C.; Shi, Z. Aqpet—An R package for air quality policy evaluation. Environ. Model. Softw. 2024, 177, 106052. [Google Scholar] [CrossRef]
  58. Díaz, J.J.; Mura, I.; Franco, J.F.; Akhavan-Tabatabaei, R. aiRe—A web-based R application for simple, accessible and repeatable analysis of urban air quality data. Environ. Model. Softw. 2021, 138, 104976. [Google Scholar] [CrossRef]
  59. MacDonald, M.K.; Champion, W.M.; Thoma, E.D. SENTINEL: A Shiny app for processing and analysis of fenceline sensor data. Environ. Model. Softw. 2025, 189, 106462. [Google Scholar] [CrossRef] [PubMed]
  60. Kelly, C.; Fawkes, J.; Habermehl, R.; de Ferreyro Monticelli, D.; Zimmerman, N. PLUME Dashboard: A free and open-source mobile air quality monitoring dashboard. Environ. Model. Softw. 2023, 160, 105600. [Google Scholar] [CrossRef]
  61. Mahajan, S. Vayu: An Open-Source Toolbox for Visualization and Analysis of Crowd-Sourced Sensor Data. Sensors 2021, 21, 7726. [Google Scholar] [CrossRef] [PubMed]
  62. Kumar, M.; Frederick, S.G.; Barkjohn, K.K.; Clements, A.L. Sensortoolkit—A Python Library for Standardizing the Ingestion, Analysis, and Reporting of Air Sensor Data for Performance Evaluation. Sensors 2025, 25, 5645. [Google Scholar] [CrossRef]
  63. Clements, A. EPA Tools and Resources Webinar: Web-Based Data Visualization of Air Sensor Data with RETIGO Version 4. EPA Tools and Resources Webinar, Online, 5 June 2024. Available online: https://www.epa.gov/system/files/documents/2024-06/508-compliant-tools-webinar-retigo.pdf (accessed on 30 September 2025).
  64. Prados, A.I.; Leptoukh, G.; Lynnes, C.; Johnson, J.; Rui, H.; Chen, A.; Husar, R.B. Access, Visualization, and Interoperability of Air Quality Remote Sensing Data Sets via the Giovanni Online Tool. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 3, 359–370. [Google Scholar] [CrossRef]
  65. Conner, T.; Clements, A.; Williams, R.; Srivastava, M.; Kaufman, A.A. Macro Analysis Tool—MAT. 2018. Available online: https://www.epa.gov/air-research/instruction-guide-and-macro-analysis-tool-evaluating-air-sensors-collocation-federal (accessed on 3 November 2025).
  66. Barkjohn, K.K.; Seppanen, C.; Arunachalam, S.; Krabbe, S.; Clements, A.L. Air Sensor Data Unifier: R-Shiny Application. Air 2025, 3, 21. [Google Scholar] [CrossRef]
  67. Chang, W.; Cheng, J.; Allaire, J.; Sievert, C.; Schloerke, B.; Xie, Y.; Allen, J.; McPherson, J.; Dipert, A.; Borges, B. shiny: Web Application Framework for R. 2025. Available online: https://rstudio.r-universe.dev/shiny (accessed on 3 November 2025).
  68. R Core Team. R: A Language and Environment for Statistical Computing, version 4.4.1; R Foundation for Statistical Computing: Vienna, Austria, 2024. Available online: https://cran.r-project.org/doc/manuals/r-release/fullrefman.pdf (accessed on 3 November 2025).
  69. Szykman, J.; Freeman, M.; Henderson, B. U.S. EPA Remote Sensing Information Gateway (RSIG). In Proceedings of the Western States Air Resources Council Training Workshop on the Use of Satellite Data for Air Quality, Fort Collins, CO, USA, 5–7 August 2025. Available online: https://appliedsciences.nasa.gov/sites/default/files/2025-08/D2P3_RSIG_ARSET_2025_v2.pdf (accessed on 3 November 2025).
  70. Hampel, F.R. The Influence Curve and its Role in Robust Estimation. J. Am. Stat. Assoc. 1974, 69, 383–393. [Google Scholar] [CrossRef]
  71. Callahan, J.; Martin, H.; Wilson, K.; Brasel, T.; Miller, H. AirSensor: Process and Display Data from Air Quality Sensors, R package version 1.1.1. 2023. Available online: https://mazamascience.github.io/AirSensor/ (accessed on 30 September 2025).
  72. Chen, H.; Zhang, W.; Sheng, L. Canadian record-breaking wildfires in 2023 and their impact on US air quality. Atmos. Environ. 2025, 342, 120941. [Google Scholar] [CrossRef]
  73. Barkjohn, K.K.; Yaga, R.; Thomas, B.; Schoppman, W.; Docherty, K.S.; Clements, A.L. Evaluation of Long-Term Performance of Six PM2.5 Sensor Types. Sensors 2025, 25, 1265. [Google Scholar] [CrossRef]
  74. Khreis, H.; Johnson, J.; Jack, K.; Dadashova, B.; Park, E.S. Evaluating the performance of low-cost air quality monitors in Dallas, Texas. Int. J. Environ. Res. Public Health 2022, 19, 1647. [Google Scholar] [CrossRef] [PubMed]
  75. Masson, N.; Piedrahita, R.; Hannigan, M. Approach for quantification of metal oxide type semiconductor gas sensors used for ambient air quality monitoring. Sens. Actuators B Chem. 2015, 208, 339–345. [Google Scholar] [CrossRef]
  76. Spinelle, L.; Gerboles, M.; Villani, M.G.; Aleixandre, M.; Bonavitacola, F. Field calibration of a cluster of low-cost available sensors for air quality monitoring. Part A: Ozone and nitrogen dioxide. Sens. Actuators B Chem. 2015, 215, 249–257. [Google Scholar] [CrossRef]
  77. Liang, L. Calibrating low-cost sensors for ambient air monitoring: Techniques, trends, and challenges. Environ. Res. 2021, 197, 111163. [Google Scholar] [CrossRef]
  78. Johnson, N.E.; Bonczak, B.; Kontokosta, C.E. Using a gradient boosting model to improve the performance of low-cost aerosol monitors in a dense, heterogeneous urban environment. Atmos. Environ. 2018, 184, 9–16. [Google Scholar] [CrossRef]
  79. Ratingen, S.v.; Vonk, J.; Blokhuis, C.; Wesseling, J.; Tielemans, E.; Weijers, E. Seasonal Influence on the Performance of Low-Cost NO2 Sensor Calibrations. Sensors 2021, 21, 7919. [Google Scholar] [CrossRef] [PubMed]
  80. Liu, X.; Jayaratne, R.; Thai, P.; Kuhn, T.; Zing, I.; Christensen, B.; Lamont, R.; Dunbabin, M.; Zhu, S.; Gao, J.; et al. Low-cost sensors as an alternative for long-term air quality monitoring. Environ. Res. 2020, 185, 109438. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
