Article

Evaluation of Long-Term Performance of Six PM2.5 Sensor Types

by Karoline K. Barkjohn 1,*, Robert Yaga 2, Brittany Thomas 2, William Schoppman 2, Kenneth S. Docherty 1,2 and Andrea L. Clements 1,*

1 US Environmental Protection Agency Office of Research and Development, Research Triangle Park, Durham, NC 27711, USA
2 Amentum, Research Triangle Park, Durham, NC 27711, USA
* Authors to whom correspondence should be addressed.
Sensors 2025, 25(4), 1265; https://doi.org/10.3390/s25041265
Submission received: 20 December 2024 / Revised: 31 January 2025 / Accepted: 6 February 2025 / Published: 19 February 2025
(This article belongs to the Special Issue Recent Trends in Air Quality Sensing)

Abstract

From July 2019 to January 2021, six models of PM2.5 air sensors were operated at seven air quality monitoring sites across the U.S. in Arizona, Colorado, Delaware, Georgia, North Carolina, Oklahoma, and Wisconsin. Common PM sensor data issues were identified, including repeat zero measurements, false high outliers, baseline shift, varied relationships between the sensor and monitor, and relative humidity (RH) influences. While these issues are often easy to identify during colocation, they are more challenging to identify or correct during deployment, since it is hard to differentiate between real pollution events and sensor malfunctions. Air sensors may exhibit widely different performance even if they have the same or similar internal components. Commonly used RH corrections may still leave variable bias by hour of the day and by season. Most sensors show promise in achieving the U.S. Environmental Protection Agency (EPA) performance targets, and the findings here can be used to further improve their performance and reliability. This evaluation generated a robust dataset of colocated air sensor and monitor data; by making it publicly available along with the results presented in this paper, we hope the dataset will be an asset to the air sensor community in understanding sensor performance and validating new methods.

1. Introduction

Air sensors are increasingly used to measure particulate matter (PM) around the United States and the world. Air sensors, sometimes called “low-cost sensors”, can cost an order of magnitude less than air monitors and require fewer resources to operate and maintain. However, many air sensors have limitations in accuracy and precision that may make it challenging to deliver credible data [1,2]. In addition, there can be a lack of information from manufacturers on factory calibrations and other design features that can impact sensor performance. Sensor data must be carefully examined to identify problems [3] and understand limitations [4]. Particulate matter air sensors typically measure particles using light scattering either from a cloud of particles (i.e., nephelometric) or from single particles (i.e., optical particle counter (OPC)) [5]. Air sensors can fill in spatial gaps between regulatory monitors once their limitations are understood and improved where possible.
Air sensors are typically operated alongside federal reference method (FRM) monitors, federal equivalent method (FEM) monitors, or other research-grade monitors (i.e., colocation) to better understand their performance [6,7]. Colocating air sensors with FRMs, FEMs, or other air monitors enables users to better understand the performance of the sensors (e.g., accuracy, precision, bias, and drift) relative to the monitors and to understand the influence of other environmental factors on performance [8,9,10,11]. Colocated data (i.e., collected within 20 m horizontally of FRM or FEM monitors [10]) are scarce, especially for time periods greater than a few months. Colocation data are valuable for air sensor evaluations, quantifying precision between sensors, refining air sensor quality control algorithms, determining air sensor correction algorithms, estimating the uncertainty of data, and other validation activities. To date, many multi-sensor evaluations have occurred at a single site [12,13]. Single-site and short-term evaluations have similar limitations because of the limited range of environmental conditions and particle properties experienced. Sensor evaluations considering multiple sites often consider a single sensor manufacturer or a single model of air sensor [14,15,16,17,18,19], providing less generalizable results. Recent work has identified nearby air sensor–air monitor pairs and assumed they are close enough to support strong comparisons [14,18]. While this may often be the case, localized sources and real differences in pollutant concentrations can bias the findings. There is a need for more multi-site, multi-sensor, longer-term true colocation data to better understand air sensor performance, identify failure patterns, build effective corrections, and make sensors more robust, enabling a variety of air monitoring applications.
This evaluation across multiple U.S. states expands our understanding of PM2.5 air sensor performance by operating multiple sensor types across the United States, thus exposing them to a large range of environmental conditions and aerosol composition for more than a year. Our objectives with this study are (1) to identify common failure modes of PM sensors, (2) to briefly explore the influence of RH on various PM2.5 air sensors, (3) to compare bias across geographically diverse sites at different times of the year, and (4) to provide a publicly accessible dataset for validation of data quality assurance and correction methods.

2. Materials and Methods

2.1. Study Design Overview

Six sensor types were colocated across seven air monitoring sites. Nine sensors of each type were first colocated in Research Triangle Park, North Carolina (NC) (prior to July 2019); then, typically, one of each sensor type was deployed to each of the other six air monitoring sites while three remained in NC. Concurrent colocations across the seven sites ran between 22 July 2019 and 1 January 2021. There were technical difficulties in making the Arisense (ARS) (Aerodyne, Billerica, MA, USA) sensors operational, so those devices were sent out later, in February 2020; sixteen ARS sensors were evaluated in total, with the additional sensors deployed in NC. In total, 58 sensors were evaluated. At the end of this study, all sensors were brought back to NC for a final 30-day colocation.

2.2. Sensors Selected

Six sensor models were selected based on their popularity and availability during project planning in 2018 and early 2019 and to provide a variety of sensor components and data processing methodologies for comparison (Table 1; additional details in the Supplementary Information (SI)). Many of the models evaluated are no longer available from the manufacturer because of significant changes or improvements to the technology made in the past five years. All devices used wall power except for three Clarity Node-S devices evaluated in Research Triangle Park, NC. Four of the sensors tested use different versions of the Plantower sensor (Nanchang City, China), while the other two use the Nova SDS011 (Nova Fitness Co., Ltd., Jinan, China) and the Particles Plus OPC (Stoughton, MA, USA).

2.3. Long-Term Monitoring Sites Selected

Seven colocation sites were chosen based on location, both geographic and relative to other selected sites, and the types of criteria pollutants being measured at each site (Table 2, Supplementary Information S1.1). These sites span five of the nine continental U.S. climate regions [20,21] (Figure 1), have a wide range of temperature, RH, and PM2.5 concentrations (Figure 2), and represent urban-to-neighborhood scale environments to ensure a variety of source influences. Sites outside of NC included Phoenix, Arizona (AZ); Denver, Colorado (CO); Wilmington, Delaware (DE); Decatur, Georgia (GA); Oklahoma City, Oklahoma (OK); and Milwaukee, Wisconsin (WI). These are regulatory sites monitoring a variety of pollutants, except for the NC site, which is the EPA's Air Innovation Research Site (AIRS) and reports data to AirNow. Teledyne API T640 or T640x optical monitors (San Diego, CA, USA) were used for comparison at all sites except West Phoenix, where a Thermo tapered element oscillating microbalance (TEOM) 1405-DF (ThermoFisher Scientific, Waltham, MA, USA) was used. All T640 and T640x data use the original firmware and are not reflective of the April 2023 firmware update. All monitors were maintained, and data were quality assured, by local agency staff. Sensors were within 20 m horizontally of the monitors (as specified in the performance targets [10]), with many sensors within a few meters of the monitors.

2.4. Data Processing and Analysis

Most reference data were downloaded from the Air Quality System (AQS) (Table S8). These data are quality assured and validated by the air agencies providing them, as specified in each site's quality assurance project plan (QAPP) (e.g., flow and leak checks of instruments, investigation of outliers, and maintenance). NC data were provided directly by EPA staff after quality assurance similar to that at other sites. Data not available from AQS were downloaded from AirNow Tech, including temperature and RH data from the AZ site. AirNow Tech data are not as closely quality-assured and quality-controlled as data from AQS.
Data were averaged to 1-h intervals. Plots were generated to visualize each month of data for each sensor type at each site (example: Figure S17). Each plot was visually inspected to identify common data issues.
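As an illustration of this averaging step, the following sketch (assuming raw readings in a pandas DataFrame with a datetime index; the file name, column name, and completeness threshold are hypothetical, not the project's actual scheme) computes 1-h means:

```python
import pandas as pd

# Hypothetical input: sub-hourly PM2.5 readings with a timestamp column.
raw = pd.read_csv("sensor_raw.csv", parse_dates=["timestamp"], index_col="timestamp")

# Average to 1-h means; require at least 75% of the expected records per hour
# (the 75% threshold and 30 records/hour are illustrative assumptions).
EXPECTED_PER_HOUR = 30
hourly = raw["pm25"].resample("1h").agg(
    lambda x: x.mean() if x.count() >= 0.75 * EXPECTED_PER_HOUR else pd.NA
)
```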
For the CNO sensors, we evaluated both the raw uncalibrated PM2.5 values generated by the device and the values corrected with the 2021 wildfire correction (CNO_wf) (Supplementary Equation (S1)) (https://www.clarity.io/2021-wildfire-calibrations, accessed: 29 August 2024), now superseded by the PM2.5 Global Calibration v2 (https://www.clarity.io/blog/clarity-releases-v2-pm-global-calibration-model-with-significant-performance-improvements, last accessed 12 November 2024). For PAR, we used previously developed methods to exclude measurements when the A and B channel measurements differed significantly [22]. We also evaluated the performance using previously developed corrections (PAR_wf) [15,22]. The highest PAR hourly concentration was 448 µg/m3, so the extended correction for high concentrations (>570 µg/m3) was not needed [15]. PAR sensors were the only sensors with duplicate internal sensors (i.e., Plantower PMS5003s (Nanchang City, China)), labeled as channels A and B, evaluated during this study. Since other sensors did not have duplicate internal sensors, similar quality assurance could not be applied.
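A minimal sketch of this type of A/B agreement screen is shown below; the thresholds are placeholders for illustration, not the published criteria from [22]:

```python
import numpy as np
import pandas as pd

def ab_channel_average(a: pd.Series, b: pd.Series,
                       abs_tol: float = 5.0, rel_tol: float = 0.70) -> pd.Series:
    """Average PurpleAir A/B channels, masking hours where they disagree.

    Hours are kept when the channels agree within abs_tol (µg/m3) or within
    rel_tol of their mean; both thresholds are illustrative placeholders
    (see Barkjohn et al. [22] for the criteria actually used).
    """
    mean = (a + b) / 2
    diff = (a - b).abs()
    agree = (diff <= abs_tol) | (diff / mean.replace(0, np.nan) <= rel_tol)
    return mean.where(agree)
```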
For this project, sites were visited roughly weekly, with some interruptions because of staffing and COVID-19. During these visits, the physical operation of each sensor was checked, data were downloaded manually from some sensors, and data were checked for completeness. A more thorough data review was completed later, in which research staff applied data flags to each data record. Many flags indicate why data were not available, but some indicate abnormalities (e.g., sampling interval abnormality). Flag files were compiled for each sensor, documenting a variety of errors experienced during testing (Table S24). These flags include: sensor operating somewhere other than the field site; warm-up period; shutdown for data collection/maintenance; sensor maintenance; operator working near the device; sampling interval abnormality; data loss due to user error, cellular/Wi-Fi communication error, or power connection error; sensor malfunction, either hardware or firmware; data incomplete due to meteorological, gas, or PM sensor malfunction; and data value issues, including drastic/sudden spikes or decreases, values outside the expected range, or timestamp adjustments. Many of these flagged data were not removed (e.g., sampling interval abnormality). The flags were summarized to better understand common issues and failure patterns for the different sensor types. Since this paper focuses on PM2.5, only the PM2.5 sensor and “all” sensor flags were considered, as in some cases the gas sensors failed separately (e.g., ozone (O3) flags were not considered in this paper).
We also considered the influence of relative humidity by binning the data into 10% RH bins (e.g., 0–10%, 10–20%, and 20–30%) and computing the ratio of sensor PM2.5 to monitor PM2.5 in each bin to evaluate the Percent RH Influence:
$$\text{Percent RH Influence} = \frac{\text{High RH Ratio} - \text{Low RH Ratio}}{\text{Mean Ratio}} \times 100\%$$
where High RH Ratio is the average hourly sensor/monitor ratio in the highest RH bin with at least 10 h of valid ratios, Low RH Ratio is the corresponding ratio in the lowest RH bin with at least 10 h of valid ratios, and Mean Ratio is the mean of the means of all 10 bins. Note that the Mean Ratio is not the same as the average ratio of all data, since the amount of data in each bin depends on the environmental conditions during each colocation. Ratios were excluded when the monitor read < 5 µg/m3, as lower readings may be below the detection limit of the instruments, and small variations in the denominator may add noise obscuring the influence of RH.
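A sketch of this calculation, assuming aligned hourly pandas Series (function and variable names are ours, for illustration only):

```python
import numpy as np
import pandas as pd

def percent_rh_influence(sensor: pd.Series, monitor: pd.Series, rh: pd.Series,
                         min_hours: int = 10, min_pm: float = 5.0) -> float:
    """Percent RH Influence as defined above, from aligned hourly series."""
    df = pd.DataFrame({"ratio": sensor / monitor, "rh": rh})
    df = df[monitor >= min_pm].dropna()  # exclude hours with monitor < 5 µg/m3
    # Mean sensor/monitor ratio in each 10% RH bin.
    bins = pd.cut(df["rh"], bins=np.arange(0, 101, 10))
    stats = df.groupby(bins, observed=True)["ratio"].agg(["mean", "count"])
    valid = stats[stats["count"] >= min_hours]  # bins with >= 10 valid hours
    high_ratio = valid["mean"].iloc[-1]  # highest valid RH bin
    low_ratio = valid["mean"].iloc[0]    # lowest valid RH bin
    mean_ratio = stats["mean"].mean()    # mean of the bin means
    return (high_ratio - low_ratio) / mean_ratio * 100.0
```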

3. Results and Discussion

3.1. Common Failure Points

Common points of failure seen across the units included loss of power due to loose or damaged power connections, battery issues, corrupted or damaged data storage (i.e., SD card, USB), and lost communication. Some delicate SD card ports were damaged due to the frequent data download schedule. Many of these failures could be addressed by modifications to the sensor design, including the use of connectors that prevent twisting of wires, avoiding silicone as the only means to secure fittings, mounting batteries in a way that prevents gravity from working against contact points, incorporating onboard data storage backup even if a sensor transmits data, testing sensor operation and data transmission in many different environments/countries, and adding visual status indicators for power, battery voltage, data logging, and data communications both on the sensor and in the online data dashboard. Real-time cellular communication strength indicators can help users site sensors in the field.
The most common flags for the CNO, ARS, AQY, and RAM sensors were sampling interval abnormalities (Table S25). Each sensor model had expected sampling intervals (Table S1), which varied by sensor type, except for the Clarity Node-S, which has a sampling period that is variable to accommodate the solar-powered operation. This flag was applied if intervals were skipped or if sampling did not conform to the expected interval. The biggest issue for PAR sensors was data loss due to a power connection error. For MAX sensors, the biggest issue was data loss from sensor hardware malfunction, most often associated with the battery. Other common sensor issues included PM sensor malfunction and data loss due to firmware malfunction, user error, or cellular/Wi-Fi communication error. Additional details are provided in the Supplementary Information.

3.2. Overall Performance by Site

First, we considered the performance of each type of sensor at each location without removing periods with sensor data issues. We evaluated performance based on the EPA performance target of R2 > 0.7 for PM2.5 [10]. We did not consider the additional performance metrics (e.g., slope and intercept), since a sensor with an adequate R2 could also meet the slope and intercept targets after a basic linear regression correction. The performance target reports recommend evaluating at least a 30-day period [10], and most of the evaluations here cover more than a year. The performance targets also recommend using 24-h averages, though 1-h averages can be used as well [10]. We used 1-h averages here because of the interest in high-time-resolution data; however, 24-h averaged R2 would often be higher.
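For example, a paired hourly R2 check against the target might look like the following sketch (function and variable names are ours, not from the performance target reports):

```python
import pandas as pd
from scipy import stats

def meets_r2_target(sensor: pd.Series, monitor: pd.Series,
                    target: float = 0.7) -> tuple[float, bool]:
    """Compute hourly sensor-vs-monitor R2 and compare it to the EPA target [10]."""
    paired = pd.concat({"sensor": sensor, "monitor": monitor}, axis=1).dropna()
    r2 = stats.linregress(paired["monitor"], paired["sensor"]).rvalue ** 2
    return r2, r2 > target
```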
Sensor performance is highly variable by sensor type (Figure 3). The PAR sensors met the R2 target in all states except OK, both with and without the wildfire correction (OK R2 = 0.67–0.68). CNO met the R2 target in all states except NC and OK (R2 = 0.56–0.68) and, when the wildfire correction was used, no longer met the R2 target in GA (R2 decreased by 0.20) but performed better in OK (although the R2 only increased by 0.03). Similar to CNO, the MAX met the R2 target in all states except NC and OK (R2 = 0.50–0.58). The AQY sensors met the EPA R2 performance target in WI and CO, with weak correlations (R2 = 0.00–0.42) seen in other states, typically because of high-concentration outliers. The RAM and ARS sensors performed poorly across all states (R2 = 0.00–0.57 and R2 = 0.02–0.45, respectively). The RAM and PAR use the same internal sensor, the Plantower PMS5003; the difference in performance between these two sensors highlights how strongly integration into the larger package (e.g., orientation, flow path, and onboard correction) affects performance.
Sensor performance is also highly variable by location (Figure 3), but some sites had adequate performance for most sensors. CO and WI had the most sensors meet the R2 target, with six of eight sensors or corrected sensors meeting the target. This suggests that it may be easiest to use PM2.5 air sensors in CO and WI and obtain accurate results, likely because of the wider range of PM2.5 concentrations experienced, more consistent particle properties, and favorable meteorological conditions. DE and AZ had five of eight sensors meet the R2 target, suggesting that PM2.5 sensors will typically perform well in these locations. AZ experienced dust impacts and has more large particles than other parts of the country (Figure S1). Many previous studies have shown that PM air sensors often measure only particles 0.3 to 1 µm in diameter with variable success. Dust can lead to sensor underestimations and inaccuracies [23,24,25,26,27], which likely leads to poor or changing agreement between sensors and monitors in AZ. However, the R2 metric depends in part on the concentration range experienced, and AZ had many hourly PM2.5 concentrations above 100 µg/m3 (Figure 2, sample size (N) = 72), leading to typically higher R2 values (R2 = 0.00–0.92). In addition, these high-concentration events may have been primarily wood smoke events, which are easier for sensors to measure [1,23], rather than dust events (i.e., larger-particle events) [28]. Sensors may have performed better in these locations for a variety of reasons.
Sensor performance also depends somewhat on the monitor used as a reference. Both the TEOM and the T640/T640x were designated and operated as Federal Equivalent Methods. Past work comparing sensor performance to a TEOM and a T640 has shown stronger correlations between sensors and the T640, likely because both are optical methods, and has also shown significant fluctuations from the TEOM at 5 µg/m3 or less [29]. However, the T640 or T640x may provide slightly higher estimates of PM2.5 compared with the TEOM [30] but typically perform adequately [31].
Other sites had poor performance for half or more of the sensor types or corrections, indicating they may be more challenging environments for sensors to operate in. GA had four of eight sensor types or sensor corrections meet the target, suggesting that some sensors may be able to provide adequate measurement accuracy in GA. Only two sensor types or sensor corrections (PAR and PAR wildfire corrected) met the R2 target in NC. All sensors were colocated in NC before and after testing, and typically, three sensors of each type were run simultaneously throughout the project. Sensors have different responses (i.e., low precision), leading to the typically lower performance as measured by overall R2 in NC. This lack of precision has been shown in past work with similar sensors [12,32]. Only one sensor type or sensor correction (CNO_wf) met the R2 target in OK. The OK site experienced changing particle size distributions (Figure S1) and properties. Changing particle size distributions can lead to sensor inaccuracies and variable relationships between the sensors and monitors [24,25,33], which likely leads to poor or changing agreement between sensors in OK.
Sites with typically better performance likely have more stable particle properties, other than AZ, which was previously discussed. In many cases, we would need additional information on particle size distribution and chemical composition to draw further conclusions about the differences at these sites. Many previous studies have looked into the PM2.5 characteristics at these sites and in these cities, showing variable particle properties (e.g., size distribution and chemical composition) depending on source [34], wind speed and direction [34,35,36,37,38,39], local meteorology [40], time of day [36,39,40,41,42], weekday versus weekend [43], and season [43]. These variations in particle properties can impact particle light scattering [44] and particle hygroscopicity [45], which can impact sensor performance [33]. However, it is unknown how applicable that work is to the period studied during this project. The time period of this study may differ because of year-to-year differences in sources and meteorology, spatially variable long-term trends in PM2.5 concentrations and composition [46,47,48], and the impacts of the COVID-19 pandemic on local and regional PM2.5 concentrations [49].
Our results are generally in line with past work for most sensor types evaluated during this study. APT sensors (including the APT Maxima (MAX) and Minima) strongly correlate with reference measurements internationally [50,51,52,53], sometimes requiring an RH correction [54] or a correction dependent on source composition [55], and exhibit strong correlation in the lab [56,57], but correlations may be weaker in some locations and can depend on reference monitor type and data averaging [58]. To our knowledge, no work to date has directly compared the performance of the APT MAX and Minima, but the devices have the same internal components. CNO sensors have typically seen strong or near-strong correlations (R2 = 0.69) with reference measurements internationally [59], although some studies have seen moderate correlations (R2 = 0.61) with limited improvement after correction [60]. Stronger correlations were observed when also correcting for temperature and RH [61]. PAR sensors are the most widely studied and have typically seen strong correlations across the U.S. and North America, especially once RH influences are accounted for [14,15,19,62], but have changing relationships during dust impacts [23,63]. Past PM2.5 evaluations of the ARS are not comparable due to changes in the internal sensing component [64]; to our knowledge, no past work has evaluated this model of the ARS, which incorporated the Particles Plus OPC. AQY has been strongly correlated with a Beta Attenuation Mass monitor in California during a short-term evaluation [12], with mixed results during short-term smoke impacts [62], mixed results in California [65], and poor correlation in Texas [66]. More complicated network corrections have been proposed to improve AQY sensor performance for PM2.5; however, these corrections depend on external information (i.e., using monitor data as a proxy) and not just data coming from the sensor itself [65]. Many of the international evaluations have higher average PM2.5 concentrations and wider ranges of PM2.5, potentially contributing to stronger correlations in those areas. Additional correction could further improve the performance of these sensors.
Past work has typically shown better performance for the RAM than was found during this study. The RAM has shown strong correlations during short-term smoke impacts [62] and mixed but near-adequate results internationally [67]. It is unknown how similar the correction that came with our sensors was to those used in these past projects. Much past work with the RAM used a previous design in which the RAMP was attached to an external PM2.5 sensor (e.g., Met-One Neighborhood Particulate Monitor) or deployed alongside PAR sensors [68]. It is likely that changes to the sensor design over time have contributed to the differences in results between this study and past work.

3.3. Common Data Issues

Four common issues were identified in the monthly plots and are outlined in the sections below, along with an additional section on RH influence. The four common issues identified were repeat zero measurements, single point high concentration outliers, baseline shift where the relationship between the sensors and monitor changes for a period of hours, and variable relationships between sensors and monitors where the relationship between sensors and monitors changes for longer periods (e.g., days). These issues were removed before considering the influence of RH and variability in the bias sections below but were not removed before considering the overall performance (Section 3.2 above).

3.3.1. Zero

Some of the sensors reported repeat zeros. This was especially common for the ARS sensors, where it was often an indicator that the PM sensor pump had failed. All zeros were removed from the ARS data (12%), and for other sensors, short periods with repeat zeros were removed (<1%). These short periods of zeros were identified visually. It is important to remove zeros carefully from the dataset, as past work has shown some sensors will read repeat zeros when the detected concentrations are low and near the sensor's limit of detection [69]. Depending on the project objectives, zeros occurring when concentrations are low and near the limit of detection should be kept in the dataset so as not to bias it high. For example, Figure 4 shows three MAX sensors operating in NC. In late October, one of the MAX sensors (purple) starts reporting repeat zeros; however, zeros also occur when concentrations are low, as shown by the other sensors in the time series and scatter plots.
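A rule-based version of this screen might look like the sketch below. The run-length threshold is an assumption (the short zero periods in this study were identified visually), and flags would still need review so that legitimate zeros near the detection limit are kept:

```python
import pandas as pd

def flag_repeat_zeros(pm: pd.Series, min_run: int = 6) -> pd.Series:
    """Flag runs of at least min_run consecutive zero readings.

    min_run is an illustrative threshold, not a rule used in this study.
    """
    is_zero = pm.eq(0)
    run_id = (is_zero != is_zero.shift()).cumsum()       # label consecutive runs
    run_len = is_zero.groupby(run_id).transform("size")  # length of each run
    return is_zero & (run_len >= min_run)
```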

3.3.2. Outlier

Some sensors also experienced outliers, where the sensor data would suddenly be tens of µg/m3 higher than the monitor (e.g., Figure 5A,B) or than previous and subsequent 1-h sensor readings. However, other examples of outliers are true short-term PM pollution events that would be hard to identify without reference data (e.g., Figure 5C,D). Both examples show a concentration jump of about 50 µg/m3 compared with the average 1-h concentrations over the prior week. Without additional details about what might be causing a 1-h high-concentration event, and without data from the monitor, it would be hard to determine whether the sensor readings represent a sensor issue or a real pollution event. PAR is unique in that its dual Plantower design allows most sensor-malfunction outliers to be excluded, since it is highly unlikely both channels will have outlier issues at the same time. If both channels report a high concentration, it is more likely a real PM2.5 event.
While more time could be spent developing mathematical criteria to exclude outliers from our dataset, it is important to consider that colocation often occurs at regulatory sites away from localized sources, while sensor networks are often deployed to measure pollutant hotspots [70,71]. This means any methods validated at a regulatory site might incorrectly remove outliers from sensors deployed in a network with different PM patterns (e.g., localized sources and localized geography). For this reason, outliers were removed from this colocation study dataset manually by visually inspecting each monthly plot.
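For completeness, one hypothetical mathematical criterion of the kind discussed above is sketched here, flagging points far above a rolling median; the window and threshold are assumptions, and as noted, such a rule could wrongly remove real events at sites with localized sources:

```python
import pandas as pd

def flag_spikes(pm: pd.Series, window: int = 24, n_mads: float = 10.0) -> pd.Series:
    """Flag single-point spikes far above a rolling median (hypothetical rule).

    Uses a rolling median absolute deviation (MAD) floored at 1 µg/m3 so that
    quiet periods do not make the rule overly sensitive.
    """
    med = pm.rolling(window, center=True, min_periods=window // 2).median()
    mad = (pm - med).abs().rolling(window, center=True,
                                   min_periods=window // 2).median()
    return (pm - med) > n_mads * mad.clip(lower=1.0)
```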

3.3.3. Baseline Shift

Sometimes sensors saw baseline shifts leading to short-term strong overestimates of PM2.5 compared with the monitor (Figure 6A,B). Baseline shift errors are periods when the sensor overestimates PM2.5 concentrations more than is usual for that sensor and for longer than a single-point outlier, as described in the section above. However, PM2.5 sometimes has true baseline shifts in concentration when regional or long-range pollution blows in (Figure 6C,D). An example of a real baseline shift occurred in late June 2020, when Saharan dust impacted the United States [72]. Similar to the outlier example, without additional information on local or regional sources, air monitoring data, or data from colocated sensors, it would be hard to distinguish sensor baseline shifts from true PM2.5 events. We define these events differently from outliers in that multiple points in a row are impacted instead of a single point.
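With a reference (a monitor or a duplicate sensor), a sustained offset of this kind can be flagged with a simple rolling comparison. This sketch is illustrative only (the window and threshold are assumptions), and it requires exactly the reference data that a typical network deployment lacks:

```python
import pandas as pd

def flag_baseline_shift(sensor: pd.Series, reference: pd.Series,
                        window: int = 24, threshold: float = 15.0) -> pd.Series:
    """Flag hours where the rolling-median sensor-minus-reference offset
    exceeds threshold µg/m3, i.e., a sustained (multi-hour) overestimate."""
    offset = (sensor - reference).rolling(window, min_periods=window // 2).median()
    return offset > threshold
```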

3.3.4. Variable Relationship Between Sensor and Monitor

While sensors and monitors showed strong correlations in some months, other months showed weak correlations or distinctly different relationship patterns. Both examples in Figure 7, panels B and D, show two distinct prongs in the scatter plots. Panels A and B in Figure 7 show data from two colocated RAM sensors along with reference data. In this case, the response of one sensor changes mid-month, showing higher concentrations that agree more closely with the reference monitor. This changed response cannot be explained. The example shown in panels C and D of Figure 7 is from the Saharan dust event that occurred late in the month. In this case, CNO often reads higher than the monitor (i.e., as shown in the first part of the time series through 22 June and by the many scatter plot points above the 1:1 line). However, on 26–28 June, the sensor did not detect the dust particles while the reference monitor did, leading to a relatively low sensor response.

3.4. RH Influence

Much past work has documented that sensors are often biased by high RH [16,33,68]. Some sensors, including the ARS, CNO, MAX, and PAR, show increasing overestimation of PM2.5 as RH increases (Figure 8). AQY and RAM show little change in bias across different RHs. After applying the wildfire corrections, both PAR and CNO show little impact of RH on bias. The RH terms in the CNO and PAR wildfire equations are similar (CNO = −0.0510 × RH, PAR = −0.0862 × RH) and appear effective across all sites. However, these terms are not necessarily directly comparable because of differences in the other terms in each equation, potential differences in RH measurements, and other factors.
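For reference, the linear U.S.-wide PurpleAir correction from Barkjohn et al. [22], which contains the −0.0862 × RH term quoted above, can be applied as in the sketch below; the wildfire correction [15] extends this form piecewise at high concentrations, and the CNO coefficients are given in Supplementary Equation (S1):

```python
def purpleair_us_correction(pm_cf1: float, rh: float) -> float:
    """U.S.-wide PurpleAir correction from Barkjohn et al. [22]:
    PM2.5 = 0.524 * PA_cf1 - 0.0862 * RH + 5.75 (PM in µg/m3, RH in %)."""
    return 0.524 * pm_cf1 - 0.0862 * rh + 5.75
```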
To compare the magnitude of the influence of RH by site and sensor, we compared the change in the mean sensor-to-monitor concentration ratio at low and high RH. Figure 9 expresses the influence of RH as a percent, calculated by dividing the difference between the mean ratios in the highest and lowest RH bins by the mean ratio across all bins. AQY is well corrected for RH across sites, with the influence at most sites within ±35%. The ARS has the largest influence from RH, with measurements at high RH 200% higher or more at most sites; this is the only device that uses an OPC, which may lead to the larger influence of RH [33]. CNO and MAX see overestimations due to RH at all sites, while PAR sees overestimation at most sites. Negative RH influence, below 0 on the plot, indicates over-correction for RH, since we would expect RH to increase the size of the particles and, therefore, the PM2.5 estimates. After correction, PAR_wf and the RAM over-correct for RH at most sites, while CNO_wf slightly over-corrects. The influence of RH on the measurements is dependent on the sensor, with large differences seen across sensor types.
The difference in RH influence across locations is more similar than the difference in influence by sensor type. Typically, higher RH leads to higher sensor PM2.5 estimates, but some sensors at each site show the opposite trend (i.e., higher RH leads to lower PM2.5 estimates), suggesting that measurements have been over-corrected. Some of the difference in RH influence between locations could be due to variations in particle type and hygroscopicity but some of the variation may be due to differences in individual sensor performance.
Federal equivalent methods keep the RH of the particles relatively constant by conditioning the sampled aerosol. None of the sensors evaluated in this study had dryers or RH controls. If sensors do not physically control RH, correction algorithms are typically needed to account for water associated with the particles. These corrections allow the measurements from sensors to be comparable to federal equivalent methods that control RH. These algorithms depend on the RH measured by the sensor (e.g., the CNO and PAR wildfire corrections), so it is important to understand the accuracy of the RH measurement as well. Scatterplots in Figure 10 explore the agreement between sensor-based RH measurements and the high-quality measurements made at each air monitoring site. Some of the scatter in these plots may be due to differences in the internal operating temperature of the sensor and fluctuations as the sensor experiences shade versus sun; typically, these sensors would experience periods of sun and shade every day. In some cases, the internal RH measured by the sensor may differ from the ambient RH, but because it is closer to what the sampled particles experience inside the sensor, these values are potentially more useful for correction. The ARS RH sensors seem to be the least consistent and reliable. Some of the AQY RH measurements are at 100% even when ambient RH is low (~25%). Some time periods are identified for PAR and RAM where stuck values occur, indicating sensor or communication error.

3.5. Variability in Bias

3.5.1. Bias by Sensor and Location

We considered mean bias error (MBE) by sensor make and location (Figure 11). These MBEs are comparable across sites because the average concentrations, as measured by the monitors, were 7–10 µg/m3 at each location. AQY and the RAM strongly underestimate PM2.5 across all locations. The ARS has wide variability, with some sensors showing strong underestimation and some showing strong overestimation. Without correction, CNO and PAR overestimate the concentrations at multiple sites. CNO_wf, MAX, and PAR_wf typically have low biases, within ±1.7 µg/m3 (20% of the average PM2.5 concentration). When comparing bias by site, the differences are less distinct, with all sites having at least one sensor with a bias of less than 1.7 µg/m3 and other sensors that strongly over- or underestimate. The typical bias is more variable by sensor type, with less difference seen in bias by location.
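As a minimal sketch, MBE for each site and sensor type can be computed from a long-format table of paired hourly data (column names are illustrative, not the project's actual schema):

```python
import pandas as pd

def mean_bias_error(paired: pd.DataFrame) -> float:
    """MBE = mean(sensor - monitor) over valid paired hours."""
    return (paired["sensor"] - paired["monitor"]).dropna().mean()

# Hypothetical long-format table with columns: site, sensor_type, sensor, monitor.
# mbe = df.groupby(["site", "sensor_type"]).apply(mean_bias_error)
```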

3.5.2. Hour of Day Performance

PM2.5 concentrations vary over the day and by site in terms of average concentration and variability (Figure 12 and Figure S19). AZ has the largest daily variability in average PM2.5 by hour, with concentrations varying by more than 6 µg/m3, and WI has the least variability, with less than 1 µg/m3 difference over the day. Although already corrected, both CNO_wf and PAR_wf data typically underestimate PM2.5 concentrations (Figure 12). This may be due to the comparison with T640 and T640x data, which have been shown to be slightly biased high [15,30,73,74,75]. While a new correction for T640 data was developed that slightly adjusts the PM2.5 measurements (https://downloads.regulations.gov/EPA-HQ-OAR-2023-0642-0029/content.pdf, accessed on 3 September 2024), these data were collected prior to the release of the correction, so uncorrected T640 data have been used throughout this paper. If the T640 measurements were adjusted for the 2 µg/m3 overestimation, the bias for the sensors would be closer to zero. Interestingly, the CNO wildfire correction works almost perfectly in WI, with little daily variation in bias, which remains near zero. For PAR, the bias is similar (about 1 µg/m3) for all sites except AZ and CO, and all sites are slightly more biased in the morning and during higher RH. In some cases, the sensors do not show the same daily patterns as the monitor. In GA, NC, and OK, the monitor shows lower evening concentrations, but GA CNO_wf, both NC sensor types, and OK CNO_wf show higher evening concentrations. These sensors could therefore be unreliable for identifying the time of day when the air is cleanest for exercising or spending time outdoors. This may be due to differences in particle properties or environmental conditions at different times of day, leading to different sensor responses. These results suggest more work may be needed to improve the corrections so that bias is stable by hour of day.
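A Figure 12 style diurnal bias profile reduces to a group-by on the hour of day, as in this sketch (assuming aligned hourly series with a DatetimeIndex; the function name is ours):

```python
import pandas as pd

def diurnal_bias(sensor: pd.Series, monitor: pd.Series) -> pd.Series:
    """Mean sensor-minus-monitor bias for each hour of the day (0-23)."""
    bias = (sensor - monitor).dropna()
    return bias.groupby(bias.index.hour).mean()
```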

3.5.3. Monthly Bias

Bias varies by month (Figure 13, Table 3), with MBE from +13 µg/m3 (DE PAR July 2019; NC PAR Oct 2020; WI CNO Dec 2020) to −18 µg/m3 (AZ RAM Jan 2020) (additional details in the Supplementary Information). The most variation in MBE occurs in NC for the ARS, CNO_wf, MAX, and PAR; the variability is due both to seasonal differences in performance and to the difference in bias between duplicate sensors operated in NC (i.e., spread in MBE values in the same month), indicating low precision. Excluding NC, the ARS, CNO, and MAX have the most monthly difference in WI, and CNO_wf, PAR_wf, and RAM have the most monthly difference in MBE in AZ. AQY has the most variability in MBE in OK, and PAR has the most variability in DE. Conversely, most sensors had the least variability in MBE in GA (CNO, MAX, PAR, PAR_wf, and RAM), with AQY and ARS having the least variability in CO and CNO_wf having the least variability in WI. These results suggest that WI and AZ may have more seasonal differences in particle properties and environmental conditions that lead to variable performance. However, corrections such as the CNO_wf correction may be able to account for much of the difference in monthly bias in WI due to RH influences. In AZ, more dust events occur in the summer [76] and more smoke in the winter [28], likely leading to variable performance. AZ has the largest change in PM2.5 concentration over the months of the year, with a minimum of 5–6 µg/m3 in spring and summer months (i.e., August and September 2019 and March–July 2020) and a maximum of 30 µg/m3 in December 2020 and 22 µg/m3 in January 2020. In AZ, some sensors show larger underestimation (i.e., negative MBE) at higher concentrations (e.g., AQY, CNO_wf, PAR_wf, and RAM), while others show larger overestimation at higher concentrations (e.g., CNO, MAX, and PAR), which seems mostly dependent on the concentration ranges experienced. For example, much higher concentrations were experienced in AZ during January 2020 than in July 2020, leading to larger over- and underestimations depending on concentration.
Even after applying corrections, PAR_wf and CNO_wf show strong biases at different sites across different months. While the median MBE is −1 µg/m3 for CNO_wf, the MBE varies from −7 to +1 µg/m3. For PAR_wf, the median MBE is also −1 µg/m3, but the MBE varies from −9 to +2 µg/m3. It is important to understand the limitations of sensors and the potential for seasonal bias. Without careful correction, conclusions about seasonal patterns in PM2.5 as measured by sensors should be used with care, as differences in particle properties and environmental conditions may lead to incorrect conclusions. In addition, different sites may need different lengths of colocations to generate useful results, and leaving a sensor colocated will help determine how sensor performance may change seasonally.
We also compared the range in monthly MBE to the overall R2 for each sensor type at each site (Table 3). The R2 values are calculated after removing the common data issues identified in Section 3.3, so in many cases they are higher than those shown in Figure 3. The median range of MBE for each site/sensor model is 7 µg/m3. In AZ and OK, sensors typically have large ranges in monthly MBE. In AZ, the R2 target is typically met, while in OK, it typically is not. It may be challenging for most sensors to accurately measure PM2.5 in AZ and OK because of changing particle properties and seasonal differences. In GA and NC, many sensors do not meet the R2 target but have a low range of MBE values; it may be more challenging for sensors to measure accurate concentrations in GA and NC as well. In CO, DE, and WI, the range in MBE is low, and the R2 target is met by most sensors, indicating these may be the easiest environments for sensors to perform well in.

4. Conclusions

Long-term air sensor evaluations across six states highlighted common failure points for air sensors, including both physical (e.g., shipping damage, communication loss, and wiring coming unplugged) and data issues (e.g., sampling frequency issues, outliers, stuck zero, baseline shifts, and variable relationships between sensor and monitor PM2.5). Many of these data issues would be hard to identify without colocated monitor data or at least data from a duplicate sensor.
RH can lead to strong bias from air sensors, highlighting the need for physical humidity control (e.g., dryer) or robust correction algorithms and performance evaluation of the onboard RH measurement. Sensors with stronger RH influences (e.g., OPCs) may benefit the most from nonlinear RH corrections. Common RH correction methods may reduce bias on average, as shown with AQY, CNO_wf, and PAR_wf. However, care should be taken to ensure these data are not over-corrected.
Bias may be variable by month, indicating that seasonal corrections may improve performance and highlighting the need to consider sensor limitations when drawing conclusions (e.g., ensure seasonal sensor bias is limited before drawing conclusions about seasonal PM2.5 concentrations using sensors).
The sensors evaluated show varying degrees of promise to provide accurate PM2.5 data across the U.S. AQY could be accurate if baseline shift issues can be improved or identified and removed. The ARS will be challenging to use because of the low R2, high RH influence, frequent repeat zeros, and the largely variable seasonal bias. The RAM will be challenging to use because of the low R2, which may be due in part to the over-correction for RH influence. CNO, CNO_wf, PAR, PAR_wf, and MAX could perform accurately in many locations. The performance of all sensors could be improved with further individual sensor-specific or location-specific corrections in some cases, including improved RH correction.
Much improvement has occurred in the field of air sensors since the purchase of these sensors five years ago. It is largely unknown how the performance of these devices will compare to the latest versions from each manufacturer because of changes in quality assurance and data processing practices. Many sensors on the market today still use similar internal sensors (e.g., Plantower), but some of the issues identified in this paper may have already been corrected by better manufacturing, more sophisticated software, advancements in calibration, and the introduction of improved automated quality control.
Sampling frequency may also impact the comparability of these results. For example, most of the CNO sensors evaluated were the CNO Node with wall power that typically sampled more than 20 times per hour. However, this model has been discontinued, and now all CNO Node-S sensors use solar power. This device provides hourly averages based on fewer measurements per hour. This may lead to greater uncertainty, lower internal temperatures, different RH influences, and overall different performance.
While this study captures some of the wide variety in sensor performance results to date, the results of this study may not apply to performance across all parts of the U.S. with different conditions. In addition, other parts of the world may have different local meteorological conditions, particle properties, and PM concentrations, leading to different sensor performances. Though we captured a year or more of data at these sites, it does not represent the full range of conditions that could be experienced as some events (e.g., extreme wildfire smoke) may only happen every few years.
Additional analysis could be accomplished with this dataset. By making it publicly available, we hope others will use it to support ongoing work to understand air sensor performance better, draw more conclusions, and validate correction methods under development.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s25041265/s1, Table S1: LTPP Sensor List with additional details. Many of the temperature and RH measurements may be internal measurements instead of ambient measurements. Some of the sampling intervals could be modified by the user or by the manufacturer, but these were the intervals used during this project; Table S2: Aeroqual deployment locations and time periods by serial IDs. Sensors were first colocated in NC (deploy = Pre) before being sent for ~1 year of colocation (deploy = deploy) and then a final colocation in NC (deploy = Post); Table S3: ARISense deployment locations and time periods by serial IDs. Sensors were first colocated in NC (deploy = Pre) before being sent for ~1 year of colocation (deploy = deploy) and then a final colocation in NC (deploy = Post); Table S4: Clarity deployment locations and time periods by serial IDs. Sensors were first colocated in NC (deploy = Pre) before being sent for ~1 year of colocation (deploy = deploy) and then a final colocation in NC (deploy = Post); Equation (S1). The 2021 Wildfire correction for Clarity data; Table S5: MAX deployment locations and time periods by serial IDs. Sensors were first colocated in NC (deploy = Pre) before being sent for ~1 year of colocation (deploy = deploy) and then a final colocation in NC (deploy = Post); Table S6. PurpleAir deployment locations and time periods by serial IDs. Sensors were first colocated in NC (deploy = Pre) before being sent for ~1 year of colocation (deploy = deploy) and then a final colocation in NC (deploy = Post); Table S7. RAMP deployment locations and time periods by serial IDs. Sensors were first colocated in NC (deploy = Pre) before being sent for roughly a 1 year of colocation (deploy = deploy) and then a final colocation in NC (deploy = Post); Figure S1. Reference monitor PM10/PM2.5 ratio by location where available. The blue solid line is the median for all sites (1.92) dashed lines are the overall 1st and 3rd quartiles (1.61, 2.39). Values above 7.4 have not been plotted. The boxplot suggests more coarse PM in AZ than in other locations and more variability in particle size distribution in OK; Table S8. Reference data sources and details; Table S9. Additional Details AZ Site; Table S10. AZ Site Monitors; Figure S2. Photo of AZ Site; Figure S3. AZ Site—Deployment diagram; Figure S4. AZ Site—Deployed Sensors. Left on railing RAMP, another sensor not in this study, PAR. Right on railing AQY, another sensor not in this study, CNO, MAX; Table S11. CO Site Details; Table S12. CO Site Monitors; Figure S5. Photo of CO Site; Figure S6. CO Site—Deployed Sensors. Left AQY, Clarity. Right RAMP, Maxima, PurpleAir. (Arisense deployed later and not pictured); Table S13. DE Site Details; Table S14. MLK Site Monitors*; Figure S7. Photo of DE Site; Table S15. GA Site Details; Table S16. South DeKalb Site Monitors; Figure S8. Photo of GA Site; Figure S9. GA Site—Deployed Sensors. On railing: Maxima, PurpleAir, Clarity, AQY, RAMP (Arisense deployed later and not pictured); Table S17. NC Site Details; Table S18. NC Site Monitors; Figure S10. Photo of NC Site. Sensors were deployed on the deck; Figure S11. NC Site—Deployed Sensors. Left top three RAMP, bottom three PurpleAir, three Maxima. Right top three Clarity Node-S (with solar power), bottom 3 AQY. Left and Right images show the front and back of the same structure where sensors were mounted; Figure S12. Arisense sensors at AIRS. 
All sensors were run side-by-side, but not all sensors pictured were used in this project; Table S19. OK Site Details; Table S20. OK Site Monitors; Figure S13. OK Site; Figure S14. Oklahoma Site—Deployed Sensors. On rail: Clarity, PurpleAir, Maxima, AQY, RAMP; Table S21. WI Site Details; Table S22. WI WDNR Headquarters Site Monitors; Figure S15. Photo of WI Site; Figure S16. WI Site—Deployed Sensors. On railing: AQY, RAMP, Clarity, Maxima, and PurpleAir (Arisense deployed later and not pictured); Figure S17. Visual inspection of each sensor type at each site each month was used to identify any problems with time synchronization, outliers, or other issues. Example plot for Clarity wildfire corrected (CNO_wf) data at Research Triangle Park, NC site in September of 2019. The black line (A, B) represents monitor data; Figure S18. An example of low precision between RAM sensors in NC; Table S23. Months with <80% data completeness as measured by hours of sensor data/hours of monitor data. The sensors in GA were deployed for the longest at the request of the agency. AQY sensors were run for a few extra months since it was determined partway through the project that the gas sensors in some of them were unplugged leading to less usable data. The ARS were deployed for less time because of the multiple issues in making them operational; Table S24. Flag Definitions; Table S25. Overall hours flagged by flag type (Table 3). Excluded flags with <24 h flagged. Note that nine sensors running for a year yield 78,840 total hours or 3285 days. The total data collected for each sensor type was variable due to a variety of factors. Only flags on PM2.5 or “ALL” parameters were considered in this analysis since this paper focuses on PM2.5 sensor performance; Figure S19. Daily patterns of all sensors compared with the monitor (black); Table S26. Summary of MBE minimum, maximum, and the range of MBE by sensor make and location and R2. Excludes months with <24 h of data and pre and post-colocation; Figure S20. Average FEM concentrations by month across locations. Sensors had different data completeness leading to slightly different average FEM concentrations; Figure S21. Comparison of sensor performance in January and July of 2020. Note that ARS has been excluded because of low data completeness.

Author Contributions

Conceptualization, A.L.C.; Methodology A.L.C. and K.K.B.; Data collection R.Y., B.T., W.S. and K.S.D.; Data Curation R.Y., B.T. and W.S.; Formal Analysis K.K.B.; Writing—Original Draft Preparation K.K.B., R.Y., B.T., W.S. and K.S.D.; Writing—Review and Editing K.K.B., R.Y., B.T., W.S., K.S.D. and A.L.C.; Visualization K.K.B.; Funding Acquisition and Contract Oversight and Technical Direction, A.L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by U.S. Environmental Protection Agency internal funding.

Data Availability Statement

Data will be available after publication from DOI: 10.23719/1531918.

Acknowledgments

This work would not have been possible without state and local agency partners who provided site access, power, and staff time to support data download and troubleshooting efforts as part of the Long Term Performance Project by the EPA, including Maricopa County Air Quality Department (Ben Davis), Oklahoma Department of Environmental Quality (Kent Stafford, Ryan Biggerstaff, Daniel Ross), Colorado Department of Public Health and Environment (Gordon Pierce, Erick Mattson), Delaware Division of Air Quality (Charles Sarnoski, Keith Hoffman, Tristan Bostock), Georgia Environmental Protection Division (Ken Buckley), and Wisconsin Department of Natural Resources (Benjamin Wolf, Gabe Ziskin). We also thank the EPA staff who provided reference data for the NC site, including Joann Rice, Colin Barrette, and Tim Hanley. The authors wish to acknowledge support for project coordination, data assembly and analysis, troubleshooting, and field support provided by Jacobs under contract number EP-C-15-008 (now Amentum). In addition to the co-authors, support was provided by Cortina Johnson, Elaine Monbureau, Kierra Johnson, Sam Oyeniran, William Millians, Fiker Desalegan, and Diya Yang. Thank you to Rachelle Duvall for serving as alternative task order contract officer representative, Sam Frederick for help with data processing, Ian VonWald for support of the AZ project, and Carry Croghan for help with dataset quality assurance. Thank you to Christine Alvarez and Libby Nessley for quality assurance support. Clarity sensors were loaned for testing free of charge by Clarity Movement Co. by Materials Transfer Agreement with U.S. EPA (MTA #1065-19). Aeroqual sensors were provided by a Cooperative Research and Development Agreement (No. 934-16) with Aeroqual Limited. Thank you to Levi Stanton (Clarity) and Eben Cross (QuantAQ) for providing the final corrected datasets used in this paper after the project's conclusion.

Conflicts of Interest

Authors Robert Yaga, Brittany Thomas, William Schoppman, and Kenneth S. Docherty were employed by the company Amentum. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations and acronyms are used in this manuscript:
AIRS: Air Innovation Research Site
AQY: Aeroqual AQY sensor
ARS: Arisense sensor
AZ: Arizona
CNO: Clarity Node and Clarity Node-S sensor
CNO_wf: Clarity Node sensor, wildfire corrected
CO: Colorado
DE: Delaware
EPA: Environmental Protection Agency
GA: Georgia
NC: North Carolina
MAX: Maxima sensor
MBE: Mean bias error
MDPI: Multidisciplinary Digital Publishing Institute
OK: Oklahoma
O3: Ozone
PAR: PurpleAir sensor
PAR_wf: PurpleAir sensor, wildfire corrected
PM2.5: Fine particulate matter
RAM: RAMP sensor
RH: Relative humidity
WI: Wisconsin

References

1. Barkjohn, K.K.; Clements, A.; Mocka, C.; Barrette, C.; Bittner, A.; Champion, W.; Gantt, B.; Good, E.; Holder, A.; Hillis, B.; et al. Air Quality Sensor Experts Convene: Current Quality Assurance Considerations for Credible Data. ACS EST Air 2024, 1, 1203–1214.
2. Karagulian, F.; Barbiere, M.; Kotsev, A.; Spinelle, L.; Gerboles, M.; Lagler, F.; Redon, N.; Crunaire, S.; Borowiak, A. Review of the Performance of Low-Cost Sensors for Air Quality Monitoring. Atmosphere 2019, 10, 506.
3. Clements, A.L.; Reece, S.; Conner, T.; Williams, R. Observed data quality concerns involving low-cost air sensors. Atmos. Environ. X 2019, 3, 100034.
4. Kang, Y.; Aye, L.; Ngo, T.D.; Zhou, J. Performance evaluation of low-cost air quality sensors: A review. Sci. Total Environ. 2022, 818, 151769.
5. Giordano, M.R.; Malings, C.; Pandis, S.N.; Presto, A.A.; McNeill, V.F.; Westervelt, D.M.; Beekmann, M.; Subramanian, R. From low-cost sensors to high-quality data: A summary of challenges and best practices for effectively calibrating low-cost particulate matter mass sensors. J. Aerosol Sci. 2021, 158, 105833.
6. Duvall, R.M.; Hagler, G.S.W.; Clements, A.L.; Benedict, K.; Barkjohn, K.; Kilaru, V.; Hanley, T.; Watkins, N.; Kaufman, A.; Kamal, A.; et al. Deliberating Performance Targets: Follow-on workshop discussing PM10, NO2, CO, and SO2 air sensor targets. Atmos. Environ. 2020, 246, 118099.
7. Williams, R.; Duvall, R.; Kilaru, V.; Hagler, G.; Hassinger, L.; Benedict, K.; Rice, J.; Kaufman, A.; Judge, R.; Pierce, G.; et al. Deliberating performance targets workshop: Potential paths for emerging PM2.5 and O3 air sensor progress. Atmos. Environ. X 2019, 2, 100031.
8. Duvall, R.; Clements, A.; Barkjohn, K.; Kumar, M.; Greene, D.; Dye, T.; Papapostolou, V.; Mui, W.; Kuang, M. NO2, CO, and SO2 Supplement to the 2021 Report on Performance Testing Protocols, Metrics, and Target Values for Ozone Air Sensors; U.S. Environmental Protection Agency: Washington, DC, USA, 2024.
9. Duvall, R.; Clements, A.; Barkjohn, K.; Kumar, M.; Greene, D.; Dye, T.; Papapostolou, V.; Mui, W.; Kuang, M. PM10 Supplement to the 2021 Report on Performance Testing Protocols, Metrics, and Target Values for Fine Particulate Matter Air Sensors; U.S. Environmental Protection Agency: Washington, DC, USA, 2023.
10. Duvall, R.; Clements, A.; Hagler, G.; Kamal, A.; Kilaru, V.; Goodman, L.; Frederick, S.; Johnson Barkjohn, K.; VonWald, I.; Greene, D.; et al. Performance Testing Protocols, Metrics, and Target Values for Fine Particulate Matter Air Sensors: Use in Ambient, Outdoor, Fixed Site, Non-Regulatory Supplemental and Informational Monitoring Applications; EPA/600/R-20/280; U.S. Environmental Protection Agency, Office of Research and Development: Washington, DC, USA, 2021.
11. Duvall, R.M.; Clements, A.L.; Hagler, G.; Kamal, A.; Kilaru, V.; Goodman, L.; Frederick, S.; Barkjohn, K.K.J.; VonWald, I.; Greene, D.; et al. Performance Testing Protocols, Metrics, and Target Values for Ozone Air Sensors: Use in Ambient, Outdoor, Fixed Site, Non-Regulatory Supplemental and Informational Monitoring Applications; EPA/600/R-20/279; U.S. Environmental Protection Agency: Washington, DC, USA, 2021.
12. Feenstra, B.; Papapostolou, V.; Hasheminassab, S.; Zhang, H.; Boghossian, B.D.; Cocker, D.; Polidori, A. Performance evaluation of twelve low-cost PM2.5 sensors at an ambient air monitoring site. Atmos. Environ. 2019, 216, 116946.
13. Jiao, W.; Hagler, G.; Williams, R.; Sharpe, R.; Brown, R.; Garver, D.; Judge, R.; Caudill, M.; Rickard, J.; Davis, M.; et al. Community Air Sensor Network (CAIRSENSE) project: Evaluation of low-cost sensor performance in a suburban environment in the southeastern United States. Atmos. Meas. Tech. 2016, 9, 5281–5292.
14. Nilson, B.; Jackson, P.L.; Schiller, C.L.; Parsons, M.T. Development and Evaluation of Correction Models for a Low-Cost Fine Particulate Matter Monitor. Atmos. Meas. Tech. Discuss. 2022, 2022, 1–16.
15. Barkjohn, K.K.; Holder, A.L.; Frederick, S.G.; Clements, A.L. Correction and Accuracy of PurpleAir PM2.5 Measurements for Extreme Wildfire Smoke. Sensors 2022, 22, 9669.
16. Zheng, T.; Bergin, M.H.; Johnson, K.K.; Tripathi, S.N.; Shirodkar, S.; Landis, M.S.; Sutaria, R.; Carlson, D.E. Field evaluation of low-cost particulate matter sensors in high-and low-concentration environments. Atmos. Meas. Tech. 2018, 11, 4823–4846.
17. Johnson, K.K.; Bergin, M.H.; Russell, A.G.; Hagler, G.S. Field test of several low-cost particulate matter sensors in high and low concentration urban environments. Aerosol Air Qual. Res. 2018, 18, 565–578.
18. Wallace, L.; Zhao, T.; Klepeis, N.E. Calibration of PurpleAir PA-I and PA-II Monitors Using Daily Mean PM2.5 Concentrations Measured in California, Washington, and Oregon from 2017 to 2021. Sensors 2022, 22, 4741.
19. deSouza, P.N.; Barkjohn, K.; Clements, A.; Lee, J.; Kahn, R.; Crawford, B.; Kinney, P. An analysis of degradation in low-cost particulate matter sensors. Environ. Sci. Atmos. 2023, 3, 521–536.
20. Karl, T.R.; Koss, W.J. Regional and National Monthly, Seasonal, and Annual Temperature Weighted by Area, 1895–1983. Hist. Climatol. Ser. 1984, 4-3, 38.
21. NOAA. U.S. Climate Regions. Available online: https://www.ncdc.noaa.gov/monitoring-references/maps/us-climate-regions.php (accessed on 5 February 2025).
22. Barkjohn, K.K.; Gantt, B.; Clements, A.L. Development and application of a United States-wide correction for PM2.5 data collected with the PurpleAir sensor. Atmos. Meas. Tech. 2021, 14, 4617–4637.
23. Jaffe, D.A.; Miller, C.; Thompson, K.; Finley, B.; Nelson, M.; Ouimette, J.; Andrews, E. An evaluation of the U.S. EPA’s correction equation for PurpleAir sensor data in smoke, dust, and wintertime urban pollution events. Atmos. Meas. Tech. 2023, 16, 1311–1322.
24. Ouimette, J.; Arnott, W.P.; Laven, P.; Whitwell, R.; Radhakrishnan, N.; Dhaniyala, S.; Sandink, M.; Tryner, J.; Volckens, J. Fundamentals of low-cost aerosol sensor design and operation. Aerosol Sci. Technol. 2023, 58, 1–15.
25. Ouimette, J.R.; Malm, W.C.; Schichtel, B.A.; Sheridan, P.J.; Andrews, E.; Ogren, J.A.; Arnott, W.P. Evaluating the PurpleAir monitor as an aerosol light scattering instrument. Atmos. Meas. Tech. 2022, 15, 655–676.
26. Kuula, J.; Kuuluvainen, H.; Rönkkö, T.; Niemi, J.V.; Saukko, E.; Portin, H.; Aurela, M.; Saarikoski, S.; Rostedt, A.; Hillamo, R.; et al. Applicability of Optical and Diffusion Charging-Based Particulate Matter Sensors to Urban Air Quality Measurements. Aerosol Air Qual. Res. 2019, 19, 1024–1039.
27. Kaur, K.; Kelly, K.E. Performance evaluation of the Alphasense OPC-N3 and Plantower PMS5003 sensor in measuring dust events in the Salt Lake Valley, Utah. Atmos. Meas. Tech. 2023, 16, 2455–2470.
28. Pope, R.; Stanley, K.M.; Domsky, I.; Yip, F.; Nohre, L.; Mirabelli, M.C. The relationship of high PM2.5 days and subsequent asthma-related hospital encounters during the fireplace season in Phoenix, AZ, 2008–2012. Air Qual. Atmos. Health 2017, 10, 161–169.
29. Li, Y.; Yuan, Z.; Chen, L.W.A.; Pillarisetti, A.; Yadav, V.; Wu, M.; Cui, H.; Zhao, C. From air quality sensors to sensor networks: Things we need to learn. Sens. Actuators B Chem. 2022, 351, 130958.
30. Aberkane, T. Evaluation of PM instruments in New Zealand. Air Qual. Clim. Change 2021, 55, 47–52.
31. Toner, S.M. Evaluation of an optical PM measurement method compared to conventional PM measurement methods. Air Qual. Clim. Change 2021, 55, 63–70.
32. Feinberg, S.; Williams, R.; Hagler, G.S.W.; Rickard, J.; Brown, R.; Garver, D.; Harshfield, G.; Stauffer, P.; Mattson, E.; Judge, R.; et al. Long-term evaluation of air sensor technology under ambient conditions in Denver, Colorado. Atmos. Meas. Tech. 2018, 11, 4605–4615.
33. Hagan, D.H.; Kroll, J.H. Assessing the accuracy of low-cost optical particle sensors using a physics-based approach. Atmos. Meas. Tech. 2020, 13, 6343–6355.
34. Ryder, O.S.; DeWinter, J.L.; Brown, S.G.; Hoffman, K.; Frey, B.; Mirzakhalili, A. Assessment of particulate toxic metals at an Environmental Justice community. Atmos. Environ. X 2020, 6, 100070.
35. Brown, S.G.; Penfold, B.; Mukherjee, A.; Landsberg, K.; Eisinger, D.S. Conditions Leading to Elevated PM2.5 at Near-Road Monitoring Sites: Case Studies in Denver and Indianapolis. Int. J. Environ. Res. Public Health 2019, 16, 1634.
36. Valerino, M.J.; Johnson, J.J.; Izumi, J.; Orozco, D.; Hoff, R.M.; Delgado, R.; Hennigan, C.J. Sources and composition of PM2.5 in the Colorado Front Range during the DISCOVER-AQ study. J. Geophys. Res. Atmos. 2017, 122, 566–582.
37. Upadhyay, N.; Clements, A.; Fraser, M.; Herckes, P. Chemical Speciation of PM2.5 and PM10 in South Phoenix, AZ. J. Air Waste Manag. Assoc. 2011, 61, 302–310.
38. Heo, J.; McGinnis, J.E.; de Foy, B.; Schauer, J.J. Identification of potential source areas for elevated PM2.5, nitrate and sulfate concentrations. Atmos. Environ. 2013, 71, 187–197.
39. Dreyfus, M.A.; Adou, K.; Zucker, S.M.; Johnston, M.V. Organic aerosol source apportionment from highly time-resolved molecular composition measurements. Atmos. Environ. 2009, 43, 2901–2910.
40. Stanier, C.; Singh, A.; Adamski, W.; Baek, J.; Caughey, M.; Carmichael, G.; Edgerton, E.; Kenski, D.; Koerber, M.; Oleson, J.; et al. Overview of the LADCO winter nitrate study: Hourly ammonia, nitric acid and PM2.5 composition at an urban and rural site pair during PM2.5 episodes in the US Great Lakes region. Atmos. Chem. Phys. 2012, 12, 11037–11056.
41. Clements, N.; Hannigan, M.P.; Miller, S.L.; Peel, J.L.; Milford, J.B. Comparisons of urban and rural PM10−2.5 and PM2.5 mass concentrations and semi-volatile fractions in northeastern Colorado. Atmos. Chem. Phys. 2016, 16, 7469–7484.
42. Weber, R. Short-Term Temporal Variation in PM2.5 Mass and Chemical Composition during the Atlanta Supersite Experiment, 1999. J. Air Waste Manag. Assoc. 2003, 53, 84–91.
43. Dutton, S.J.; Rajagopalan, B.; Vedal, S.; Hannigan, M.P. Temporal patterns in daily measurements of inorganic and organic speciated PM2.5 in Denver. Atmos. Environ. 2010, 44, 987–998.
44. Kerr, S.C.; Schauer, J.J.; Rodger, B. Regional haze in Wisconsin: Sources and the spatial distribution. J. Environ. Eng. Sci. 2004, 3, 213–222.
45. Petters, M.D.; Kreidenweis, S.M. A single parameter representation of hygroscopic growth and cloud condensation nucleus activity. Atmos. Chem. Phys. 2007, 7, 1961–1971.
46. Chan, E.A.W.; Gantt, B.; McDow, S. The reduction of summer sulfate and switch from summertime to wintertime PM2.5 concentration maxima in the United States. Atmos. Environ. 2018, 175, 25–32.
47. Zhai, X.; Mulholland, J.A.; Russell, A.G.; Holmes, H.A. Spatial and temporal source apportionment of PM2.5 in Georgia, 2002 to 2013. Atmos. Environ. 2017, 161, 112–121.
48. Bravo, M.A.; Warren, J.L.; Leong, M.C.; Deziel, N.C.; Kimbro, R.T.; Bell, M.L.; Miranda, M.L. Where Is Air Quality Improving, and Who Benefits? A Study of PM2.5 and Ozone over 15 Years. Am. J. Epidemiol. 2022, 191, 1258–1269.
49. He, J.; Harkins, C.; O’Dell, K.; Li, M.; Francoeur, C.; Aikin, K.C.; Anenberg, S.; Baker, B.; Brown, S.S.; Coggon, M.M.; et al. COVID-19 perturbation on US air quality and human health impact assessment. PNAS Nexus 2024, 3, pgad483.
50. Prakash, J.; Choudhary, S.; Raliya, R.; Chadha, T.S.; Fang, J.; George, M.P.; Biswas, P. Deployment of networked low-cost sensors and comparison to real-time stationary monitors in New Delhi. J. Air Waste Manag. Assoc. 2021, 71, 1347–1360.
51. Prakash, J.; Choudhary, S.; Raliya, R.; Chadha, T.; Fang, J.; Biswas, P. PM sensors as an indicator of overall air quality: Pre-COVID and COVID periods. Atmos. Pollut. Res. 2022, 13, 101594.
52. Dharaiya, V.R.; Malyan, V.; Kumar, V.; Sahu, M.; Venkatraman, C.; Biswas, P.; Yadav, K.; Haswani, D.; Raman, R.S.; Bhat, R.; et al. Evaluating the Performance of Low-cost PM Sensors over Multiple COALESCE Network Sites. Aerosol Air Qual. Res. 2023, 23, 220390.
53. Edwards, L.; Rutter, G.; Iverson, L.; Wilson, L.; Chadha, T.S.; Wilkinson, P.; Milojevic, A. Personal exposure monitoring of PM2.5 among US diplomats in Kathmandu during the COVID-19 lockdown, March to June 2020. Sci. Total Environ. 2021, 772, 144836.
54. Prajapati, B.; Dharaiya, V.; Sahu, M.; Venkatraman, C.; Biswas, P.; Yadav, K.; Pullokaran, D.; Raman, R.S.; Bhat, R.; Najar, T.A.; et al. Development of a physics-based method for calibration of low-cost particulate matter sensors and comparison with machine learning models. J. Aerosol Sci. 2024, 175, 106284.
55. Malyan, V.; Kumar, V.; Sahu, M.; Prakash, J.; Choudhary, S.; Raliya, R.; Chadha, T.S.; Fang, J.; Biswas, P. Calibrating low-cost sensors using MERRA-2 reconstructed PM2.5 mass concentration as a proxy. Atmos. Pollut. Res. 2024, 15, 102027.
56. Li, J.Y.; Mattewal, S.K.; Patel, S.; Biswas, P. Evaluation of Nine Low-cost-sensor-based Particulate Matter Monitors. Aerosol Air Qual. Res. 2020, 20, 254–270.
57. Vidwans, A.; Choudhary, S.; Jolliff, B.; Gillis-Davis, J.; Biswas, P. Size and charge distribution characteristics of fine and ultrafine particles in simulated lunar dust: Relevance to lunar missions and exploration. Planet. Space Sci. 2022, 210, 105392.
58. Do, K.; Yu, H.; Velasquez, J.; Grell-Brisk, M.; Smith, H.; Ivey, C.E. A data-driven approach for characterizing community scale air pollution exposure disparities in inland Southern California. J. Aerosol Sci. 2021, 152, 105704.
59. Coker, E.S.; Amegah, A.K.; Mwebaze, E.; Ssematimba, J.; Bainomugisha, E. A land use regression model using machine learning and locally developed low cost particulate matter sensors in Uganda. Environ. Res. 2021, 199, 111352.
60. Njeru, M.N.; Mwangi, E.; Gatari, M.J.; Kaniu, M.I.; Kanyeria, J.; Raheja, G.; Westervelt, D.M. First Results From a Calibrated Network of Low-Cost PM2.5 Monitors in Mombasa, Kenya Show Exceedance of Healthy Guidelines. GeoHealth 2024, 8, e2024GH001049.
61. Raheja, G.; Nimo, J.; Appoh, E.K.E.; Essien, B.; Sunu, M.; Nyante, J.; Amegah, M.; Quansah, R.; Arku, R.E.; Penn, S.L.; et al. Low-Cost Sensor Performance Intercomparison, Correction Factor Development, and 2+ Years of Ambient PM2.5 Monitoring in Accra, Ghana. Environ. Sci. Technol. 2023, 57, 10708–10720.
62. Holder, A.L.; Mebust, A.K.; Maghran, L.A.; McGown, M.R.; Stewart, K.E.; Vallano, D.M.; Elleman, R.A.; Baker, K.R. Field Evaluation of Low-Cost Particulate Matter Sensors for Measuring Wildfire Smoke. Sensors 2020, 20, 4796.
63. Weissert, L.F.; Henshaw, G.S.; Clements, A.L.; Duvall, R.M.; Croghan, C. Seasonal effects in the application of the MOMA remote calibration tool to outdoor PM2.5 air sensors. EGUsphere 2024, 2024, 1–18.
64. Bittner, A.S.; Cross, E.S.; Hagan, D.H.; Malings, C.; Lipsky, E.; Grieshop, A.P. Performance characterization of low-cost air quality sensors for off-grid deployment in rural Malawi. Atmos. Meas. Tech. 2022, 15, 3353–3376.
65. Weissert, L.F.; Henshaw, G.S.; Williams, D.E.; Feenstra, B.; Lam, R.; Collier-Oxandale, A.; Papapostolou, V.; Polidori, A. Performance evaluation of MOMA (MOment MAtching)—A remote network calibration technique for PM2.5 and PM10 sensors. Atmos. Meas. Tech. 2023, 16, 4709–4722.
66. Khreis, H.; Johnson, J.; Jack, K.; Dadashova, B.; Park, E.S. Evaluating the performance of low-cost air quality monitors in Dallas, Texas. Int. J. Environ. Res. Public Health 2022, 19, 1647.
67. Bahino, J.; Giordano, M.; Beekmann, M.; Yoboué, V.; Ochou, A.; Galy-Lacaux, C.; Liousse, C.; Hughes, A.; Nimo, J.; Lemmouchi, F.; et al. Temporal variability and regional influences of PM2.5 in the West African cities of Abidjan (Côte d’Ivoire) and Accra (Ghana). Environ. Sci. Atmos. 2024, 4, 468–487.
68. Malings, C.; Tanzer, R.; Hauryliuk, A.; Saha, P.K.; Robinson, A.L.; Presto, A.A.; Subramanian, R. Fine particle mass monitoring with low-cost sensors: Corrections and long-term performance evaluation. Aerosol Sci. Technol. 2019, 54, 160–174.
69. Wallace, L. Cracking the code—Matching a proprietary algorithm for a low-cost sensor measuring PM1 and PM2.5. Sci. Total Environ. 2023, 893, 164874.
70. Madhwal, S.; Tripathi, S.N.; Bergin, M.H.; Bhave, P.; de Foy, B.; Reddy, T.V.R.; Chaudhry, S.K.; Jain, V.; Garg, N.; Lalwani, P. Evaluation of PM2.5 spatio-temporal variability and hotspot formation using low-cost sensors across urban-rural landscape in Lucknow, India. Atmos. Environ. 2024, 319, 120302.
71. Harr, L.; Sinsel, T.; Simon, H.; Esper, J. Seasonal Changes in Urban PM2.5 Hotspots and Sources from Low-Cost Sensors. Atmosphere 2022, 13, 694.
72. Francis, D.; Nelli, N.; Fonseca, R.; Weston, M.; Flamant, C.; Cherif, C. The dust load and radiative impact associated with the June 2020 historical Saharan dust storm. Atmos. Environ. 2022, 268, 118808.
73. Long, R.W.; Urbanski, S.P.; Lincoln, E.; Colon, M.; Kaushik, S.; Krug, J.D.; Vanderpool, R.W.; Landis, M.S. Summary of PM2.5 measurement artifacts associated with the Teledyne T640 PM Mass Monitor under controlled chamber experimental conditions using polydisperse ammonium sulfate aerosols and biomass smoke. J. Air Waste Manag. Assoc. 2023, 73, 295–312.
74. Hagler, G.; Hanley, T.; Hassett-Sipple, B.; Vanderpool, R.; Smith, M.; Wilbur, J.; Wilbur, T.; Oliver, T.; Shand, D.; Vidacek, V.; et al. Evaluation of two collocated federal equivalent method PM2.5 instruments over a wide range of concentrations in Sarajevo, Bosnia and Herzegovina. Atmos. Pollut. Res. 2022, 13, 101374.
75. O’Brien, E.; Torr, S. A comparison between a TAPI T640X, TEOM 1405-DF and reference samplers for the measurement of PM10 and PM2.5 at an urban location in Brisbane. Air Qual. Clim. Change 2021, 55, 71–76.
76. Sandhu, T.; Robinson, M.C.; Rawlins, E.; Ardon-Dryer, K. Identification of dust events in the greater Phoenix area. Atmos. Pollut. Res. 2024, 15, 102275.
Figure 1. (a) Map of Selected Regulatory Monitoring Sites and (b) DE Site—Deployed Sensors. On railing: AQY, RAM, CNO, MAX, PAR (ARS deployed later and not pictured).
Figure 2. Boxplots showing the ranges of PM2.5 (µg/m3), T (°C), and RH (%) experienced at each site based on the reference measurements. These values cover the colocation period, typically July or August 2019 until October 2020 or January 2021, depending on the site and sensor type (additional details in the Supplementary Information).
Figure 3. Hourly averaged sensor versus reference PM2.5 for all sensors (colored by unit ID) before sensor problems are removed, with R2 shown in the center of each plot. Points above 200 µg/m3 are excluded from the plot to improve visualization (but were left in for all analyses).
Figure 4. Example of repeat zeros, shown by the purple MAX sensor (different colors represent different sensors) compared with the monitor (black line on the time series).
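The repeat-zero pattern in Figure 4 is straightforward to screen for programmatically. The sketch below is a minimal illustration only, not the QC code used in this study; the run-length threshold of three consecutive hours and the example values are assumptions chosen for demonstration.

```python
import pandas as pd

def flag_repeat_zeros(pm: pd.Series, min_run: int = 3) -> pd.Series:
    """Flag values that sit inside a run of at least `min_run` consecutive zeros."""
    is_zero = pm.eq(0)
    # A new run starts whenever the zero/non-zero state changes.
    run_id = is_zero.ne(is_zero.shift()).cumsum()
    run_len = is_zero.groupby(run_id).transform("size")
    return is_zero & (run_len >= min_run)

# Hypothetical hourly PM2.5 series from a single sensor:
pm25 = pd.Series([12.1, 0.0, 0.0, 0.0, 0.0, 9.8, 0.0, 11.5])
print(flag_repeat_zeros(pm25))  # True only for the four-hour zero run
```

Flagging runs rather than single values is the safer screen, since an isolated zero can be a legitimate reading in very clean air.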
Figure 5. Example data showing sensor outliers: (A,B) show data outliers from RAM sensors in WI, while (C,D) show a real concentration event measured by the MAX in DE. The black line on the time series is the monitor and the green line is the sensor.
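Because false high outliers like those in panels (A,B) can resemble real plumes like the DE event in (C,D), any automated screen applied without a colocated monitor should be conservative. The sketch below shows one generic approach, a Hampel-style rolling-median filter; it is not the method used in this study, and the window length, threshold, and 1 µg/m3 scale floor are illustrative assumptions. Applied aggressively, a filter like this can remove genuine short events.

```python
import pandas as pd

def flag_spikes(pm: pd.Series, window: int = 11, n_sigmas: float = 5.0) -> pd.Series:
    """Flag points that deviate strongly from a centered rolling median."""
    med = pm.rolling(window, center=True, min_periods=1).median()
    # Median absolute deviation, scaled to approximate a standard deviation.
    mad = (pm - med).abs().rolling(window, center=True, min_periods=1).median()
    scale = (1.4826 * mad).clip(lower=1.0)  # floor avoids flagging tiny fluctuations
    return (pm - med).abs() > n_sigmas * scale
```

Cross-checking flagged hours against a second colocated sensor is generally a more reliable way to separate malfunctions from real events than any single-sensor filter.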
Figure 6. Examples of a baseline shift where one sensor suddenly sees concentrations 20–40 µg/m3 higher than the monitor (black line on the time series) (A,B) and an example where the monitor sees a real baseline shift in PM2.5 concentrations of more than 40 µg/m3 (C,D) because of a Saharan dust event. Colors indicate different sensors.
Figure 7. Two examples showing variable but distinct relationships between the sensors and the monitor (black line on the time series). Two sensors are included in the NC RAM example, indicated by different colors.
Figure 8. Scatter plot showing the influence of RH on the ratio of PM2.5 sensor/PM2.5 monitor. Hours where the monitor PM2.5 is less than 5 µg/m3 have been excluded. Colors indicate unit ID and black dots show the average ratio in each of the 10 bins (e.g., 0–10% and 10–20%).
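For readers who want to reproduce an RH analysis like Figure 8 with their own colocation data, the sketch below bins the hourly sensor/monitor ratio into ten 10% RH bins after excluding hours with monitor PM2.5 below 5 µg/m3, mirroring the screening described in the caption. The column names are hypothetical, and this is an illustration rather than the analysis code used for this paper.

```python
import pandas as pd

def rh_binned_ratio(df: pd.DataFrame) -> pd.Series:
    """Mean sensor/monitor PM2.5 ratio within 10% RH bins.

    Expects hourly columns 'pm25_sensor', 'pm25_monitor', and 'rh' (%).
    """
    d = df[df["pm25_monitor"] >= 5].copy()  # drop unstable low-concentration hours
    d["ratio"] = d["pm25_sensor"] / d["pm25_monitor"]
    bins = list(range(0, 101, 10))          # 0-10%, 10-20%, ..., 90-100%
    d["rh_bin"] = pd.cut(d["rh"], bins=bins)
    return d.groupby("rh_bin", observed=True)["ratio"].mean()
```

A ratio that climbs with RH is the signature of hygroscopic particle growth inflating the optical response at humid hours.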
Figure 9. Boxplots showing the influence of RH with more variation by sensor make than by location. Each point represents the relative difference in the ratio of sensor/monitor from high to low RH, as shown in Figure 8. The area between the blue lines at ±35% indicates weak RH influence.
Figure 10. Differences in the hourly RH measured by the sensor compared with the independent reference RH at each site. Colors indicate different sensors, and the black line is the 1:1 line.
Figure 11. Bias by sensor make and location. The area between the blue lines (±1.7 µg/m3) indicates low bias (20% of the average concentration).
Figure 12. PM2.5 concentrations by location and hour of day. The monitor is in black.
Figure 13. Mean Bias Error (MBE) by month across all sites and sensor types. Multiple points per month in NC due to multiple sensors running simultaneously. Note the variable y-axis. The black horizontal line on each plot indicates MBE = 0 µg/m3.
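For reference, the MBE plotted in Figure 13 (and summarized as a monthly range in Table 3) is assumed here to follow the conventional definition of mean bias error over n paired hourly sensor and monitor values:

\[
\mathrm{MBE} = \frac{1}{n}\sum_{i=1}^{n}\left(C_{\mathrm{sensor},i} - C_{\mathrm{monitor},i}\right)
\]

where C is the PM2.5 concentration in µg/m3, so positive MBE indicates sensor overestimation relative to the monitor.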
Table 1. List of sensors evaluated; additional details are provided in the Supplementary Information.

ID | Make | Model | Internal PM Sensor | Communication | Power Source | Number Evaluated (NC / Other Sites) | Measured Pollutants | Sampling Interval
AQY | Aeroqual (Auckland, New Zealand) | AQY * | Nova SDS011 | Cellular; Wi-Fi (NC only) | Wall | 3 / 6 | PM2.5, NO2, O3, T, RH | 1 min
CNO | Clarity Movement Co. (Berkeley, CA, USA) | Node * | Plantower PMS6003 | Cellular | Wall | - / 6 | PM2.5, NO2 *, T, RH | ~5 min (Node); ~15 min (Node-S, NC only)
CNO | Clarity Movement Co. (Berkeley, CA, USA) | Node-S | Plantower PMS6003 | Wi-Fi | Solar | 3 / - | PM2.5, NO2 *, T, RH | 30 s
MAX | Applied Particle Technology (Boise, ID, USA) | Maxima | Plantower PMSA003 | Wi-Fi | Wall | 3 / 6 | PM1, PM2.5, PM10, T, RH, P | 30 s
PAR | PurpleAir (Draper, UT, USA) | PA-II-SD * | Plantower PMS5003 (×2) | Wi-Fi | Wall | 3 / 6 | PM1, PM2.5, PM10, T, RH, P | 2 min
RAM | Sensit Technologies (Valparaiso, IN, USA) | RAMP | Plantower PMS5003 | Direct (no Wi-Fi/Cellular) | Wall | 3 / 6 | PM2.5, CO, NO, NO2, SO2, O3 | 15 s
ARS | Aerodyne ‡ (Billerica, MA, USA) | Arisense * | Particles Plus OPC | Cellular | Wall | 7 / 6 | PM1, PM2.5, PM10, CO, CO2, NO, NO2, O3, T, RH, P, WS, WD | 2 min

* These make/models are no longer available from the manufacturer. ‡ Devices were purchased from Aerodyne (Billerica, MA, USA), but the company subsequently spun off into QuantAQ (Somerville, MA, USA).
Table 2. Selected monitoring sites and comparison monitors. The reported monitor average and maximum concentrations are for the duration of colocation.

Location (City, State) | AQS ID | Monitor * | Spatial Scale | Site Type | Average Monitor PM2.5 (µg/m3) | Maximum Hourly Monitor PM2.5 (µg/m3)
Phoenix, AZ, USA | 04-013-0019 | Thermo TEOM 1405-DF | Neighborhood | Population Exposure; Highest Concentration | 8.9 | 550
Denver, CO, USA | 08-031-0026 | Teledyne T640 | Neighborhood; Urban | National Core Network (NCore); State or Local Air Monitoring Stations (SLAMS) | 8.8 | 207
Wilmington, DE, USA | 10-003-2004 | Teledyne T640 | Neighborhood | Population Exposure; Maximum Concentration; NCore; Photochemical Assessment Monitoring Stations (PAMS) | 8.3 | 44
Decatur, GA, USA | 13-089-0002 | Teledyne T640x | Neighborhood | Population Exposure; Highest Concentration | 9.1 | 96
Research Triangle Park, NC, USA | 37-063-0099 | Teledyne T640 | Neighborhood | NCore | 8.2 | 82
Oklahoma City, OK, USA | 40-109-1037 | Teledyne T640 (until 31 December 2019); Teledyne T640x (starting 1 January 2020) | Urban | Population Exposure; SLAMS | 10.0 | 110
Milwaukee, WI, USA | 55-079-0026 | Teledyne T640x | Urban; Neighborhood | Population Exposure; SLAMS | 7.9 | 335

* T640 and T640x data do not reflect the April 2023 firmware update that implemented the alignment factor.
Table 3. Summary of the range of monthly MBE (max MBE − min MBE) by sensor make and location. Shaded cells in the original table are locations where the most variation by sensor type is seen, and shaded R2 < 0.7 does not meet the performance target. Statistics are calculated after removing the common data issues identified in Section 3.3, so results differ from Figure 3.

Make ID | Location | Range MBE (µg/m3) | R2 | Range MBE, Uncorrected PAR, CNO (µg/m3) | R2, Uncorrected PAR, CNO
RAM | AZ | 15 | 0.88 | |
AQY | AZ | 13 | 0.64 | |
MAX | AZ | 11 | 0.92 | |
ARS | AZ | 10 | 0.45 | |
PAR_wf | AZ | 7 | 0.87 | 7 | 0.87
CNO_wf | AZ | 7 | 0.88 | 8 | 0.91
MAX | CO | 7 | 0.92 | |
RAM | CO | 6 | 0.35 | |
CNO_wf | CO | 6 | 0.93 | 8 | 0.79
PAR_wf | CO | 4 | 0.94 | 6 | 0.93
ARS | CO | 3 | 0.15 | |
AQY | CO | 3 | 0.82 | |
MAX | DE | 9 | 0.86 | |
ARS | DE | 7 | 0.45 | |
RAM | DE | 7 | 0.58 | |
AQY | DE | 6 | 0.75 | |
PAR_wf | DE | 6 | 0.84 | 13 | 0.86
CNO_wf | DE | 3 | 0.81 | 9 | 0.84
ARS | GA | 13 | 0.12 | |
AQY | GA | 7 | 0.42 | |
CNO_wf | GA | 5 | 0.57 | 5 | 0.77
MAX | GA | 5 | 0.79 | |
RAM | GA | 4 | 0.64 | |
PAR_wf | GA | 3 | 0.75 | 6 | 0.78
ARS | NC | 20 | 0.12 | |
MAX | NC | 14 | 0.67 | |
CNO_wf | NC | 7 | 0.61 | 3 | 0.61
PAR_wf | NC | 7 | 0.76 | 18 | 0.77
RAM | NC | 6 | 0.32 | |
AQY | NC | 5 | 0.41 | |
AQY | OK | 14 | 0.21 | |
ARS | OK | 13 | 0.42 | |
MAX | OK | 8 | 0.55 | |
PAR_wf | OK | 8 | 0.68 | 9 | 0.67
RAM | OK | 5 | 0.38 | |
CNO_wf | OK | 4 | 0.71 | 7 | 0.68
ARS | WI | 15 | 0.16 | |
MAX | WI | 12 | 0.9 | |
RAM | WI | 7 | 0.45 | |
AQY | WI | 4 | 0.75 | |
PAR_wf | WI | 3 | 0.83 | 10 | 0.81
CNO_wf | WI | 2 | 0.88 | 15 | 0.83