Article

Testing a New “Decrypted” Algorithm for Plantower Sensors Measuring PM2.5: Comparison with an Alternative Algorithm

U.S. Environmental Protection Agency, Washington, DC 20460, USA
Retired.
Algorithms 2023, 16(8), 392; https://doi.org/10.3390/a16080392
Submission received: 23 July 2023 / Revised: 11 August 2023 / Accepted: 14 August 2023 / Published: 17 August 2023
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)

Abstract

Recently, a hypothesis providing a detailed equation for the Plantower CF_1 algorithm for PM2.5 has been published. The hypothesis was originally validated using eight independent Plantower sensors in four PurpleAir PA-II monitors providing PM2.5 estimates from a single site in 2020. If true, the hypothesis makes important predictions regarding PM2.5 measurements using CF_1. Therefore, we test the hypothesis using 18 Plantower sensors from four datasets from two sites in later years (2021–2023). The four general models from these datasets agreed to within 10% with the original model. A competing algorithm known as “pm2.5 alt” has been published and is freely available on the PurpleAir API site. The accuracy, precision, and limit of detection for the two algorithms are compared. The CF_1 algorithm overestimates PM2.5 by about 60–70% compared to two calibrated PurpleAir monitors using the pm2.5 alt algorithm. A requirement that the two sensors in a single monitor agree to within 20% was met by 85–99% of the data using the pm2.5 alt algorithm, but by only 22–74% of the data using the CF_1 algorithm. The limit of detection (LOD) of the CF_1 algorithm was about 10 times the LOD of the pm2.5 alt algorithm, resulting in 71% of the CF_1 data falling below the LOD, compared to 1% for the pm2.5 alt algorithm.

Graphical Abstract

1. Introduction

1.1. Low-Cost Optical Sensors

In recent years, there has been an explosion of interest in low-cost particle monitors. The fundamental question is accuracy. Accuracy can be determined under field conditions by comparison to nearby regulatory monitors employing the gravimetric-based Federal Reference Methods (FRM), which require collection of particles over 24 h followed by weighing filters under strict regulation of temperature and humidity. Since typically only one day out of every three can be monitored in this way, monitors were developed to estimate continuous variation of PM mass. The best of these monitors have passed stringent tests to determine their agreement with the FRM monitors. These monitors are called Federal Equivalence Monitors (FEM) and are in use at several hundred regulatory monitor sites in the United States. The accuracy of low-cost monitors can therefore be determined by comparing to either FRM or FEM monitors at regulatory sites. (For further information on the development of health standards such as PM2.5 and PM10, and the development of optical monitors, see Sections S1.0 and S1.1 of Supplementary Materials).
Accuracy can also be determined by laboratory or chamber investigations. In this approach, several monitors (usually in triplicate) to be tested are placed side by side with one or more reference monitors in a chamber with controlled temperature and humidity. A particle source (often organic PSL spheres, inorganic sodium chloride, or Arizona road dust) is activated and either maintained at a steady concentration or allowed to rise to a peak and then decay so that a wide variety of concentrations can be created.
A major source of both field and laboratory investigations of low-cost monitors is the program known as AQ-SPEC, operated by the South Coast Air Quality Management District (SCAQMD) in California. (This management district includes about 17 million people, 44% of the California population.) Monitors must pass a field test before being administered a chamber test. In the field, sensors are tested alongside one or more of South Coast AQMD’s existing air monitoring stations using traditional federal reference/equivalent method instruments over a 30- to 60-day period to gauge overall performance. Sensors demonstrating acceptable performance in the field are then brought to the AQ-SPEC laboratory for more detailed testing in an environmental chamber under controlled conditions alongside traditional federal reference/equivalent method and/or best available technology instruments [1].
About 100 field evaluation reports and 49 laboratory evaluation reports are presently available [2].
Both field and laboratory investigations of low-cost monitors have been carried out by multiple investigators [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. Many of these involve Plantower sensors.

Plantower Sensors

This paper focuses on Plantower sensors. This focus is supported by the fact that 14 of 47 manufacturers of low-cost particle monitors tested in the AQ-SPEC program use Plantower sensors (Table 1). In addition, the largest national network of low-cost monitors (PurpleAir with perhaps 25,000 monitors) uses Plantower sensors exclusively.

1.2. Algorithms for Optical Particle Sensors

1.2.1. Standard Algorithm for Optical Particle Sensors

The standard algorithm employed by many manufacturers of optical particle counters since the 1970s is a completely open and transparent approach. It typically uses three bins (0.3–0.5 µm, 0.5–1 µm, and 1–2.5 µm) to calculate PM2.5. For each bin, the assumption is made that all particles are spherical and have identical diameters Dp equal to some midpoint (either arithmetic or geometric mean) between the bin boundaries. The volume of the single particle is then πDp3/6. All particles are assumed to have the same density ρ. The mass of the particle is then equal to the density multiplied by the volume: ρπDp3/6. The total mass of particles in the bin is equal to the single-particle mass multiplied by the number Ni of particles in the bin. Finally, PM2.5 is equal to the sum of the masses of all particles in the three bins:
PM2.5 = aN1 + bN2 + cN3    (1)
where a, b, and c are simply the masses of the single (representative) particle in each bin and N1, N2, and N3 are the number of particles in each bin. Numerically, assuming the choice of geometric mean for the particle diameter and a density of 1 g cm−3, the values of a, b, and c are as follows (Table 2):
In this table, the volumes and masses have been divided by 100 since the number Ni of particles is in units of number per deciliter. This allows one to use the numbers reported by Plantower for N1, N2, and N3 without change.
Therefore, the alt PM2.5 algorithm for a density of 1 g cm−3 is given by Equation (1) using the values of a, b, and c shown in the above table.
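As a concrete check, the coefficients a, b, and c described above can be reproduced from the geometric-mean bin diameters and unit density (a minimal Python sketch; function names are illustrative, and the published Table 2 values may be rounded differently):

```python
import math

def bin_coefficient(d_lo_um, d_hi_um, density=1.0):
    """Mass (ug) of one representative spherical particle, divided by 100
    so it can multiply Plantower counts reported per deciliter and yield
    PM2.5 in ug/m^3. Uses the geometric-mean diameter (um) and the
    stated density (g/cm^3), as described in the text."""
    dp = math.sqrt(d_lo_um * d_hi_um)   # geometric-mean diameter, um
    volume = math.pi * dp ** 3 / 6.0    # sphere volume, um^3
    return density * volume / 100.0

a = bin_coefficient(0.3, 0.5)   # 0.3-0.5 um bin
b = bin_coefficient(0.5, 1.0)   # 0.5-1 um bin
c = bin_coefficient(1.0, 2.5)   # 1-2.5 um bin

def pm25_standard(n1, n2, n3):
    """Equation (1): uncalibrated three-bin PM2.5 estimate (ug/m^3),
    with n1, n2, n3 the bin counts per deciliter."""
    return a * n1 + b * n2 + c * n3
```

This yields a ≈ 0.000304, b ≈ 0.00185, and c ≈ 0.0207 µg per (particle/dL), so the largest bin dominates the mass per particle by roughly two orders of magnitude over the smallest.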
As with all uses of PM2.5 estimates, however, it is always recommended that investigators compare the PM2.5 predictions to research-grade instruments measuring the aerosol mixture of interest. In the field, this is usually done by comparing nearby regulatory monitors using gravimetric Federal Reference Method or continuous Federal Equivalence Method (FEM/FRM) to the aerosol mixture being measured. The result of the comparison is a calibration factor (CF) to adjust the PM2.5 estimates. For the choice of geometric mean and density of 1 as mentioned above, the final equation for PM2.5 becomes
PM2.5 = CF(aN1 + bN2 + cN3)    (2)

1.2.2. Application of Standard Algorithm to Plantower Sensors

The standard algorithm in Equation (2) was first applied to Plantower PMS 5003 sensors used in PurpleAir PA-II monitors [4,19]. These studies tested 33 PurpleAir monitors within 500 m of 27 regulatory FEM/FRM monitors in California, finding the CF to be 3.0. The algorithm was therefore named ALT-CF3, where the “ALT” suggests an alternative algorithm to those supplied by Plantower and the “CF3” is the calibration factor found in these two studies. Therefore, the algorithm as applied to the PurpleAir monitors using Plantower PMS 5003 sensors is equal to
ALT-CF3 = 3(aN1 + bN2 + cN3)    (3)
where the a, b, and c coefficients are those in the above table.
This algorithm is freely available on the PurpleAir API site, where it has been renamed “pm2.5 alt”. A major study of 3000 indoor air monitors selected pm2.5 alt as the only algorithm to be used. The study is expected to appear soon in Proceedings of the National Academy of Sciences (PNAS).
A more recent study showed that both the PMS 1003 and PMS 5003 sensors have an identical (within 2%) CF of 3.4 [20], leading to the revised form in Equation (4):
ALT-CF3.4 = 3.4(aN1 + bN2 + cN3)    (4)
where a, b, and c are still unchanged from the values in Table 2 above, but the CF is changed from 3 to 3.4.
This CF of 3.4 has been used in several studies and is used in this study as well. For persons wanting to use this most recent value, it is sufficient to multiply the value given on the API site by 3.4/3, about a 13% increase in PM2.5 values.
An advantage in using equations of the general form of Equation (2) above is that it allows an estimate of the contribution made by each size category to the total mass (PM2.5). For example, typical values of N1, N2, and N3 occurring in a Santa Rosa home were determined from a full-year dataset from 1 January 2021 to 31 December 2021. These are entered into Equation (4) to determine the fractional contribution of each bin to total PM2.5 during this period (Table 3 and Figure 1).

1.2.3. Algorithms Offered by Plantower

The Plantower manual v2.5 for the PMS 1003 sensor describes two algorithms for determining PM1, PM2.5, and PM10. One algorithm is labeled as “CF = 1, standard particle”, the other as “under atmospheric environment”.
The Plantower manual v2.3 for the PMS 5003 sensor has the same labels for the two algorithms, but there is an added note for the CF1 algorithm: “CF = 1 should be used in the factory environment.”
Some have interpreted these cryptic descriptions as indicating that the CF = 1 algorithm should be used indoors and the “under atmospheric environment” algorithm should be used for outdoor measurements. However, Plantower presented no data to support their characterization of the two algorithms.
It is easy to determine the relation between the two algorithms. A 10-day run of data in a Santa Rosa home from 24 April 2019 to 3 May 2019 using a PurpleAir PA-II monitor gave the results shown for the relationship (Figure 2). It should be immediately evident that one of these two algorithms can have no physical reality; the relationship is simply a mathematical model. The two algorithms give identical results for all particle concentrations below about 28 µg/m3; above that concentration, the CF_1/CF_ATM ratio increases linearly by about 0.01 unit for each of the next 50 steps of 1 µg/m3 in the CF_1 value; then, beginning at about 78 µg/m3, the curve bends over and the ratio ultimately becomes fixed at a constant value on the order of 1.5. No actual physical process could behave in this way. This observation by itself does not allow the problematic algorithm to be identified. However, based on correlations with measurements by other methods, the physically unrealizable algorithm is CF_ATM, which should therefore not be used (see Supplementary Materials Section S1.2 and Figures S1–S4). It is uncertain why this algorithm was developed, but the fact that the CF_1 algorithm is found by almost all investigators to overpredict PM mass may have caused the Plantower engineers to search for a new algorithm that would give better estimates. In fact, the CF_ATM algorithm does give lower estimates that may be closer to the truth, but for the wrong reasons. Unfortunately, the PurpleAir corporation has chosen to adopt the CF_ATM algorithm for its outdoor map because it partially corrects the overestimate of the CF_1 algorithm. While this is true, at least for the small number of outdoor concentrations >28 µg/m3, it means that a meaningless algorithm is being used widely by >25,000 consumers with no indication of its lack of scientific basis. Several studies have concluded that the CF_1 algorithm should be used in place of the CF_ATM algorithm [9,18].
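The piecewise relationship just described can be sketched as follows (a rough model of the curve in Figure 2, with approximate breakpoints taken from the text; the ratio is written here as CF_1 divided by CF_ATM, the direction consistent with a limiting value near 1.5):

```python
def cf1_to_atm_ratio(cf1_pm25):
    """Sketch of the empirical relation between the two Plantower outputs:
    identical below ~28 ug/m^3, rising by ~0.01 per 1 ug/m^3 step of CF_1
    over the next ~50 ug/m^3, then flat near 1.5. Breakpoints approximate."""
    return min(1.0 + 0.01 * max(cf1_pm25 - 28.0, 0.0), 1.5)
```

For example, the ratio is 1.0 at 10 µg/m³, about 1.22 at 50 µg/m³, and saturates at 1.5 well above 78 µg/m³.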

Danger of “Proprietary” Algorithms

Plantower presents no information regarding the composition, density, or index of refraction of the test aerosol used to calibrate its sensors. In fact, it does not even mention whether a test aerosol was used, or whether its instruments are calibrated. Its two algorithms are said to be “proprietary”, which seems contrary to the practice of earlier manufacturers, whose algorithms were openly described.
A problem with “proprietary” algorithms is the ever-present possibility that the manufacturer could change the algorithm at will. If the manufacturer does not announce that the algorithm has changed, consumers would not know a change had occurred, unless a careful examination of their historical data revealed a change in some parameter (e.g., the relation of particle numbers in adjacent size categories). This is not only a theoretical possibility but actually occurred for the Plantower PMS 5003 sensor (and possibly other sensors) in about March of 2022. At that time, the PurpleAir technical staff noticed for some new instruments a clear change in the relative number of particles in the 0.3–0.5 µm and 0.5–1 µm size categories. Whereas previously the smallest size category (0.3–0.5 µm) had about three times as many particles as the next larger one (0.5–1 µm), now both categories seemed to have about the same number of particles. Plantower had given no notice of the change, but when contacted by PurpleAir they did admit that a change had occurred. PurpleAir made the decision not to accept the “new” instruments, which could be distinguished from the “old” instruments by the tests that PurpleAir runs on all monitors before releasing them for sale. After some time, no further “new” instruments were received by PurpleAir. However, it is unclear whether some of these “new” instruments may still be available to the 10 or so companies that use Plantower sensors.

1.3. Objectives of this Study

1.3.1. Objective #1

A main objective of this study is to rigorously test the recent “decoded” CF_1 algorithm [26]. In that article, the CF_1 algorithm was found to be nearly perfectly matched by the following equation:
PM2.5 (CF_1) = a(N1 + N2) + cN3 + d    (5)
That is, despite providing specific numbers N1 and N2 of particles in the 0.3–0.5 µm and 0.5–1 µm size categories, the CF_1 algorithm instead uses a single coefficient to multiply the sum of the numbers in the two smallest size categories. The best fit to observed CF_1 PM2.5 estimates required an additive component d. Reference [26] used a single six-month data series of collocated PurpleAir monitors inside and outside a Santa Rosa home from 18 June 2020 to 31 December 2020 to test the model in Equation (5) against observed values of PM2.5 provided by the CF_1 algorithm. Four PA-II monitors (eight independent sensors) were used to give eight independent best-fit estimates of PM2.5 as reported using the CF_1 algorithm. The eight individual models were then averaged to give a single general model, which was then applied again to the observed data. The general model (Equation (5)) had the following values for the coefficients: a = 0.0042, c = 0.10, and d = −1.17 µg/m3.
The individual best-fit models were all in excellent agreement with the CF_1 observations. The general model applied to all cases also gave good results. One interesting finding was that the additive component d was negative and on the order of −1 µg/m3. When the model was compared to the observations, because of this negative value for d, some model estimates of PM2.5 were negative. Interestingly, nearly all of the 18,000 negative results in the model corresponded to values of zero in the observed CF_1 data. This suggested that, indeed, the CF_1 algorithm is of the form in Equation (4), and that rather than report negative concentrations, the Plantower approach was to provide zero values instead. This was apparently the first explanation of the otherwise incomprehensible CF_1 estimates of zero, since N1 and N2 are never zero.
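The hypothesized model and its zero-reporting behavior can be sketched directly (coefficients as published in [26]; the particle counts in the example are hypothetical):

```python
def cf1_decoded(n1, n2, n3, a=0.0042, c=0.10, d=-1.17):
    """Hypothesized 'decoded' CF_1 model: a single coefficient multiplies
    the summed counts of the two smallest bins (per deciliter), plus an
    additive component d (ug/m^3). Negative predictions are reported as
    zero, matching the zeros observed in CF_1 output."""
    pm = a * (n1 + n2) + c * n3 + d
    return max(pm, 0.0)
```

For example, cf1_decoded(1000, 300, 5) gives 4.79 µg/m³, while low counts such as cf1_decoded(100, 50, 0) would be −0.54 µg/m³ and are therefore reported as zero.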
Although the general model developed in [26] was convincingly shown to provide good agreement with the CF_1 algorithm, the agreement was tested against only one fairly old (2020) database for only one site. A reasonable question arises whether the model will hold up if tested at other sites with different PurpleAir monitors and using more recent data.
Therefore, this study further tests the conclusions of reference [26] by using four additional databases and adding a second site in Redwood City, CA. The newer Santa Rosa data include two new PurpleAir Flex monitors employing four PMS 6003 sensors, adding to the other four monitors to provide extensive data on 12 sensors. The Redwood City databases add three collocated PurpleAir monitors (six independent PMS 5003 sensors). Both datasets include indoor and outdoor measurements. For each of the four datasets, individual models of the CF_1 algorithm for each of the independent sensors are created, and a general model is also estimated. We regress both the individual and general model predictions on the observed CF_1 values and analyze the effectiveness of the individual and general models by their intercepts, slopes, and R2 correlations resulting from the regressions. We also calculate the Mean Absolute Error (MAE) to test the performance of the individual and general models.
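The evaluation just described can be sketched with a simple ordinary-least-squares routine (an illustration only, not the authors' code; `pred` and `obs` are hypothetical arrays of model predictions and observed CF_1 values):

```python
def evaluate_model(pred, obs):
    """Regress model predictions on observed CF_1 values and return
    (slope, intercept, R^2, MAE) -- the four statistics used above."""
    n = len(pred)
    mean_x = sum(obs) / n
    mean_y = sum(pred) / n
    sxx = sum((x - mean_x) ** 2 for x in obs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(obs, pred))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((y - (intercept + slope * x)) ** 2
                 for x, y in zip(obs, pred))
    ss_tot = sum((y - mean_y) ** 2 for y in pred)
    r2 = 1.0 - ss_res / ss_tot
    mae = sum(abs(y - x) for x, y in zip(obs, pred)) / n
    return slope, intercept, r2, mae
```

A well-performing model, by the criteria above, yields a slope near 1, an intercept near 0 µg/m³, R² near 1, and a small MAE.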

1.3.2. Objective #2

A second main objective of this study is to compare the Plantower CF_1 algorithm to the independent algorithm described above in Equation (3) and available under the name “pm2.5 alt” on the PurpleAir API site (https://api.purpleair.com/, accessed on 14 August 2023). We compare the precision, accuracy, and limit of detection (LOD) of the two algorithms for estimating PM2.5.

2. Materials and Methods

Calibration of PurpleAir Monitors 1 and 2

Prior to the start of the present study, two of the PurpleAir monitors used were calibrated against three research-grade optical particle counters (TSI Model 510 Sidepaks equipped with PM2.5 cutpoint inlets) [27]. The particle source was aerosol produced from a single puff of marijuana smoke from a vaping pen. The Sidepaks were part of a group of six Sidepaks, which were in turn calibrated against gravimetric samplers sampling from the same indoor source [28]. Over an eight-month period, 47 experiments, each lasting 6–10 h, were carried out in a dedicated 30 m3 room of a home in Santa Rosa, CA, USA. The experiments plotted the decay of the particle concentration. Following initial mixing, the decay becomes linear (on a logarithmic scale) for a period of well-mixed concentrations. The slope is given by the sum of the air change rate a and the deposition rate k. The air exchange rate was measured by releasing a puff of carbon monoxide and tracking its decay using a Langan CO monitor T15. The regression line fitted to the decay curve can then be followed back to the beginning of the experiment to estimate the total mass released (in mg/puff), using a method developed in [29]. From the gravimetric experiments, the Sidepaks were determined to have a calibration factor (CF) of 0.44 (SE 0.03) [28]. The two PurpleAir monitors (numbers 1 and 2 with four independent Plantower sensors 1a, 1b, 2a, and 2b) were determined to have a calibration factor of 3.24, midway between the calibration factors of 3.0 and 3.4 found in various studies of outdoor air [4,19,20,28]. A linear least-squares regression of the PurpleAir monitors against the SidePaks resulted in a slope of 1.00 and an R2 of 98.6% (Figure 3).
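The decay-curve analysis can be sketched as a log-linear regression (a minimal illustration with hypothetical data; the actual experiments used measured concentrations over 6–10 h runs):

```python
import math

def fit_decay(times_h, conc):
    """Log-linear fit to a well-mixed decay period. The slope magnitude
    estimates the combined rate (a + k) in 1/h, and back-extrapolating
    the fitted line to t = 0 gives the initial concentration used to
    estimate total mass released (sketch of the method in [29])."""
    n = len(times_h)
    y = [math.log(c) for c in conc]
    mt = sum(times_h) / n
    my = sum(y) / n
    stt = sum((t - mt) ** 2 for t in times_h)
    sty = sum((t - mt) * (v - my) for t, v in zip(times_h, y))
    slope = sty / stt                 # equals -(a + k)
    c0 = math.exp(my - slope * mt)    # back-extrapolated concentration at t = 0
    return -slope, c0
```

On a synthetic exponential decay starting at 120 µg/m³ with a + k = 0.8 h⁻¹, the fit recovers both values to floating-point precision.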
Two sites provided data using both the Plantower CF_1 and pm2.5 alt algorithms. At the Santa Rosa site where monitors 1 and 2 were calibrated, data were collected for one year (2021) using four monitors (eight Plantower sensors). Three PA-II monitors were collocated indoors in the same 30 m3 room where monitors 1 and 2 were calibrated. They were placed 1.5 m high with the intake unobstructed. One monitor was outside at a height of 2 m and at 1 m distance from the house. The monitors collected data every two minutes, and the data were averaged over 10 min periods. In 2022, two new PurpleAir Flex monitors with two Plantower PMS 6003 sensors each were added to the four existing monitors with the Plantower PMS 5003 sensors. Therefore, a second data collection period was selected for the Santa Rosa site running from 1 January 2023 to 13 July 2023.
At the Redwood City site, the data were collected by two collocated PA-II monitors at 1 m height in a 43 m3 room. One PA-II monitor was located outdoors. Two data periods were selected from this site. The first runs from 29 April 2021 to 31 October 2022. The second runs from 22 September 2022 to 23 June 2023.
In summary, four datasets from two locations were analyzed for this study (Table 4). Ultimately, 18 independent Plantower sensors were included.

3. Results

3.1. Santa Rosa Site

3.1.1. Santa Rosa 2021 Dataset

Monitors 1–4 were used throughout the full year 2021 in Santa Rosa. Since monitors 1 and 2 were calibrated against the SidePak earlier, they were considered reference monitors for this study. Monitor 3 (sensors 3a and 3b) monitored outdoor air. Monitor 4 was collocated indoors with monitors 1 and 2, so the 4a and 4b sensors’ PM2.5 estimates from the alt CF3.4 algorithm were compared with the mean values of monitors 1 and 2 (Table 5).
For this year of 2021, the model was tested against the observed CF_1 PM2.5 estimates for each of the eight sensors. The individual estimates of a–d are found in Table 6. The mean of those estimates (highlighted row in Table 6) is the general model to be tested.

3.1.2. Santa Rosa 2023 Dataset

With the addition of the two Flex monitors in late 2022, all 12 of the Santa Rosa sensors were available for the 6-month period from 1 January 2023 to 13 July 2023. Mean PM2.5 values for the three monitors 4–6 were compared to the calibrated monitors 1 and 2 (Table 7). (Monitor 3 was outdoors, so sensors 3a and 3b could not be compared to the calibrated monitors 1 and 2.)
For the 2023 data, the model was tested against the observed CF_1 PM2.5 estimates for each of the 12 sensors to determine the parameters a, c, and d for the individual best-fit models (Table 8). The mean of those estimates is the general model to be tested.
PM2.5 estimates from the best-fit individual models and the general model were regressed on observed CF_1 PM2.5. Results are provided for the 12 tested sensors in Santa Rosa in 2023 (Table 9). The individual models had a mean intercept of 0.01 µg/m3, a mean slope of 0.997, and a mean R2 of 0.997. The general models had similar mean values, but a wider range of individual values. However, no general model failed to meet the EPA guidelines of 5 µg/m3 for the intercept (range: −0.2 to +0.4 µg/m3) and between 0.9 and 1.1 for the slope (range: 0.93 to 1.08) [17].
Examples of an individual best-fit model and a general model for the same sensor, taken from the 2023 Santa Rosa data, are provided in Figure 4 and Figure 5. Note the extremely close approach to a zero intercept and slope of 1.00 for the individual model. The general model has an intercept of −0.053 µg/m3 and a slope of 0.97, but these are still very good results.

3.2. Redwood City

3.2.1. Redwood City 2021–2022 Dataset

For this 17-month dataset, all individual models approached the origin closely (0.006 to 0.048 µg/m3). Their slopes and R2 values were above 0.99 in five of six cases. The general model also meets requirements for a good fit of a model to observed concentrations, with all intercepts less than 1 µg/m3 and all slopes between 0.95 and 1.065.
The results for the individual and general models may be found in the Supplementary Materials Section S1.3, Tables S1–S3.

3.2.2. Redwood City 2023 Dataset

For this 9-month dataset, the best estimates for a, c, and d are summarized in Table 10.
For the Redwood City site, the regressions of the individual models on observed CF_1 PM2.5 all had intercepts below 0.2 µg/m3 and slopes >0.96 (Table 11). The general model performed well, with intercepts between −0.32 and +0.74 µg/m3 and slopes between 0.95 and 1.06. Thus, all models met the EPA requirement for an intercept absolute value less than 5 µg/m3 and a slope between 0.9 and 1.1.
Our second main objective is to compare the two algorithms, CF_1 and pm2.5 alt, for precision, accuracy, and limit of detection.

3.3. Precision

The precision of the CF_1 and pm2.5 alt algorithms is calculated by comparing the A and B sensors within a PA-II or Flex monitor. We have calculated precision by abs(A − B)/(A + B), although some prefer to use the coefficient of variation (CV) or relative standard deviation (RSD). The CV and RSD are equal to sqrt(2) times the precision as defined above. A reasonable choice for an upper limit on precision would be, say, 20% using the definition above, which corresponds to a CV or RSD of 28%. For each of the nine monitors during the 6-month 2023 period in both the Santa Rosa and Redwood City locations, the number of measurements meeting the precision standard of 20% is provided for both the CF_1 and pm2.5 alt algorithms (Table 12). For the CF_1 algorithm, the fraction of observations meeting the standard ranges from 0.22 to 0.74; for the pm2.5 alt algorithm, the fraction ranges from 0.85 to 0.99.
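The precision metric and the 20% screen can be written directly (a sketch; `pairs` holds hypothetical simultaneous A/B readings from one monitor):

```python
def precision(a, b):
    """Within-monitor precision |A - B| / (A + B); the corresponding
    CV or RSD is sqrt(2) times this value."""
    return abs(a - b) / (a + b)

def fraction_meeting(pairs, limit=0.20):
    """Fraction of paired A/B readings meeting the precision limit.
    Pairs summing to zero cannot be evaluated and are counted as
    failing, consistent with the loss of zero-valued CF_1 data."""
    ok = sum(1 for a, b in pairs if (a + b) > 0 and precision(a, b) <= limit)
    return ok / len(pairs)
```

For instance, readings of 12 and 8 µg/m³ give a precision of 0.20, just meeting the limit (CV/RSD ≈ 28%).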
The major loss of observations for the CF_1 algorithm shown in the above table is due in part to the many values of zero that occur. However, the zero values are not the only reason for the poor performance. Even considering only those observations with a precision meeting the standard, the mean and median precision estimates for the CF_1 algorithm are consistently worse than for the pm2.5 alt algorithm. Comparing the precision of the six Santa Rosa monitors in 2023 for the two algorithms CF_1 and pm2.5 alt, the upper limit of 0.2 is easily met by the pm2.5 alt algorithm, whereas no mean value for the CF_1 algorithm was able to meet the 0.2 upper limit (Table 13).

3.4. Accuracy

Since monitors 1 and 2 were calibrated by collocation with three research-grade TSI Sidepaks Model 510, which had themselves been calibrated against gravimetric monitors, we estimate the accuracy of the CF_1 algorithm using the average of these two monitors (four sensors) as the reference. Regressions of the CF_1 estimates against the alt CF3.4 estimates for sensors 1a, 1b, 2a and 2b resulted in slopes of 1.60, 1.65, 1.63, and 1.70, indicating overestimates by 60–70% for the CF_1 algorithm. Similar overestimates have also been noted by multiple investigators [3,9,10,12,13,14]. An example is shown from the most recent dataset from 1 January 2023 to 13 July 2023 (Figure 6).

Limit of Detection (LOD)

The LOD for the two algorithms was calculated using a method introduced in [30]. The method involves identifying all cases with the mean/SD < 3 and searching for their appearance in “batches” of 100 or so samples ordered by concentration. If, beyond a certain concentration, there are no cases with five or more such values appearing in each 100-sample batch, then the LOD has about a 95% probability of being that concentration. The LOD is of particular interest for indoor studies since indoor concentrations are often quite low. For the five collocated indoor PA-II and Flex monitors in the most recent 2023 data from Santa Rosa, the LOD was 0.106 µg/m3 for the pm2.5 alt algorithm and 1.22 µg/m3 for the CF_1 algorithm. Although both values seem fairly small, because of the low concentrations in general, only 29% of the 39,014 measurements were above the CF_1 LOD, compared to 99% above the LOD for the pm2.5_alt algorithm.
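The batch method can be sketched as follows (an interpretation of the description above; the exact bookkeeping in [30] may differ, and the flagging of mean/SD < 3 cases is assumed to be done beforehand):

```python
def estimate_lod(conc, flagged, batch=100):
    """Batch method for the LOD as described above. `conc` holds the
    measured concentrations and `flagged[i]` is True where mean/SD < 3
    for that measurement. Samples are ordered by concentration and
    grouped into consecutive batches; the LOD is taken as the top of
    the highest-concentration batch still containing five or more
    flagged (noise-dominated) values."""
    order = sorted(range(len(conc)), key=lambda i: conc[i])
    last_noisy_top = None
    for start in range(0, len(order), batch):
        idx = order[start:start + batch]
        if sum(1 for i in idx if flagged[i]) >= 5:
            last_noisy_top = idx[-1]
    if last_noisy_top is None:
        return 0.0  # noise never dominates any batch
    return conc[last_noisy_top]
```

On synthetic data where only the lowest-concentration batch is noise-dominated, the routine returns the top concentration of that batch as the LOD.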

4. Discussion

In summary, four new datasets with about 200,000 additional observations were employed to test the basic model of Equation (5) on 18 independent particle sensors (14 PMS 5003 sensors and 4 PMS 6003 sensors). The individual best-fit models performed superlatively well, with intercepts mostly within ±0.1 µg/m3 and slopes mostly close to 0.99. Even the general models were well within the guidelines of a successful model, with intercept absolute values always less than 1 µg/m3 and slopes between 0.95 and 1.06. The four new general models all had estimates for the a and c parameters generally within ±10% (Table 14). Because of the small value of the additive constant d, its percentage difference was larger, at about 10–20%.
A measure of the fit of a model to observations is the Mean Absolute Error (MAE). MAE results were calculated for the individual models and for the general model fits for the eight sensors in the Santa Rosa site for the 1 January 2023 to 13 July 2023 dataset. For individual sensors, MAEs ranged from 0.17 to 0.42 µg/m3. For the general models, the range was only slightly larger, from 0.20 to 0.52 µg/m3. For the four Flex sensors, the range was from 0.21 to 0.30 for the individual models and 0.23–0.34 for the general models. For the Redwood City 2023 data, the MAEs for the six sensors 7a to 9b ranged between 0.13 and 0.34 µg/m3 for the individual models and between 0.21 and 0.38 µg/m3 for the general model, with the exception of an MAE value of 1.24 µg/m3 for sensor 9a, which had a constant offset of 1.75 µg/m3. Once this offset was subtracted from every measurement, the agreement with sensor 9b was quite good. This was the only case of a bias encountered among the 18 sensors tested.
These findings provide support for the original hypothesis in [26] that the CF_1 algorithm has the form shown in Equation (5), in which the numbers of particles N1 and N2 in the two smallest size categories are combined and multiplied by a single parameter. There is also support for the hypothesis that there is an additional additive component d of order −1 µg/m3. When this value is used in both the individual and general models, the number of negative values in the models almost exactly matches the number of zeros reported by the CF_1 algorithm. This observation suggests a reason for the otherwise mysterious appearance of multiple zeros in CF_1 estimates when in fact the number of particles N1 is never zero.
Precision of the CF_1 algorithm was consistently worse than that of the pm2.5 alt algorithm, with 26–78% of values unable to meet the standard of 20%. For the pm2.5 alt algorithm, 90–99% of values met the standard for eight of the nine monitors, and even the ninth monitor lost only 15% of the values compared to 54% for the CF_1 algorithm.
Accuracy, as found by regressing CF_1 values on the average of the four calibrated sensors, ranged between 60 and 70% overestimates, agreeing with multiple other studies.
The limit of detection for the CF_1 algorithm, although not obviously high at 1.22 µg/m3, was more than 10 times the LOD of 0.11 µg/m3 for the pm2.5 alt method, resulting in a large percentage of values (71%) below the LOD, compared to 1% for the pm2.5 alt algorithm.
Finally, we have provided in the Supplementary Materials a brief history of the development of health standards dating from the London fog of 1952 [31,32] and the development of aerosol instrumentation for workplace monitoring [33,34,35], which led ultimately to the development of today’s low-cost particle monitors [36,37].

5. Conclusions

We find first that the CF_ATM algorithm offered by Plantower has no physical basis and should not be used.
Secondly, we have confirmed the finding that the CF_1 algorithm has the form a(N1 + N2) + cN3 + d and have estimated values for a, c, and d within about 10–20% tolerance. The finding that the additive component d is a negative value on the order of −1 µg/m3 may explain the large number of zeros often reported by the CF_1 algorithm—they are due to negative concentrations predicted by the CF_1 algorithm and therefore are set to zero.
Thirdly, the bias of the CF_1 algorithm, compared to calibrated monitors using the pm2.5 alt algorithm, is on the order of 60–70% overestimation of PM2.5, a result similar to those of many investigators. Precision, particularly at the low PM2.5 concentrations commonly found in indoor air, is poor, resulting in the loss of 71% of observations in the most recent (2023) dataset analyzed. The CF_1 LOD for this set of observations in 2023 was more than 10 times the LOD for the pm2.5 alt algorithm, resulting in only 29% of observations exceeding the LOD, compared to 99% for the pm2.5 alt algorithm.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/a16080392/s1, Sections S1.0–S1.3; Figures S1–S4; Tables S1–S3.

Funding

This research received no external funding.

Data Availability Statement

Data will be made available on request.

Acknowledgments

The important contribution to science made by the PurpleAir organization, in making widely available the data of all participants who agree to share them, is acknowledged.

Conflicts of Interest

The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Papapostolou, V.; Zhang, H.; Feenstra, B.J.; Polidori, A. Development of an environmental chamber for evaluating the performance of low-cost air quality sensors under controlled conditions. Atmos. Environ. 2017, 171, 82–90. [Google Scholar] [CrossRef]
  2. AQ-SPEC. 2016. Available online: https://www.aqmd.gov/aq-spec/evaluations/criteria-pollutants/summary-pm (accessed on 21 July 2023).
  3. Barkjohn, K.; Gantt, B.; Clements, A.L. Development and application of a United States wide correction for PM2.5 data collected with the PurpleAir sensor. Atmos. Meas. Tech. 2021, 14, 4617–4637. [Google Scholar]
  4. Bi, J.; Wallace, L.A.; Sarnat, J.A.; Liu, Y. Characterizing outdoor infiltration and indoor contribution of PM2.5 with citizen-based low-cost monitoring data. Environ. Pollut. 2021, 276, 116793. [Google Scholar] [CrossRef]
  5. Bulot, F.M.J.; Russell, H.S.; Rezaei, M.; Johnson, M.S.; Ossont, S.J.J.; Morris, A.K.R.; Basford, P.J.; Easton, N.H.C.; Foster, G.L.; Loxham, M.; et al. Laboratory Comparison of Low-Cost Particulate Matter Sensors to Measure Transient Events of Pollution. Sensors 2020, 20, 2219. [Google Scholar] [CrossRef] [PubMed]
  6. Delp, W.W.; Singer, B.C. Wildfire smoke adjustment factors for low-cost and professional PM2.5 monitors with optical sensors. Sensors 2020, 20, 3683. [Google Scholar] [CrossRef]
  7. Gupta, P.; Doraiswamy, P.; Levy, R.; Pikelnaya, O.; Maibach, J.; Feenstra, B.; Polidori, A.; Kiros, F.; Mills, K.C. Impact of California fires on local and regional air quality: The role of a low-cost sensor network and satellite observations. GeoHealth 2018, 2, 172–181. [Google Scholar] [CrossRef]
  8. He, M.; Kuerbanjiang, N.; Dhaniyala, S. Performance characteristics of the low-cost Plantower PMS optical sensor. Aerosol Sci. Technol. 2020, 54, 232–241. [Google Scholar] [CrossRef]
  9. Holder, A.L.; Mebust, A.K.; Maghran, L.A.; McGown, M.R.; Stewart, K.E.; Vallano, D.M.; Elleman, R.A.; Baker, K.R. Field Evaluation of Low-Cost Particulate Matter Sensors for Measuring Wildfire Smoke. Sensors 2020, 20, 4796. [Google Scholar] [CrossRef]
  10. Kelly, K.; Whitaker, J.; Petty, A.; Widmer, C.; Dybwad, A.; Sleeth, D.; Martin, R.; Butterfield, A. Ambient and laboratory evaluation of a low-cost particulate matter sensor. Environ. Pollut. 2017, 221, 491–500. [Google Scholar] [CrossRef]
  11. Zamora, M.L.; Xiong, F.; Gentner, D.R.; Kerkez, B.; Kohrman-Glaser, J.; Koehler, K. Field and Laboratory Evaluations of the Low-Cost Plantower Particulate Matter Sensor. Environ. Sci. Technol. 2018, 53, 838–849. [Google Scholar] [CrossRef]
  12. Liang, Y.; Sengupta, D.; Campmier, M.J.; Lunderberg, D.M.; Apte, J.S.; Goldstein, A.H. Wildfire smoke impacts on indoor air quality assessed using crowdsourced data in California. Proc. Natl. Acad. Sci. USA 2021, 118, e2106478118. [Google Scholar] [CrossRef]
  13. Robinson, D.L. Accurate, low cost PM2.5 measurements demonstrate the large spatial variation in wood smoke pollution in regional Australia and improve modeling and estimates of health costs. Atmosphere 2020, 11, 856. [Google Scholar] [CrossRef]
  14. Sayahi, T.; Butterfield, A.; Kelly, K.E. Long-term field evaluation of the Plantower PMS low-cost particulate matter sensors. Environ. Pollut. 2019, 245, 932–940. [Google Scholar] [CrossRef]
  15. Singer, B.C.; Delp, W.W. Response of consumer and research grade indoor air quality monitors to residential sources of fine particles. Indoor Air 2018, 28, 624–639. [Google Scholar] [CrossRef]
  16. Tryner, J.; L’Orange, C.; Mehaffy, J.; Miller-Lionberg, D.; Hofstetter, J.C.; Wilson, A.; Volckens, J. Laboratory evaluation of low-cost PurpleAir PM monitors and in-field correction using co-located portable filter samplers. Atmos. Environ. 2020, 220, 117067. [Google Scholar] [CrossRef]
  17. US EPA. 2017. Available online: https://www.epa.gov/air-sensor-toolbox/how-use-air-sensors-air-sensor-guidebook (accessed on 30 March 2022).
  18. Wallace, L. Intercomparison of PurpleAir Sensor Performance over Three Years Indoors and Outdoors at a Home: Bias, Precision, and Limit of Detection Using an Improved Algorithm for Calculating PM2.5. Sensors 2022, 22, 2755. [Google Scholar] [CrossRef]
  19. Wallace, L.; Bi, J.; Ott, W.R.; Sarnat, J.; Liu, Y. Calibration of low-cost PurpleAir outdoor monitors using an improved method of calculating PM2.5. Atmos. Environ. 2021, 256, 118432. [Google Scholar] [CrossRef]
  20. Wallace, L.; Zhao, T.; Klepeis, N.E. Calibration of PurpleAir PA-I and PA-II monitors using daily mean PM2.5 concentrations measured in California, Washington, and Oregon from 2017 to 2021. Sensors 2022, 22, 4741. [Google Scholar] [CrossRef]
  21. Wallace, L.A.; Zhao, T.; Klepeis, N.E. Indoor contribution to PM2.5 exposure using all PurpleAir sites in Washington, Oregon, and California. Indoor Air 2022, 32, e13105. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, Z.; Delp, W.W.; Singer, B.C. Performance of low-cost indoor air quality monitors for PM2.5 and PM10 from residential sources. Build. Environ. 2020, 174, 106654. [Google Scholar] [CrossRef]
  23. Zheng, T.; Bergin, M.H.; Johnson, K.K.; Tripathi, S.N.; Shirodkar, S.; Landis, M.S.; Sutaria, R.; Carlson, D.E. Field evaluation of low-cost particulate matter sensors in high- and low-concentration environments. Atmos. Meas. Tech. 2018, 11, 4823–4846. [Google Scholar] [CrossRef]
  24. Zikova, N.; Hopke, P.K.; Ferro, A.R. Evaluation of new low-cost particle monitors for PM2.5 concentrations measurements. J. Aerosol Sci. 2017, 105, 24–34. [Google Scholar] [CrossRef]
  25. Zusman, M.; Schumacher, C.S.; Gassett, A.J.; Spalt, E.W.; Austin, E.; Larson, T.V.; Carvlin, G.; Seto, E.; Kaufman, J.D.; Sheppard, L. Calibration of low-cost particulate matter sensors: Model development for a multi-city epidemiological study. Environ. Int. 2020, 134, 105329. [Google Scholar] [CrossRef]
  26. Wallace, L. Cracking the code—Matching a proprietary algorithm for a low-cost sensor measuring PM1 and PM2.5. Sci. Total Environ. 2023, 893, 164874. [Google Scholar] [CrossRef] [PubMed]
  27. Wallace, L.; Ott, W.; Zhao, T.; Cheng, K.-C.; Hildemann, L. Secondhand exposure from vaping marijuana: Concentrations, emissions, and exposures determined using both research-grade and low-cost monitors. Atmos. Environ. X 2020, 8, 100093. [Google Scholar] [CrossRef]
  28. Zhao, T.; Cheng, K.-C.; Ott, W.R.; Wallace, L.; Hildemann, L.M. Characteristics of secondhand cannabis smoke from common smoking methods: Calibration factor, emission rate, and particle removal rate. Atmos. Environ. 2020, 242, 117731. [Google Scholar] [CrossRef]
  29. Switzer, P.; Ott, W. Derivation of an indoor air averaging time model from the mass balance equation for the case of independent source inputs and fixed air exchange rates. J. Expos. Anal. Environ. Epidemiol. 1992, 2, 113–135. [Google Scholar]
  30. Wallace, L.A.; Wheeler, A.J.; Kearney, J.; Van Ryswyk, K.; You, H.; Kulka, R.H.; Rasmussen, P.E.; Brook, J.R.; Xu, X. Validation of continuous particle monitors for personal, indoor, and outdoor exposures. J. Expo. Sci. Environ. Epidemiol. 2010, 21, 49–64. [Google Scholar] [CrossRef]
  31. Bell, M.L.; Davis, D.L. Reassessment of the Lethal London Fog of 1952: Novel Indicators of Acute and Chronic Consequences of Acute Exposure to Air Pollution. Environ. Health Perspect. 2001, 109, 389–394. [Google Scholar] [CrossRef]
  32. Britannica. 2022. Available online: https://www.britannica.com/event/Great-Smog-of-London (accessed on 1 January 2023).
  33. Walton, W.H.; Vincent, J.H. Aerosol Instrumentation in Occupational Hygiene: An Historical Perspective. Aerosol Sci. Technol. 1998, 28, 417–438. [Google Scholar] [CrossRef]
  34. Phalen, R.F.; Hinds, W.C.; Lioy, P.J.; Lippmann, M.; McCawley, M.A.; Raabe, O.G.; Soderholm, S.C.; Stuart, B.O. Particle Size-Selective Sampling in the Workplace: Rationale and Recommended Techniques. Ann. Occup. Hyg. 1988, 32, 403–411. Available online: https://www.sciencedirect.com/science/article/abs/pii/B9780080341859500462?via%3Dihub (accessed on 20 July 2023).
  35. CEN 1995. Workplace atmospheres—Guidance for the assessment of exposure by inhalation to chemical agents for comparison with limit values and measurement strategy. CEN 689, 1995. European Committee for Standardization (CEN), rue de Stassart 36, B-1050 Brussels, Belgium. Available online: https://standards.iteh.ai/catalog/standards/cen/cf6e7b0a-00ef-46c6-a0f3-89f61e2c5866/en-689-1995 (accessed on 20 July 2023).
  36. Stavroulas, I.; Grivas, G.; Michalopoulos, P.; Liakakou, E.; Bougiatioti, A.; Kalkavouras, P.; Fameli, K.M.; Hatzianastassiou, N.; Mihalopoulos, N.; Gerasopoulos, E. Field Evaluation of Low-Cost PM Sensors (Purple Air PA-II) Under Variable Urban Air Quality Conditions, in Greece. Atmosphere 2020, 11, 926. [Google Scholar]
  37. Crilley, L.R.; Shaw, M.; Pound, R.; Kramer, L.J.; Price, R.; Young, S.; Lewis, A.C.; Pope, F.D. Evaluation of a low-cost optical particle counter (Alphasense OPC-N2) for ambient air monitoring. Atmos. Meas. Tech. 2018, 13, 1181–1193. [Google Scholar] [CrossRef]
Figure 1. Contribution of each bin to total PM2.5 during one year of monitoring indoor air PM2.5 in a home. Bin midpoints are the geometrical mean of the bin boundaries.
Figure 2. Relationship of the CF_1 algorithm to the “atmospheric environment” (CF_ATM) algorithm.
Figure 3. PurpleAir (CF 3.24) estimates of particle source strength (mg/puff) compared to SidePak (CF 0.44). PurpleAir monitor IDs 1 and 2 (sensors 1a, 1b, 2a, 2b) were averaged for this comparison. SidePak values are means of triplicate SidePaks.
Figure 4. Regression of individual model against measured PM2.5 using the CF_1 algorithm. The R2 was 0.9994.
Figure 5. Regression of general model for the same Plantower sensor as in Figure 4. The R2 was 0.9990.
Figure 6. Regression of CF_1 estimates of PM2.5 for sensor 1b vs. the calibrated pm2.5 alt estimate as the reference.
Table 1. Low-cost particle instruments using Plantower sensors tested in chamber studies by AQ-SPEC.

| Instrument Tested by AQ-SPEC in Laboratory Report | Plantower Model |
|---|---|
| Airbeam 2 | 7003 |
| Airbeam 3 | 7003 |
| Air Quality Egg | 5003 |
| APT Minima | A003 |
| AS-Lung | 3003 |
| Davis Instruments | A003 |
| PurpleAir PM-I | 1003 |
| PurpleAir PM-II | 5003 |
| PurpleAir Flex | 6003 |
| Redspira | 5003 |
| Sain Smart | 5003 |
| Smart Citizen Kit | 5003 |
| Magna SCI SRL uRAD SMOGGIE | A003 |
Table 2. Calculation of the coefficients a, b, and c.

| Bin | Dp (Geom. Mean) (µm) | Volume (µm³/100) | Mass (Density = 1) (µg/m³/100) |
|---|---|---|---|
| 0.3–0.5 µm | 0.387298335 | 0.000304183 | a = 0.000304183 |
| 0.5–1 µm | 0.707106781 | 0.001851201 | b = 0.001851201 |
| 1–2.5 µm | 1.58113883 | 0.020697059 | c = 0.020697059 |
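The coefficients in Table 2 follow directly from spherical-particle geometry: each bin's mass per 100 particles is the volume of a sphere at the bin's geometric-mean diameter, times unit density, divided by 100. A sketch reproducing the table values:

```python
import math

def mass_coefficient(d_lo, d_hi, density=1.0):
    """Mass (µg/m³ per 100 particles/dL) for a size bin, using the
    geometric mean of the bin boundaries (µm) as the particle diameter."""
    dp = math.sqrt(d_lo * d_hi)        # geometric-mean diameter, µm
    volume = math.pi / 6.0 * dp ** 3   # sphere volume, µm³
    return density * volume / 100.0

a = mass_coefficient(0.3, 0.5)  # ≈ 0.000304183
b = mass_coefficient(0.5, 1.0)  # ≈ 0.001851201
c = mass_coefficient(1.0, 2.5)  # ≈ 0.020697059
```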
Table 3. Contribution of each bin to PM2.5 over 1 year monitoring indoor particles.

| Bin (µm) | Mass per Particle (µg/m³/100) | # Particles (Ni) per Deciliter | Mass per Bin (µg/m³) | Fraction of PM2.5 |
|---|---|---|---|---|
| 0.3–0.5 | 0.000304183 | 445 | 0.135 | 0.146 |
| 0.5–1 | 0.001851201 | 147 | 0.272 | 0.294 |
| 1–2.5 | 0.020697059 | 25 | 0.518 | 0.560 |
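Table 3's mass-per-bin and fraction columns are just the product of the Table 2 coefficients with the mean particle counts; a sketch (the counts are rounded means, so the last digit may differ slightly from the table):

```python
coeff = {"0.3-0.5": 0.000304183, "0.5-1": 0.001851201, "1-2.5": 0.020697059}
counts = {"0.3-0.5": 445, "0.5-1": 147, "1-2.5": 25}  # mean particles per deciliter

mass = {b: coeff[b] * counts[b] for b in coeff}       # µg/m³ contributed per bin
total = sum(mass.values())
fraction = {b: mass[b] / total for b in mass}         # share of total PM2.5
```

The largest bin (1–2.5 µm) holds only about 4% of the particles but supplies roughly 56% of the mass, which is why the coarse-bin term cN3 dominates the CF_1 estimate.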
Table 4. Datasets analyzed in this study.

| Site | Year | Start Date | End Date | N obs |
|---|---|---|---|---|
| Santa Rosa | 2021 | 1 January 2021 | 31 December 2021 | 52,400 |
| Santa Rosa | 2023 | 1 January 2023 | 13 July 2023 | 24,700 |
| Redwood City | 2021–2022 | 29 April 2021 | 31 October 2022 | 78,900 |
| Redwood City | 2022–2023 | 22 September 2022 | 23 June 2023 | 39,900 |
Table 5. Comparison of collocated sensors 4a and 4b to the calibrated monitors 1 and 2.

| | Valid N | Mean | Std. Err. | Median | Upper Quartile |
|---|---|---|---|---|---|
| Mean calibrated monitors (1a, 1b, 2a, 2b) | 48,073 | 2.95 | 0.029 | 1.60 | 3.14 |
| 4a pm2.5 alt CF3.4 | 52,435 | 2.94 | 0.039 | 1.37 | 2.80 |
| 4b pm2.5 alt CF3.4 | 52,435 | 3.09 | 0.042 | 1.36 | 2.91 |
Table 6. Estimates of a, c, and d for individual best-fit models of CF_1 PM2.5 during 2021.

| Sensor | N | a | c | d |
|---|---|---|---|---|
| 1a | 52,468 | 0.004127 | 0.093125 | −0.84448 |
| 1b | 52,468 | 0.003926 | 0.111286 | −0.97701 |
| 2a | 51,102 | 0.00379 | 0.12145 | −1.08068 |
| 2b | 51,102 | 0.00446 | 0.09843 | −1.4993 |
| 3a | 52,468 | 0.003148 | 0.103468 | −0.78551 |
| 3b | 52,468 | 0.003901 | 0.111096 | −0.99718 |
| 4a | 52,468 | 0.00476 | 0.08175 | −1.06733 |
| 4b | 52,468 | 0.003461 | 0.12012 | −0.9171 |
| mean | | 0.003947 | 0.105091 | −1.02107 |
| SD | | 0.000515 | 0.013643 | 0.218565 |
| SE | | 0.000182 | 0.004824 | 0.077274 |
| RSD | | 0.046158 | 0.045899 | −0.07568 |
| RSE | | 4.60% | 4.59% | 7.57% |
Table 7. Comparison of collocated monitors 4–6 to the calibrated monitors 1 and 2 in 2023 data from the Santa Rosa site.

| Sensor ID | Valid N | Mean | Std. Err. | Lower Quartile | Median | Upper Quartile |
|---|---|---|---|---|---|---|
| 1a, 1b, 2a, 2b | 24,727 | 1.01 | 0.010 | 0.32 | 0.61 | 1.17 |
| 4a | 24,727 | 1.07 | 0.011 | 0.30 | 0.61 | 1.22 |
| 4b | 24,727 | 1.16 | 0.012 | 0.33 | 0.64 | 1.26 |
| 5a | 24,725 | 1.06 | 0.012 | 0.29 | 0.57 | 1.15 |
| 5b | 24,725 | 1.15 | 0.013 | 0.31 | 0.62 | 1.24 |
| 6a | 24,725 | 1.17 | 0.013 | 0.31 | 0.62 | 1.30 |
| 6b | 24,725 | 1.07 | 0.012 | 0.30 | 0.59 | 1.18 |
Table 8. Individual best-fit models of CF_1 for 12 sensors. The general model is calculated as the mean values of a, c, and d.

| Sensor | N | a | c | d |
|---|---|---|---|---|
| 1a | 39,011 | 0.004109 | 0.092604 | −0.614406 |
| 1b | 39,011 | 0.003677 | 0.10811 | −0.685727 |
| 2a | 39,010 | 0.00447 | 0.096386 | −0.76239 |
| 2b | 39,010 | 0.00433 | 0.078342 | −0.959499 |
| 3a | 39,011 | 0.00382 | 0.095303 | −0.678204 |
| 3b | 39,008 | 0.003972 | 0.0986 | −0.728282 |
| 4a | 39,001 | 0.004565 | 0.079921 | −0.92565 |
| 4b | 39,001 | 0.004467 | 0.097416 | −0.848253 |
| 5a | 39,001 | 0.003897 | 0.097416 | −0.848253 |
| 5b | 39,001 | 0.003543 | 0.106866 | −0.612662 |
| 6a | 38,992 | 0.002672 | 0.123556 | −0.775122 |
| 6b | 38,992 | 0.003937 | 0.10173 | −0.64114 |
| mean | | 0.003955 | 0.098021 | −0.75663233 |
| SD | | 0.00052 | 0.012065 | 0.11799929 |
| SE | | 0.00015 | 0.003483 | 0.03406346 |
| RSD | | 0.131586 | 0.123084 | −0.15595327 |
| RSE | | 0.037986 | 0.035531 | −0.04501983 |
| RSE (%) | | 3.8% | 3.6% | 4.5% |
Table 9. Regression results (intercept, slope, R2) for the individual models and the general model on observed values of CF_1 PM2.5 from the period 1 January 2023 to 7 January 2023 at the Santa Rosa site.

| Sensor | N | Intercept (Indiv.) | Slope (Indiv.) | R2 (Indiv.) | Intercept (Gen.) | Slope (Gen.) | R2 (Gen.) |
|---|---|---|---|---|---|---|---|
| 1a | 27,629 | 0.0115 | 0.9893 | 0.9894 | −0.134 | 0.9944 | 0.9893 |
| 1b | 27,629 | 0.0138 | 0.9877 | 0.9857 | −0.0527 | 0.9747 | 0.9854 |
| 2a | 27,628 | 0.0149 | 0.9887 | 0.9859 | −0.018 | 0.9836 | 0.9857 |
| 2b | 27,628 | 0.0342 | 0.9612 | 0.9565 | 0.2918 | 1.0116 | 0.9552 |
| 4a | 27,611 | 0.0174 | 0.9858 | 0.9853 | −0.0386 | 1.017 | 0.9853 |
| 4b | 27,611 | 0.0111 | 0.9927 | 0.9906 | 0.1851 | 1.026 | 0.9919 |
| 3OUT a | 27,611 | 0.0184 | 0.9925 | 0.9926 | 0.0491 | 0.9408 | 0.9931 |
| 3OUT b | 27,611 | 0.0237 | 0.9932 | 0.9932 | 0.0491 | 0.9408 | 0.9931 |
| 5a | 24,723 | 0.0095 | 0.9905 | 0.9905 | −0.1325 | 1.004 | 0.9905 |
| 5b | 24,723 | 0.0133 | 0.9883 | 0.9883 | −0.1175 | 0.9925 | 0.9875 |
| 6a | 24,723 | 0.0329 | 0.9858 | 0.9858 | −0.1124 | 0.9727 | 0.9897 |
| 6b | 24,723 | 0.0107 | 0.9898 | 0.9897 | 0.1864 | 1.0035 | 0.9798 |
| mean | | 0.0176 | 0.9871 | 0.9861 | 0.0130 | 0.9885 | 0.9855 |
| SD | | 0.0084 | 0.0085 | 0.0097 | 0.1425 | 0.0274 | 0.0103 |
| SE | | 0.00243 | 0.00246 | 0.00280 | 0.04115 | 0.00792 | 0.00297 |
| RSD | | 0.4781 | 0.0086 | 0.0099 | 10.9787 | 0.0278 | 0.0105 |
| RSE | | 0.1380 | 0.0025 | 0.0028 | 3.1693 | 0.0080 | 0.0030 |
| RSE (%) | | 13.8% | 0.3% | 0.3% | 317.0% | 0.8% | 0.3% |
Table 10. Best-fit estimates for parameters a, c, and d and regression results (intercept, slope, R2) for the individual models on observed values of CF_1 PM2.5 from the period 22 September 2023 to 7 December 2023 at the Redwood City site.

| Sensor | a | c | d |
|---|---|---|---|
| 7a | 0.004836 | 0.085754 | −0.823199 |
| 7b | 0.004786 | 0.088221 | −0.737484 |
| 8a | 0.004441 | 0.091017 | −0.776271 |
| 8b | 0.003503 | 0.109911 | −0.825874 |
| 9a OUT | 0.00449 | 0.08056 | −1.61195 |
| 9b OUT | 0.00487 | 0.08143 | −1.05379 |
| mean | 0.004488 | 0.089482 | −0.971428 |
| SD | 0.000515 | 0.010767 | 0.332573536 |
| SE | 0.00021 | 0.004396 | 0.135772578 |
| RSD | 0.115 | 0.120 | −0.342 |
| RSE | 0.047 | 0.049 | −0.140 |
| RSE (%) | 4.7% | 4.9% | 14% |
Table 11. Regression parameters (intercept, slope, R2) for the 2023 data from Redwood City.

| Sensor | Intercept (Indiv.) | Slope (Indiv.) | R2 (Indiv.) | Intercept (Gen.) | Slope (Gen.) | R2 (Gen.) |
|---|---|---|---|---|---|---|
| 7a | 0.0114 | 0.9949 | 0.9949 | −0.1413 | 0.9932 | 0.9995 |
| 7b | 0.0133 | 0.995 | 0.9953 | −0.1942 | 0.9502 | 0.9993 |
| 8a | 0.0266 | 0.9905 | 0.9923 | −0.04 | 0.994 | 0.9856 |
| 8b | 0.1107 | 0.962 | 0.9611 | 0.0513 | 0.9892 | 0.9907 |
| 9a OUT | 0.0568 | 0.991 | 0.991 | 0.7474 | 1.0646 | 0.9967 |
| 9b OUT | 0.0202 | 0.9972 | 0.9972 | −0.3219 | 1.0242 | 0.9979 |
| mean | 0.039833 | 0.988433 | 0.988633 | 0.016883 | 1.002567 | 0.99495 |
| SD | 0.038427 | 0.013201 | 0.013669 | 0.380112 | 0.038466 | 0.005601 |
| SE | 0.015688 | 0.005389 | 0.00558 | 0.15518 | 0.015704 | 0.002287 |
| RSD | 0.965 | 0.013 | 0.014 | 22.514 | 0.038 | 0.006 |
| RSE | 0.394 | 0.005 | 0.006 | 9.191 | 0.016 | 0.002 |
| RSE (%) | 39.0% | 0.5% | 0.6% | 919% | 1.6% | 0.2% |
Table 12. Comparison of the number of observations lost for the two algorithms due to poor precision.

| Monitor | Total N | CF_1: N with Adequate Precision | CF_1: Fraction | pm2.5 alt: N with Adequate Precision | pm2.5 alt: Fraction |
|---|---|---|---|---|---|
| 1 | 39,012 | 8,527 | 0.22 | 38,489 | 0.987 |
| 2 | 39,011 | 19,008 | 0.49 | 37,895 | 0.971 |
| 3 | 38,993 | 25,308 | 0.65 | 38,789 | 0.995 |
| 4 | 39,012 | 19,892 | 0.51 | 37,795 | 0.969 |
| 5 | 39,002 | 20,098 | 0.52 | 38,446 | 0.986 |
| 6 | 39,002 | 20,716 | 0.53 | 38,128 | 0.978 |
| 7 | 38,290 | 28,280 | 0.74 | 38,128 | 0.996 |
| 8 | 38,896 | 23,595 | 0.61 | 35,252 | 0.906 |
| 9 | 38,290 | 17,439 | 0.46 | 32,689 | 0.854 |
Table 13. Measured precision using the CF_1 and pm2.5 alt algorithms for six PA-II monitors in Santa Rosa during 2023.

| | Valid N | Mean | Median | Upper Quartile | 90th Percentile |
|---|---|---|---|---|---|
| 1 precision pm2.5 alt | 39,012 | 0.050 | 0.039 | 0.067 | 0.11 |
| 2 precision pm2.5 alt | 39,011 | 0.059 | 0.043 | 0.081 | 0.13 |
| 4 precision pm2.5 alt | 39,012 | 0.062 | 0.047 | 0.084 | 0.13 |
| 3 OUT precision pm2.5 alt | 38,993 | 0.048 | 0.038 | 0.067 | 0.10 |
| 5 precision pm2.5 alt | 39,002 | 0.053 | 0.040 | 0.071 | 0.11 |
| 6 precision pm2.5 alt | 39,002 | 0.053 | 0.037 | 0.068 | 0.11 |
| 1 precision CF_1 | 32,650 | 0.53 | 0.51 | 0.94 | 1.00 |
| 2 precision CF_1 | 29,103 | 0.26 | 0.11 | 0.32 | 1.00 |
| 4 precision CF_1 | 30,282 | 0.26 | 0.11 | 0.32 | 1.00 |
| 3 OUT precision CF_1 | 36,584 | 0.21 | 0.13 | 0.23 | 0.45 |
| 5 precision CF_1 | 29,618 | 0.24 | 0.09 | 0.29 | 1.00 |
| 6 precision CF_1 | 30,060 | 0.23 | 0.08 | 0.27 | 1.00 |
Table 14. Comparison of general model parameters for four new datasets with the original finding in [26].

| Site | Year | N obs | a | c | d |
|---|---|---|---|---|---|
| Santa Rosa 1 | 2020 | 134,800 | 0.0042 | 0.10 | −1.1 |
| Redwood City | 2021–2022 | 78,900 | 0.0045 | 0.09 | −1.0 |
| Santa Rosa | 2021 | 52,400 | 0.0040 | 0.11 | −1.0 |
| Redwood City | 2022–2023 | 39,900 | 0.0045 | 0.09 | −1.0 |
| Santa Rosa | 2023 | 24,700 | 0.0040 | 0.10 | −0.8 |

1 From [26].