Article

Predicting Food-Safety Risk and Determining Cost-Effective Risk-Reduction Strategies

by William E. Nganje 1,*, Linda D. Burbidge 2, Elisha K. Denkyirah 3 and Elvis M. Ndembe 4

1 Department of Agribusiness and Applied Economics, North Dakota State University, Fargo, ND 58108-6050, USA
2 Dakota College, Bottineau, ND 58318-1198, USA
3 Department of Agricultural and Applied Economics, Texas Tech University, Lubbock, TX 79409, USA
4 Department of Management, College of Business Administration, University of Nebraska Omaha (Scott Campus), Omaha, NE 68106, USA
* Author to whom correspondence should be addressed.
J. Risk Financial Manag. 2021, 14(9), 408; https://doi.org/10.3390/jrfm14090408
Submission received: 4 August 2021 / Revised: 20 August 2021 / Accepted: 21 August 2021 / Published: 1 September 2021
(This article belongs to the Section Applied Economics and Finance)

Abstract

Food safety is a major risk for agribusiness firms. According to the Centers for Disease Control and Prevention (CDC), approximately 5000 people die and 36,000 people are hospitalized annually as a result of foodborne outbreaks in the United States. Globally, the death estimate is about 42,000 people per year. A single outbreak can cost a particular segment of the food industry hundreds of millions of dollars in recalls and liability; such losses may amount to billions of dollars annually. Despite U.S. advancements and regulations intended to reduce food-safety risk, such as pathogen reduction/hazard analysis critical control points (PR/HACCP) in 1996 and the Food Safety Modernization Act in 2010, retail meat facilities continue to experience recalls and major outbreaks. We developed a stochastic-optimization framework and used stochastic-dominance methods to evaluate the effectiveness of three strategies used by retail meat facilities. Copula value-at-risk (CVaR) was utilized to predict the magnitude of the risk exposure associated with alternative, cost-effective risk-reduction strategies. The results showed that optimal retail-intervention strategies vary by meat and pathogen type, and that a single Salmonella performance standard for PR/HACCP could be inefficient for reducing other pathogens and food-safety risks.

1. Introduction

In 1996, the United States Department of Agriculture, Food Safety Inspection Service (USDA-FSIS) introduced new, mandatory food-safety regulations following repeated discoveries of E. coli and Salmonella in the U.S. food supply chain during the 1980s and early 1990s. The new regulations, pathogen reduction/hazard analysis critical control points (PR/HACCP), mandated the creation of critical control points (CCPs) for food production and processing operations and also established testing routines for food products in order to ensure the safety of meat and poultry products. By 2000, these regulations had been adopted by meat and poultry processors. Pathogen levels decreased after the adoption of mandatory PR/HACCP for meat and poultry processing (CDC FoodNet, as reported by Marler 2010a). The CDC report showed a 30%, 9%, 32%, and 29% reduction in Campylobacter, Salmonella, Listeria, and E. coli O157, respectively, over time. Vibrio, a bacterial pathogen commonly contracted by eating raw or improperly prepared seafood, saw a 41% increase during the same time period. Although most bacterial pathogens have been decreasing since 1996, the prevalence of viral pathogens has been increasing, despite additional regulations under the 2010 Food Safety Modernization Act. However, there have been more outbreaks at retail meat facilities (CDC FoodNet, as reported by Marler 2010a), with increases in the magnitude of multi-state outbreaks.
Currently, food-service and retail meat facilities are implementing various control measures, including PR/HACCP. These interventions fall into three broad strategy categories: (1) USDA-Food and Drug Administration (FDA) verification, (2) contracting with an external firm, and (3) PR/HACCP. It should be noted that all three strategies require standard operating procedures (SOPs) for hygiene. The first strategy requires the USDA-FDA to conduct random checks and pathogen testing; the second strategy involves engaging an external firm (e.g., Fresh Check) to carry out the random checks, to conduct pathogen testing, and to track progress; and the third strategy requires the establishment to have a functioning PR/HACCP plan with critical control points and pathogen testing. Given public concerns about overall food safety, the question is whether mandatory regulations at the retail level are cost-effective and efficient for reducing food-safety risks. In this study, we developed a stochastic-optimization framework and used stochastic-dominance methods to evaluate the effectiveness of these three strategies when they were implemented at retail meat facilities. Copula value-at-risk (CVaR) was utilized to predict the magnitude of loss with alternative, cost-effective risk-reduction strategies for different meat types.
In this study, cost-effectiveness is defined as the optimal point at which additional expenditures to reduce pathogen prevalence will have minimal food-risk-reduction effects. It is hypothesized that successfully implementing PR/HACCP is the most cost-effective and risk-reducing strategy for retail meat facilities. A major contribution of this study is to provide a framework to simultaneously evaluate the cost-effectiveness and the risk-reduction capabilities of alternative food-safety risk-mitigation strategies. Another contribution is to facilitate efficient food-safety policy design at retail meat facilities. To the best of our knowledge, no study has used stochastic-dominance methods to evaluate the effectiveness of the three strategies implemented at retail meat facilities, or used CVaR to predict the magnitude of loss under alternative, cost-effective risk-reduction strategies for different meat types.
This study aims to assess the various risk-reduction strategies implemented at retail meat facilities and to determine the cost-effective strategy to be implemented at the retail level to reduce pathogen levels in meat and poultry products. Although PR/HACCP has been implemented at meat processing facilities to reduce pathogen levels in meat, the strategies implemented at the retail level to reduce food-safety risk have not been significantly assessed. We show that risk is reduced as tolerance levels are tightened. However, very low tolerance levels could also induce higher implementation costs, making a particular strategy cost-ineffective and pushing retail firms out of business. Rather than maintaining one performance standard (e.g., Salmonella in PR/HACCP), beef, chicken, and pork call for E. coli testing at the retail level. Further, retail firms could significantly reduce food-safety risk by implementing all three pathogen-reduction strategies (USDA-FDA verification, contracting with an external firm, and PR/HACCP), improving both profitability and consumer safety.

2. Literature Review

A leading cause of global mortality is foodborne disease, which is estimated to cause about 600 million illnesses and 42,000 deaths annually (Havelaar et al. 2015). Foodborne illnesses are a result of pathogens (including viruses, fungi, bacteria, worms, protozoa, etc.); chemicals (e.g., pesticides, food additives, herbicides, etc.); and objects, such as glass, metal, and wood, contaminating food for human consumption (Pouliot and Wang 2018). Moon and Tonsor (2020) note that foodborne illnesses have individual and societal costs, including hospitalizations and deaths. Due to these expenses, scholars and policy makers have focused particular interest on minimizing food-safety risks.
Numerous studies have examined the cost of and quality-of-life loss associated with foodborne illnesses. Minor et al. (2015) estimate that the annual social-welfare cost of foodborne illnesses is between USD 14 billion and USD 72 billion, while the cost per illness is USD 3630, on average. Another study estimates that the annual cost of health issues arising from foodborne illnesses in the U.S. is about USD 90 billion (Scharff 2018). Meat and poultry products are the major sources of foodborne illnesses (Heredia and García 2018). About a third (30.9%) of all foodborne illnesses result from the consumption of meat and poultry products. This estimate yields 2.9 million illnesses annually, as well as an economic cost of USD 20.3 billion. Among meat contaminations, key pathogens include E. coli O157, Listeria, Salmonella, and Campylobacter (Shang and Tonsor 2017). Campylobacter spp. in poultry constitutes USD 6.9 billion of these costs; Salmonella spp. in chicken and pork represents USD 2.8 billion and USD 1.9 billion, respectively; and Toxoplasma gondii in pork represents USD 1.9 billion. The data reveal a decline in the share of foodborne illnesses attributable to meat and poultry, from 48% in 1998 to 34% in 2017, although the figures suggest that there is still much work to be done (Scharff 2020).
When food-safety outbreaks occur, state and public health organizations, including the USDA-FSIS and the CDC, assess the source to determine whether the outbreaks are due to meat or poultry. The resulting actions may include product recalls and public-health alerts (Robertson et al. 2016). A food recall occurs when a producer or government agency removes a food product from the market or shelves due to suspected foodborne outbreaks. Recalls can occur due to pathogen contamination; undeclared allergens; mislabeling; and the presence of foreign materials, such as plastics and metals. Food recalls are aimed at removing potentially adulterated or misbranded products from the shelves, protecting public health, ensuring fair trade, and mitigating economic consequences (Gorton and Stasiewicz 2017; Jianbin and Hooker 2019; Moon and Tonsor 2020).
Despite policy and strategy advancements, significant food-safety problems persist. During the 2004–2013 period, 4900 food-product recalls were supervised by the USDA-FDA and USDA-FSIS (Page 2018). Between 1994 and 2016, a total of 690 million pounds of products were recalled (Gorton and Stasiewicz 2017; Ollinger and Houser 2020).1 From 2014 to 2018, 622 meat and poultry products were recalled, which equates to about 140 million pounds. About 90% of these recalls were identified as a class I health hazard. Among the foodborne pathogens leading to the recalls, Listeria accounted for 9.65% of the total amount, followed by E. coli (7.07%) and Salmonella (2.57%). Poultry and beef were the most frequently recalled items (a total of 330 recalls, equivalent to 58,657,233 pounds of product).2 In 2019, 124 meat and poultry products, representing close to 20 million pounds of food, were recalled, with 96% of the recalls classified as a class I health hazard. Among foodborne pathogens, E. coli was the leading cause of recalls (5.65%), followed by Listeria (4.84%) and Salmonella (2.42%) (USDA-FSIS 2020).
Food recalls can cause significant economic losses (Pozo and Schroeder 2016; Shang and Tonsor 2017), and downstream agents, such as retailers (including supermarkets and grocery stores), are more likely to see abnormal returns as a result of recalls than the upstream agribusiness agents, such as cattle ranches and feedlots (Moon and Tonsor 2020).3 The literature reveals that meat and poultry products have the highest risk and constitute a major commodity for food-safety studies.

3. Conceptual and Theoretical Framework

Food safety is a major risk for agribusiness firms. Outbreaks continue to increase, and global foodborne illness is on the rise. Understanding food-safety risk is important to help reduce foodborne illness and to ensure consumer food safety. An important task for the food industry is to ensure the best food quality and safety for the public. Because food-safety hazards arise at every stage of the food supply chain, effective and efficient risk-reduction strategies need to be implemented throughout the chain, including at the retail level (Liu et al. 2021). At the retail level, three food-safety risk-reduction strategies (USDA-FDA verification, PR/HACCP, and contracting with an external firm) have been implemented to ensure food safety. Studies reveal a linkage between risk-reduction strategies, food safety, and firm performance. Liu et al. (2021) show that food-safety risk-reduction strategies support both food safety and firm performance. Minor and Parrett (2017) find that cost-effective risk-reduction strategies not only ensure food safety but also reduce costs. Nganje et al. (1999) showed that HACCP, as a risk-reduction strategy, reduces food-safety risk and improves a firm’s profitability. Firms have the social responsibility of ensuring food safety by following the necessary procedures established by the food-safety authorities (Hung et al. 2019; Maloni and Brown 2006; Piacentini et al. 2000). In addition, firms’ social responsibility should incorporate food-safety measures beyond the baseline requirements demanded by regulation or governmental policy.
In this paper, we developed a stochastic-optimization model to determine the optimal testing and sampling intensity for alternative tolerance levels, as well as the probability of contamination, by using survival analysis at the retail level. This theoretical framework has two major components: quality loss, i.e., the supply-demand effect; and the cost of intervention, i.e., the testing and sampling costs. The quality-loss costs cover expenditures associated with ensuring that products conform to food-safety quality specifications. The conformance costs include prevention and appraisal, while the nonconformance costs incorporate the price of internal and external failure. Internal failure occurs when Salmonella (the only pathogen performance standard under PR/HACCP) or any of the other pathogen levels are higher than the performance standard (e.g., 49% prevalence for ground turkey, 0% prevalence for ready-to-eat meat). The model is consistent with the quality-loss framework (Prevention–Appraisal–Failure) that has been used extensively in the quality-management literature (see, for example, the Taguchi (1986) loss function). The loss function is a financial measure of user dissatisfaction with a product’s performance as it deviates from a target safety value. As the sampling intensity increases, the testing and sampling costs accumulate.
The main costs associated with pathogen testing are the price of the test itself, labor, and utility fees. These expenditures are the direct costs of intervention. The indirect costs are incurred when a product lot tests positive and must be rejected: the firm has purchased a product (e.g., a fabricated carcass), finds it to be contaminated, and must discard it. Direct and indirect cost components are simulated with firm-level microbial data at the retail level by using stochastic-optimizer software, with the objective function being a net-revenue function and the choice variables being the testing intensity and the sampling decision.
The marginal probability of contamination was estimated for each meat product by utilizing a risk extreme value (RiskExtValue) distribution and a stochastic simulation. Pathogen levels at the final stage (on the shelf) were set equal to a function of the pathogen levels for each meat product multiplied by a survival function. At the retail level, a survival analysis was performed to identify the pathogen-survival parameters. The survival analysis characterized the pathogens’ exposure and infectious doses. Contamination data for ready-to-eat meats were used to estimate the probability of pathogen survival. When the final product’s pathogen level is less than the performance standard, no violations occur; otherwise, if this level is greater than the performance standard, a violation has arisen.
When pathogen testing is done and the probability of contamination is greater than zero, benefits result from utilizing a risk-reduction strategy. The risk reduction’s value could be greater than the total revenue because recall costs include the total value of production, the loss of the market-share value, and the liability payments. The value for the risk reduction is an additional benefit that accrues for a retail firm which tests for a pathogen and implements a specific intervention strategy. The risk-reduction measure compensates the firm when it tests for pathogens at the CCPs and implements control measures if the performance standards are violated. Therefore, the value of risk reduction is a measure of the benefit derived from not shutting down the facility due to preventable food-safety risks and outbreaks. Hence, the value of risk reduction is a function of the decision to test and the sampling intensity. The value of risk reduction estimates the portion of the total revenue that is retained at each CCP when an outbreak is prevented.
The stochastic-optimization model is used to determine the optimal intervention for each strategy (when to test and at what sampling intensity). To effectively compare cost-effectiveness and risk preference across the optimal strategies, stochastic dominance was used. Stochastic dominance is a method that allows decision makers to rank alternative strategies while maximizing utility subject to their risk preferences. The technique incorporates the firm’s preference for alternative strategies by utilizing a risk-aversion coefficient. There are several different types of stochastic dominance, but in this study, stochastic dominance with respect to a function was used. First-degree stochastic dominance uses the decision criterion that more return is better than less as the main factor for comparing risky outcomes. Second-degree stochastic dominance utilizes expected returns and standard-deviation criteria, and it is comparable to a mean-variance efficient set. Stochastic dominance with respect to a function uses mean returns, variance, and preferences when comparing and ranking risky outcomes. Stochastic dominance allows researchers to rank the strategies in order to determine which one is the most cost-effective. CVaR predicts the magnitude of the loss and adds a further dimension for ranking with stochastic dominance. CVaR also incorporates linkages, or the correlation between alternative distribution functions, adding accuracy to the approximation.
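To make the ranking step concrete, the following is a minimal sketch of a first-degree stochastic-dominance check between two strategies’ simulated net benefits. The distributions and values are hypothetical, and the study’s analysis uses the more general stochastic dominance with respect to a function (with risk-aversion coefficients) rather than this simple first-degree test:

```python
import numpy as np

def first_degree_dominates(a, b, grid_size=200):
    """Check whether strategy `a` first-degree stochastically dominates `b`:
    the empirical CDF of `a` lies at or below that of `b` everywhere."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.all(cdf_a <= cdf_b)

rng = np.random.default_rng(42)
nb_strategy_3 = rng.normal(12_000, 3_000, 10_000)  # hypothetical net-benefit draws
nb_strategy_1 = rng.normal(10_500, 3_000, 10_000)  # hypothetical net-benefit draws
print(first_degree_dominates(nb_strategy_3, nb_strategy_1))
```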

4. Data and Assumptions

To analyze the cost-effectiveness of the three alternative strategies, data on pathogen prevalence (e.g., E. coli, Salmonella, and Campylobacter) for beef, chicken, turkey, and pork products were collected at retail meat facilities in North Dakota and Minnesota. The sample products were randomly purchased and analyzed. The retail outlets were also asked to complete a brief survey in order to determine which strategy they implemented, along with the associated costs, pathogen-testing requirements, and other store characteristics (e.g., sales volume, prices, etc.). The microbial data were used to determine the prevalence and probability of pathogen contamination for each strategy. Although most retail firms did not use microbial testing, their employees were trained in PR/HACCP-based programs to help prevent and reduce pathogen growth and contamination.
The sampled meats were whole chickens, beef cuts, pork cuts, whole turkeys, turkey cuts, and ground turkey. Various brands, including store brands, were purchased and tested by utilizing simple microbial swabs and counts. All products were raw and unfrozen, and they had no additives (i.e., spices, marinades, etc.) of any kind. Each store was visited for a five-day period to collect samples. The stores were visited randomly. Meat products were purchased at the store and then transported to the lab under the store’s temperature conditions by using ice. Processing of each sample began immediately upon arrival at the lab.
The meat sample’s distribution showed that, of the total 456 meats, 133, 123, 113, and 87 were from beef, chicken, pork, and turkey products, respectively. We assumed that a ground product posed a higher risk of foodborne illness because it requires more handling (grinding, processing, etc.) relative to cut meats. For ground products, 27%, 31%, and 21% were from beef, turkey, and pork, respectively.

4.1. Distribution of Risk Parameters

The distributions of retail and wholesale prices were fitted using BestFit with monthly prices from 1990 to 2020. Before calculating the quality-loss component, the probability of pathogen contamination at a given critical limit or tolerance level was calculated for each of the three pathogens, and the contamination risk was determined using distribution functions. Because of the relatively small number of product samples that were tested for the three pathogens, the contamination data collected from the retail outlets formed the basis for 10,000 simulated draws for each meat and pathogen type, following a binomial distribution which depicted the presence or absence of each pathogen. At five different tolerance levels (29%, 15%, 10%, 5%, and 1%), the probability of contamination was estimated as $\theta_i = n_i / 10{,}000$, where $n_i$ is the number of positive tests and $i$ denotes the type of pathogen.
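A minimal Python sketch of this simulation step follows. The batch size and observed prevalence are hypothetical placeholders, and treating each draw as a batch whose positive share is compared against the tolerance levels is one plausible reading of the procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
N_DRAWS = 10_000
BATCH = 30          # hypothetical number of samples per simulated batch
PREVALENCE = 0.12   # hypothetical observed share of positive retail samples

# 10,000 simulated batches of binomial (present/absent) test outcomes.
positives = rng.binomial(n=BATCH, p=PREVALENCE, size=N_DRAWS)

# theta_i = n_i / 10,000: share of simulated draws violating each tolerance level.
for tol in (0.29, 0.15, 0.10, 0.05, 0.01):
    n_i = np.sum(positives / BATCH > tol)
    print(f"tolerance {tol:.0%}: theta = {n_i / N_DRAWS:.4f}")
```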
To account for the fact that retail products can be improperly handled, thus increasing the risk of a foodborne-illness outbreak, a survival function was used to provide a more accurate representation of pathogen prevalence and food-safety risk. An exponential probability distribution was utilized to model survival rates. The exponential distribution is a continuous distribution that is useful when calculating the area under a curve corresponding to some interval of time; the calculation provides the probability that the random variable will take on a certain value (for instance, the number of positive Salmonella samples during the shelf life as a function of the average number of positive samples for the interval). The survival probability of the exponential random variable is $P(x \ge x_0) = e^{-x_0/\mu}$, where $\mu$ is the average number of occurrences in an interval, $e$ is Euler’s number, $x$ is the number of occurrences in the interval, and $x_0$ is the value of interest. In this case, $x_0$ is the number of occurrences that would violate the tolerance level (i.e., 29% positive tests from the 10,000 samples).
In this study, we assumed that pathogen presence and pathogen growth are independent events prior to the CVaR analysis because a product can test negative for pathogens but still harbor pathogen cultures that will multiply if exposed to ideal growth conditions. The probability of contamination is given by $P(A \cap B) = \theta_i \, P(x \ge x_0)$, where $\theta_i$ and $P(x \ge x_0)$ are as previously defined.
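Continuing the sketch, the survival adjustment and the joint contamination probability follow directly from these formulas; all numeric values below are hypothetical:

```python
import math

MU = 1_800   # hypothetical average number of positive occurrences per interval
X0 = 2_900   # occurrences that would violate a 29% tolerance on 10,000 samples

# P(x >= x0) = exp(-x0 / mu): probability that growth pushes the count past x0.
p_survival = math.exp(-X0 / MU)

# Independence of presence and growth: P(A and B) = theta_i * P(x >= x0).
theta_i = 0.6967  # hypothetical theta from the binomial simulation above
p_contamination = theta_i * p_survival
print(f"P(A and B) = {p_contamination:.4f}")
```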

4.2. Quality Loss

A quality-loss function was used to estimate the quality loss due to violating the performance standards. Quality loss can occur at any point along the processing, retailing, and consumption continuum. A Taguchi loss function with smaller-is-better characteristics was utilized to calculate the quality loss. The Taguchi loss function establishes a financial measure for the user’s dissatisfaction with a product’s performance when that performance deviates from a target, in this case the tolerance level. The loss function is defined as $L = (A_0 / \Delta_0^2)\,\sigma^2$, where $L$ is the quality loss, $A_0$ is the welfare loss when the tolerance limit is violated, $\Delta_0$ is the tolerance limit, and $\sigma^2$ measures the variance of the product’s quality. In these smaller-is-better models, variance is sometimes measured as a deviation from the target. Because the data were generated based on a binomial distribution (pathogen present or absent), the variance was calculated by utilizing the formula for binomial distributions. The loss to society is composed of costs incurred by the retail firm and the customer. The firm is exposed to rejection costs, the loss of future business, etc., while the consumer is exposed to foodborne illness and death. Quality deviations from the target value of zero represent an implicit cost to the system; therefore, even shipments with minimal microbial pathogen content incur some quality loss.
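A small illustration of the smaller-is-better Taguchi calculation, taking $\sigma^2$ as the binomial variance of the prevalence estimate; all inputs are hypothetical:

```python
def taguchi_loss(a0, tolerance, theta, n):
    """Smaller-is-better Taguchi loss L = (A0 / Delta0**2) * sigma**2,
    with sigma**2 taken as the binomial variance of a prevalence estimate."""
    sigma_sq = theta * (1 - theta) / n  # binomial variance of a proportion
    return (a0 / tolerance**2) * sigma_sq

# Hypothetical inputs: USD 150,000 welfare loss on violation, 10% tolerance,
# 20% estimated prevalence, 456 product samples.
loss = taguchi_loss(a0=150_000, tolerance=0.10, theta=0.20, n=456)
print(f"quality loss = USD {loss:,.2f}")
```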
The welfare loss when the tolerance limit is violated comprises three major components. The first component is the loss from decreasing demand when an outbreak occurs. Empirical evidence from Kay (2003) shows that decreasing demand is the most important component of the loss because it represents about 60% of the total loss that a firm can incur. The second component is the loss due to the decreased market price. Studies have found a positive relationship between consumer perceptions about product quality and a product’s price (Grewal et al. 2003; Kerin et al. 1992). Thus, an outbreak that creates perceptions of poor food-safety quality could substantially lower prices for the affected products. This price decrease represents about 4.2% of the total cost when there is an outbreak (Kay 2003). The last component is the cost of the recall. Overall, product profitability may be influenced by the consumer’s overall product evaluation, which spans nutrition, food safety, and a host of other variables. A positive image of the food product will enhance profitability via increased demand (Burton et al. 2009). When there is an outbreak, the firm may recall all of that day’s shipment, estimated as the total revenue (TR) for that day.
The welfare loss, $A_0$, is an additive function of the recall’s effect on consumer demand ($D$), its influence on the meat’s price ($P_m$), and the total revenue (TR). The TR components of total output and price were modeled as stochastic variables. The total output was based on data collected from the survey and was modeled as a risk-triangular distribution, with USD 156,250 as the high value, USD 75,000 as the low value, and USD 98,125 as the most likely value. The price was simulated by taking the average monthly prices of each meat type for the years 1990 to 2020 and fitting those numbers to lognormal distributions. Each meat type had a different distribution. The model assumes that, if a test is made with a sampling intensity of at least two samples (the minimum number of samples required to be taken per critical control point, CCP), the potential quality loss is reduced by 50%. This reduction reflects a cornerstone assumption of this study: PR/HACCP is at least 50% effective in reducing pathogen levels. FoodNet data revealed that pathogen levels decline significantly after PR/HACCP implementation. Assumptions about the effectiveness of PR/HACCP were reported by Antle (2000), Knutson et al. (1995), and Marler (2010b). Antle (2000) simulated safety levels ranging from 50% to 90%. In addition, the USDA-FSIS assumed 10% to 100% effectiveness for PR/HACCP as a basis for its regulatory impact assessment.

4.3. Value of Risk Reduction

When microbial testing is done by an agency and the probability of contamination is greater than zero, benefits result from risk reduction. The risk reduction’s value could be greater than the total revenue because recall costs include the shipment’s total value, the loss of market-share value, and liability payments. The value of risk reduction is an additional benefit for a firm that tests for pathogens and implements a specific intervention strategy. The value of risk reduction is a measure of the benefit the company derives from not shutting down due to an outbreak of a particular pathogen. Hence, the risk-reduction value is a function of the testing decision, the sampling intensity and the portion of the total revenue that is retained when an outbreak is prevented. The risk-reduction value is mathematically defined as follows:
$\pi_i = \theta_i (TR)\, \beta_i$   (1)
where $\pi$ is the value of risk reduction and $\beta \in \{0, 1\}$ is a binary testing-decision variable, with 1 denoting the optimal decision to test for pathogens and 0 otherwise.

4.4. Testing Costs

Testing for pathogens occurs randomly, at various times, under each strategy. Testing may be done at different intensity levels (number of samples) or different tolerance levels (the number of pathogens at which the product is still considered safe for human consumption). These testing costs are measured for each strategy. Conventional wisdom is that higher sampling intensities and testing decrease the probability of producing and selling contaminated food products.
Testing costs include three major components: the utilities cost for each strategy, the associated labor costs, and the cost of pathogen testing in laboratories outside the retail firm. Survey findings reveal that, on average, the labor cost for different types of pathogen testing was USD 14 per test. However, labor costs can vary between USD 8 and USD 20 per test. Hence, labor costs are represented as a risk-triangular distribution in the model because the USDA-FDA inspection agents may require more testing if food-safety problems persist. The cost of utilities for each strategy is assumed to be fixed at USD 36 per test. The cost of Salmonella and Campylobacter testing can vary with the type of test used, ranging between USD 10 and USD 14 per test, and is represented by a risk-uniform distribution.
The cost of E. coli testing can vary from USD 100 to USD 200 per test, depending on the type of test, with the average price being USD 150. Like the labor costs, the Salmonella, Campylobacter, and E. coli testing costs are represented by stochastic variables. E. coli testing costs are assumed to follow a risk-triangular distribution, with USD 100 as the lowest cost, USD 200 as the highest possible cost, and USD 150 as the most likely testing cost. The total testing costs, $C$, for each pathogen type are estimated by using the following equation:
$C_i = (L_i + U_i + T_i)\, n_i \beta_i$   (2)
where L is the labor cost for collecting and preparing product samples; U is the utilities’ cost; T is the cost of pathogen testing; and n , i , and β are as previously defined.

4.5. Total Economic Costs

The total economic costs associated with the retail meat sector are composed of the value of risk reduction, the testing and sampling costs, and the quality loss. The direct cost components include testing, utilities, and labor costs. The indirect cost component accounts for the quality loss incurred when there is a violation of the tolerance level. The value of risk reduction is considered a benefit in this study because it is the cost avoided when there is adequate pathogen testing and an intervention strategy. Hence, the total system cost, $TC$, is defined as follows:
$TC = L_i + C_i - \pi_i$   (3)
A net-benefit function can be developed by subtracting Equation (3), as well as the product input costs and the fixed costs of alternative strategies, from the total revenue for the particular product. Hence, the net benefit function is
$NB(\beta, n) = pY - TC(\beta, n)$   (4)
where $p$ is the product price, $Y$ is the total product output, and $n$ and $\beta$ are as previously defined.
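Equations (1)–(4) chain together into a simple net-benefit calculation. The sketch below wires them up with hypothetical inputs drawn from the cost ranges reported above; here the $L_i$ in Equation (3) denotes the quality loss:

```python
def risk_reduction_value(theta, total_revenue, beta):
    """Equation (1): pi_i = theta_i * TR * beta_i."""
    return theta * total_revenue * beta

def testing_cost(labor, utilities, test_price, n, beta):
    """Equation (2): C_i = (L_i + U_i + T_i) * n_i * beta_i."""
    return (labor + utilities + test_price) * n * beta

def net_benefit(price, output, quality_loss, theta, total_revenue,
                labor, utilities, test_price, n, beta):
    """Equation (4): NB = p*Y - TC, with TC = L_i + C_i - pi_i (Equation (3))."""
    tc = (quality_loss
          + testing_cost(labor, utilities, test_price, n, beta)
          - risk_reduction_value(theta, total_revenue, beta))
    return price * output - tc

# Hypothetical one-batch evaluation: decide to test (beta = 1) with n = 2 samples.
nb = net_benefit(price=3.50, output=28_000, quality_loss=8_000,
                 theta=0.10, total_revenue=98_125,
                 labor=14, utilities=36, test_price=150, n=2, beta=1)
print(f"net benefit = USD {nb:,.2f}")
```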

4.6. The Stochastic Optimization Model and the Risk Premium

The risk premium measures the difference between the expected value of the net benefit and its associated certainty equivalent. Based on the expected-utility concept, risk averters prefer a certain return to a risky investment with an uncertain but equal expected return. If we define the certainty equivalent as the amount of money that makes the risk-averse decision maker indifferent between the certain cash and the gamble whose expected monetary value equals that certain cash, then the risk premium is the additional amount required to compensate the risk-averse decision maker for taking on risk and not implementing a food-safety measure at the retail facility. The effect of the market risk is captured with an expected-utility model. Following Pratt (1964), the risk premium is the difference between the expected value and the certainty equivalent. The risk premium is a function of the risk-aversion level, measured by the utility function’s curvature, and the risk level.
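As an illustration, the risk premium can be recovered from simulated net benefits by inverting the utility function at the mean utility. The sketch below uses the expo-power form introduced in this section, with the parameter values reported later ($\lambda = 2$, $\alpha = 0.00005$, $\delta = 0.04$) and hypothetical net-benefit draws:

```python
import numpy as np

LAM, ALPHA, DELTA = 2.0, 0.00005, 0.04  # parameter values reported in the paper

def utility(nb):
    """Expo-power utility U(NB) = lambda - exp(-alpha * NB**delta), NB > 0."""
    return LAM - np.exp(-ALPHA * np.maximum(nb, 1e-9) ** DELTA)

def certainty_equivalent(nb_draws):
    """Invert U at the mean utility: CE = (-ln(lambda - E[U]) / alpha)**(1/delta)."""
    eu = utility(nb_draws).mean()
    return (-np.log(LAM - eu) / ALPHA) ** (1 / DELTA)

rng = np.random.default_rng(5)
nb = rng.normal(90_000, 25_000, 100_000)  # hypothetical net-benefit draws
print(f"risk premium = USD {nb.mean() - certainty_equivalent(nb):,.2f}")
```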
Using three alternative mitigation strategies, a stochastic optimization model was developed for retail meat facilities. The model uses a utility-maximization framework with an expo-power utility function to quantify a risk premium (Saha 1993). The expo-power utility function is a flexible functional form that does not impose any predetermined risk-preference structure on risk attitudes, and may be used to model both absolute and relative risk aversion. The model chooses the optimal testing intensity for each strategy to maximize the firm’s utility. The model assumes a linear net-benefit function that estimates benefits above certain variable costs (testing costs and quality loss). The objective function can be expressed with the following equation:
Maximize $E[U(NB)] = E\!\left(\lambda - e^{-\alpha NB(\beta, n)^{\delta}}\right)$, for all $\delta \neq 0$, $\alpha \neq 0$, $\alpha\delta > 0$,   (5)
subject to:
$0 \le n \le 4$
$\beta \in \{0, 1\}$,
where $\lambda$ is a positive parameter while $\alpha$ and $\delta$ are parameters that affect the absolute and relative risk aversion of the utility function. The first constraint reflects the fact that, with each strategy, a retail facility could be inspected at most four times per month, or once per week. The second constraint is the binary testing-decision variable (1 to test and 0 otherwise).
The expo-power utility function is quasi-concave for all $NB > 0$. Necessary and sufficient conditions for concavity are $\delta - \delta\alpha NB^{\delta} - 1 \le 0$ and $\delta \le 0$, respectively. This function exhibits decreasing absolute risk aversion if $\delta < 1$, constant absolute risk aversion if $\delta = 1$, and increasing absolute risk aversion if $\delta > 1$. To ensure regularity of the utility function, the values for $\lambda$, $\alpha$, and $\delta$ were initially set at 2, 0.00005, and 0.04, respectively. The latter value (0.04) was chosen to confer decreasing absolute risk aversion because many retail facilities are likely to change risk preferences as wealth levels increase. Additional analyses were performed to determine the optimal testing decisions and sampling intensities under constant absolute risk aversion because some retail facilities are conservative and would not change their risk preferences even as wealth levels increase over time.
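A compact sketch of the optimization itself: Monte Carlo expected utility evaluated over the feasible grid $\beta \in \{0, 1\}$, $0 \le n \le 4$. The net-benefit simulator is a hypothetical stand-in for the paper’s simulated revenue, quality-loss, and testing-cost distributions, and it encodes the 50% quality-loss reduction assumed when testing with $n \ge 2$:

```python
import numpy as np

rng = np.random.default_rng(1)
LAM, ALPHA, DELTA = 2.0, 0.00005, 0.04  # parameter values reported in the paper

def expo_power_utility(nb):
    """U(NB) = lambda - exp(-alpha * NB**delta), defined for NB > 0."""
    return LAM - np.exp(-ALPHA * np.maximum(nb, 1e-9) ** DELTA)

def simulated_net_benefit(n, beta, size=10_000):
    """Hypothetical stochastic net benefit for a testing decision (beta, n)."""
    revenue = rng.triangular(75_000, 98_125, 156_250, size)  # survey-based output
    quality_loss = rng.exponential(6_000, size)              # hypothetical scale
    if beta and n >= 2:
        quality_loss *= 0.5          # assumed 50% reduction with minimal testing
    testing = (14 + 36 + 150) * n * beta                     # labor + utilities + test
    risk_reduction = 0.10 * revenue * beta                   # hypothetical theta = 0.10
    return revenue - quality_loss - testing + risk_reduction

# Grid search over the choice variables.
best = max(((beta, n) for beta in (0, 1) for n in range(5)),
           key=lambda d: expo_power_utility(simulated_net_benefit(d[1], d[0])).mean())
print(f"optimal decision: beta = {best[0]}, n = {best[1]}")
```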

5. Results and Discussions

Results for the three pathogens’ contamination probabilities and prevalence were generated assuming tolerance levels of 29%, 15%, 10%, 5%, and 1%. In Table 1, the results revealed that Salmonella contamination was prevalent in turkey at the 5% and 1% tolerance levels; at the 1% tolerance level, Salmonella contamination was prevalent in chicken. Interestingly, beef and pork showed zero probability of Salmonella contamination. Campylobacter exhibited behavior similar to Salmonella, with low probabilities of contamination. Table 1 shows that E. coli was most prevalent in beef (0.6967 at a 29% tolerance level). This finding implies that 69.67% of beef samples will have a positive E. coli prevalence if the performance standard is set at 29%. The results predicted the possibility of E. coli contamination at all tolerance levels and across all meat types. Unlike beef or turkey, chicken displayed probabilities of Campylobacter contamination at the 15%, 10%, 5%, and 1% tolerance levels. Table 1 provides significant insight about the ineffectiveness of using Salmonella as the sole performance standard for PR/HACCP. E. coli and Campylobacter reduction levels did not correlate exactly with the Salmonella reduction level, as hypothesized by the PR/HACCP regulation. This divergence could be a major reason why we continue to witness significant outbreaks and food recalls despite advancements with policies and regulations.
The results in Table 2 revealed that quality-loss estimates increase as tolerance levels are tightened. The highest quality-loss values were found at the 1% tolerance level, an indication that stricter mandatory compliance could lead to increased quality loss and could force retail facilities to shut down or go out of business. Conversely, the quality loss associated with E. coli decreases for all meat types as tolerance levels are loosened. The losses include recall expenses and other liabilities.

5.1. Optimal Intervention Strategies at the Retail Level

The results for Salmonella pathogen contamination showed that testing is optimal at the 5% and 1% tolerance levels. This finding was consistent with the idea that the prevalence of Salmonella contamination is low (possibly from reduced levels at processing facilities; CDC FoodNet, as reported by Marler 2010a). Therefore, it is economically optimal to test at the lowest tolerance levels.
Table 3 shows the results for the stochastic-optimization analysis for E. coli with constant absolute risk aversion (CARA) and decreasing absolute risk aversion (DARA). When utilizing CARA, testing was only performed at the 1% tolerance level; turkey had an optimal strategy of two tests per batch with strategy 3, and chicken had an optimal strategy of two tests per batch with strategy 2. When using DARA, chicken had an optimal strategy of one retail sample per batch at the 1% tolerance level. Turkey had an optimal strategy of two retail samples per batch at the 1% tolerance level and three samples per batch at the 5% tolerance level for strategy 1, and one sample per batch for strategy 2. Similar results were obtained for other pathogens and meat types. Testing and sampling at the optimal strategies varied within each strategy and would play an important role in setting the pathogen performance standard at retail facilities.
With CARA, testing was only optimal for pork at the 1% tolerance level. It was optimal for the private firm to test two times per batch (strategy 2) or for the USDA to test once per batch. As expected, testing for E. coli in beef was shown to be optimal at all tolerance levels when utilizing CARA. The optimal CARA strategies for beef were to test once each batch at the retail level or four times per batch by the private firm at the 29% tolerance level; to test three times per batch with the PR/HACCP strategy or two times per batch by the USDA for both the 15% and 10% tolerance levels; and to test four times per batch with the PR/HACCP strategy, three times per batch by the private firm, or two times by the USDA at the 5% tolerance level; finally, at the 1% tolerance level, testing should be done four times per batch with the PR/HACCP strategy or the private firm, or twice per batch with the USDA strategy.
Under DARA, the optimal strategies for beef (Table 3) were to test once per batch with PR/HACCP or four times per batch by a private firm for both the 29% and 15% tolerance levels. Testing at the 10% tolerance level was optimal for either once per batch with PR/HACCP or twice per batch by the private firm. At the 5% tolerance level, the optimal strategies for beef were to test three times per batch with PR/HACCP, two times per batch by a private firm, or once per batch by the USDA. At the 1% tolerance level, the test was to be performed four times per batch with PR/HACCP or once per batch by the private firm.
For pork, the optimal E. coli testing strategies with DARA were to test three times per batch with PR/HACCP, twice per batch by the private firm, or one time per batch by the USDA at the 29% tolerance level. At the 15% tolerance level, the optimal strategy was to have two tests per batch by the USDA, one test per batch by the private firm, or three tests per batch with PR/HACCP. At the 10% tolerance level, the optimal strategy was three tests per batch by the private firm or one test for each batch with PR/HACCP. For the 5% tolerance level, it was optimal to test three times per batch with PR/HACCP or once per batch by the USDA, and at the 1% tolerance level, it was optimal to test one time per batch with PR/HACCP, four times per batch by the private firm, or one time per batch by the USDA.
The optimal E. coli testing intervention strategies for pork with CARA are also shown in Table 3. The optimal testing strategies at the 29% tolerance level were three times per batch by the USDA, once per batch by the private firm, or four times per batch with PR/HACCP. At the 15% tolerance level, the optimal strategies were four times per batch with PR/HACCP or USDA, or three times per batch by the private firm. The 10% tolerance level showed twice per batch by the USDA, once per batch by the private firm, or three times per batch with PR/HACCP. At the 5% tolerance level, the only strategy was to test three times per batch with PR/HACCP, and at the 1% tolerance level, it was optimal to have the USDA test one time per batch. As the tolerance level tightened, the testing intensity decreased. Again, this finding indicated that tighter tolerance levels are more costly to the retail firms because of product loss when samples are rejected.
For chicken under DARA, the optimal E. coli testing strategies were four times per batch by the USDA, once per batch by the private firm, or twice per batch with PR/HACCP at the 29% tolerance level. At the 15% tolerance level, the optimal testing strategy was twice per batch. The 10% tolerance level showed optimal testing of three times per batch by the USDA or twice per batch by the private firm. The 5% tolerance level showed four times per batch by the retail store, four times per batch by the private firm, or three times per batch by the USDA. At the 1% tolerance level, there was only one optimal strategy: one test per batch by the retail firm.
With CARA, the results for E. coli testing in chicken illustrated an optimal strategy of either one test by the USDA or one test by the private firm, per batch, at the 29% tolerance level. The 15% tolerance level had only one optimal result: two tests per batch by the USDA. The 10% tolerance level also had only one optimal result: two tests per batch by the private firm. The 5% and 1% tolerance levels illustrated that no testing is required. This finding could be explained by the fact that there is less handling and repackaging of raw chicken products at retail meat shops; thus, it was not optimal to test at tighter levels.
The results for turkey showed that, with CARA and DARA, it is not optimal to test for E. coli at the 5% and 1% tolerance levels. This result could be because turkey yielded a lower value for the risk-reduction estimate and because, like chicken with PR/HACCP, little processing or grinding is done with turkey. Under DARA, the 29% tolerance level gave optimal strategies of either one test per batch with PR/HACCP or two tests per batch by the private firm. At the 15% tolerance level, the optimal strategies were either three tests per batch by the USDA or one test per batch with PR/HACCP, and at the 10% tolerance level, the optimal strategies were once per batch with PR/HACCP, once per batch by the private firm, or four times per batch by the USDA.
Utilizing CARA, the optimal strategies for pathogen control at the 29% and 15% tolerance levels were to test once per batch with PR/HACCP or once per batch by the private firm. At the 10% tolerance level, it was optimal to test once per batch with PR/HACCP or three times per batch by the USDA. As previously mentioned, no testing was required at the 5% and 1% tolerance levels. The data generated in this section were used to evaluate cost-effective intervention strategies using a stochastic-dominance analysis.

5.2. Stochastic-Dominance Analysis Results

The three strategies [(1) PR/HACCP; (2) USDA/FDA verification; and (3) private food-safety consulting-firm testing and controls] were compared using stochastic-dominance methodologies for alternative meat types and pathogens. These alternatives were compared using SIMETAR software. Upper and lower risk-aversion coefficients were utilized: a lower risk-aversion coefficient (RAC) of 0.000001 and an upper RAC of 0.1 were used to depict risk-neutral and strongly risk-averse preferences, respectively. This wide range of risk attitudes helped to evaluate the robustness of the results.
The results are shown in Table 4. The analysis considered the entire set of strategies and tolerance levels for each meat type that could possibly be contaminated with E. coli, and for turkey that could possibly be contaminated with Salmonella. The other combinations of meat types and pathogens were not relevant because of low or no pathogen prevalence or because there was only one clear strategy for that specific meat and pathogen. The results of the stochastic-dominance analysis showed that either strategy 2 or strategy 3 was cost-effective and highly preferred, except in the case of turkey with possible E. coli contamination, where the preferred technique was strategy 1 using a 10% tolerance level. This finding is a good indication that the current, predominant food-safety risk-reduction strategy, USDA-FDA verification, implemented by several retail firms across the U.S., may be less effective than contracting with a private firm or implementing PR/HACCP to mitigate food-safety risks at the retail level. For companies to understand the magnitude of loss for the risk exposure that they face, CVaR was used to predict food-safety losses. CVaR also addressed robustness issues related to the results by incorporating the correlation between risky distributions and applying a test to determine whether the estimated results over- or under-predict the magnitude of food-safety losses.

5.3. CVaR and the Magnitude of Food-Safety Risk Exposure

The CVaR equations are presented in Appendix A. CVaR is an intuitive measure of risk that effectively predicts the magnitude of food-safety losses incurred by retail firms for a given time period and confidence level (e.g., 95%). CVaR concentrates on adverse outcomes and is usually reported in dollars. The CVaR values, likelihood ratio, and Z statistic for out-of-sample tests are presented in Table 5. In practice, retail meat firms could implement a particular strategy or a combination of the three strategies. For this reason, we computed the CVaR for each strategy separately and for combinations of the strategies. The emphasis for CVaR was on the strategy used, allowing us to aggregate the 29%, 15%, 10%, and 5% tolerance levels. This aggregation did not significantly alter the results.
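For intuition, a CVaR-style tail loss can be estimated empirically from simulated monthly losses as the average of the worst 5% of outcomes (an expected-shortfall calculation). The loss distribution below is hypothetical, and the paper’s copula step would first link the strategies’ loss distributions before taking the tail:

```python
import numpy as np

def tail_loss(losses, confidence=0.95):
    """Average loss in the worst (1 - confidence) tail of simulated losses."""
    losses = np.sort(losses)
    cutoff = int(np.ceil(confidence * len(losses)))
    return losses[cutoff:].mean()

rng = np.random.default_rng(7)
monthly_losses = rng.lognormal(mean=9.5, sigma=0.6, size=10_000)  # hypothetical
print(f"95% tail loss = USD {tail_loss(monthly_losses):,.0f}")
```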
The results indicated that, at the 29%, 15%, 10%, and 5% tolerance levels, a retail firm’s monthly losses would exceed the CVaR no more than 5% of the time in any month: the monthly CVaR was USD 29,873 for PR/HACCP; USD 31,551 for USDA/FDA verification; USD 32,942 for a private consulting firm; USD 27,768 for all three strategies implemented together; and USD 30,577 for USDA/FDA plus PR/HACCP. The results showed that the monthly CVaR for pathogen reduction was the lowest when all three strategies were implemented simultaneously.
At the 1% tolerance level, monthly CVaR values were significantly higher, reflecting high liability costs. Fresh meats always carry some pathogens, so at this level firms would incur exorbitant recall and liability expenses that could force them to shut down or go out of business.
The results showed a significant decrease for the downside risk as retail meat firms simultaneously adopted all three pathogen-reduction strategies. These findings indicated that retail meat firms which implemented such a strategy could significantly lower food-safety risks and could have a means to improve profitability and consumer safety.
Out-of-sample tests using a likelihood ratio and Z tests were conducted for all scenarios.4 Both tests are used to ensure that the estimated values are robust, controlling for under- or overestimation of CVaR. Test results showed that, for all pathogen-reduction strategies with all the tolerance levels, the likelihood-ratio statistic was not significant at the 5% significance level. This finding implies that, 95% of the time in any given month, the losses did not exceed the estimated CVaRs. Furthermore, the Z statistic was not significant for all pathogen-reduction strategies and all tolerance levels. This result indicated that the estimated CVaRs accurately predicted the losses. Both tests suggested that our model did not underestimate or overestimate the actual downside risk, implying that the results were robust.

6. Conclusions and Policy Implications

Food safety is a major risk for agribusiness firms. Despite advancements made to reduce the food-safety risk with major regulations, retail meat facilities continue to experience recalls and major outbreaks, costing billions of dollars annually. We developed a stochastic-optimization framework to evaluate cost-effectiveness, as well as optimal testing and sampling strategies, and we used stochastic-dominance methods to rank the best strategy at retail meat facilities. CVaR was utilized to predict the magnitude of the food-safety risk exposure for the cost-effective risk-reduction strategies.
The results of the stochastic-optimization analysis showed that quality loss and risk-reduction values increased with the probability of contamination. Risk was reduced as tolerance levels were tightened. However, very low tolerance levels could also induce higher implementation costs, making a particular strategy cost-ineffective. The results further illustrated that the optimal intervention strategies varied by meat type and pathogen. Rather than maintaining one performance standard (e.g., Salmonella in PR/HACCP), beef, chicken, and pork called for E. coli testing at the retail level. Although the probability of contamination for chicken and pork was not as high as the risk for beef, there was a need for testing. The results suggested marginal E. coli testing for turkey relative to other meats. Testing was also recommended for beef and turkey under DARA and for only pork under CARA.
The results of the stochastic-dominance analysis found that the preferred strategies were robust across the risk-neutral RAC and the risk-averse RAC. For example, PR/HACCP was the cost-effective strategy for testing E. coli at the 10% tolerance level for turkey at the lower and upper RAC.
The CVaR results showed a significant decrease for the downside risks as retail firms simultaneously adopted all three pathogen-reduction strategies. These findings indicated that retail firms which implemented such a strategy could significantly lower food-safety risks and could have a means to improve profitability and consumer safety. Given the large number of outbreaks that occur at retail meat facilities, this study provided information to initiate actions at those levels in order to adopt cost-effective intervention plans.
This study suggested the need to extend the PR/HACCP performance standard for Salmonella to other pathogens. Currently, Salmonella is the only performance standard for PR/HACCP. The assumption is that, if Salmonella levels are decreasing, then so are other pathogen levels. This might not be true at retail meat facilities: overall, pathogen levels decrease significantly for Salmonella, but the changes are smaller for E. coli and Campylobacter. This gap could be a major reason why we continue to observe major food-safety outbreaks and recalls despite regulatory advancements. Food-safety regulations are necessary because pathogens cannot be observed without a microscope, but intervention or policy has to be designed efficiently.
This study also suggested cost-effective tolerance levels that could provide guidelines for broader evaluation, and indicated a need to tighten the tolerance levels. Perhaps the need is not for different types of pathogen testing, but for more intense testing. Tightening the tolerance level to below 29% could also fix some problems with high pathogen levels. However, an earlier study by Nganje et al. (2007) suggested that the tolerance levels could be tightened, but not below 15%, for fresh-meat processing and packaging firms. Information from this study may be used by policy makers to take steps toward encouraging tighter standards for retail companies and providing incentives to firms that adopt plans and take the initiative to ensure a safe food supply at the retail level.

Author Contributions

Conceptualization, W.E.N.; methodology, W.E.N.; formal analysis, W.E.N. and E.K.D.; data curation, E.K.D. and L.D.B.; writing: original draft preparation, W.E.N., L.D.B., E.K.D., and E.M.N.; and writing: review and editing, W.E.N., L.D.B., E.K.D., and E.M.N. All authors have read and agreed to the published version of the manuscript.

Funding

Graduate-student funding (research assistantship) was provided by the Agribusiness and Applied Economics department at North Dakota State University.

Institutional Review Board Statement

The study did not involve human subjects or live animals.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Data for monthly prices are available at https://quickstats.nass.usda.gov/ (accessed on 18 March 2021). Survey and microbial data are available from the authors upon request.

Acknowledgments

The authors thank the NDSU Microbiology Department for its assistance with providing the microbial testing data.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Copulas have been widely used in financial applications, such as risk management, portfolio allocation, and derivative pricing (Abbara and Zevallos 2018; Brechmann and Czado 2013; Han et al. 2017; Kakouris and Rustem 2014; Weiß 2013). A copula is utilized to model the joint distribution because the technique does not require assumptions about the selection of the distribution function, and it allows the decomposition of any k-dimensional joint distribution into k marginal distributions and a copula function. Copulas allow researchers to better describe the dependence structure among variables and among quantiles, providing a flexible and well-suited specification of the joint distribution (Cherubini et al. 2004; Joe 1997).
Sklar’s (1959) theorem is a keystone of copula theory. Consider a $k$-dimensional joint distribution function $F(x)$ with margins $F_1(x_1), \dots, F_k(x_k)$, where $x = (x_1, \dots, x_k)$ and $x_i \in \mathbb{R}$; then there exists a copula $C: [0,1]^k \to [0,1]$ such that
$F(x_1, \dots, x_k) = C(F_1(x_1), \dots, F_k(x_k))$   (A1)
If the margins are absolutely continuous, then $C$ is determined as follows:
$C(u_1, \dots, u_k) = F(F_1^{-1}(u_1), \dots, F_k^{-1}(u_k))$   (A2)
Otherwise, $C$ is uniquely determined on the range $R(F_1) \times \cdots \times R(F_k)$. Equally, if $C$ is a copula and $F_1, \dots, F_k$ are univariate distribution functions, then Equation (A1) is a joint distribution function with margins $F_1, \dots, F_k$ (Tsay 2013).
The copula $C(u_1, \dots, u_k)$ has an associated density $c(u_1, \dots, u_k)$, defined as
$c(u_1, \dots, u_k) = \dfrac{\partial^k C(u_1, \dots, u_k)}{\partial u_1 \cdots \partial u_k}$   (A3)
and is related to the joint density $f$ of the continuous random variables, with distribution function $F$, by the canonical copula representation
$f(x_1, \dots, x_k) = c(F_1(x_1), \dots, F_k(x_k)) \prod_{i=1}^{k} f_i(x_i)$   (A4)
where the $f_i$ are the marginal densities, which can differ from each other (Tsay 2013).
A copula provides appropriate knowledge about average and extreme upward or downward co-movements, referred to as tail dependence. The upper (right) and lower (left) tail dependence can be computed from the copula as follows:
$\tau_U = \lim_{u \to 1^-} \Pr[X \ge F_X^{-1}(u) \mid Y \ge F_Y^{-1}(u)] = \lim_{u \to 1^-} \dfrac{1 - 2u + C(u, u)}{1 - u}$   (A5)

$\tau_L = \lim_{u \to 0^+} \Pr[X \le F_X^{-1}(u) \mid Y \le F_Y^{-1}(u)] = \lim_{u \to 0^+} \dfrac{C(u, u)}{u}$,   (A6)
where $F_X^{-1}$ and $F_Y^{-1}$ are the marginal quantile functions and $\tau_U, \tau_L \in [0, 1]$. Upper and lower tail dependence imply that $\tau_U > 0$ and $\tau_L > 0$, respectively, indicating a non-zero probability of observing an extremely large (or small) value for one series together with an extremely large (or small) value for the other series.
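As an illustration, these conditional probabilities can be approximated empirically at a fixed quantile. The sketch below samples from a Gaussian copula (a hypothetical choice for illustration), which has zero asymptotic tail dependence:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical correlated losses for two strategies, linked by a Gaussian copula.
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=50_000)

# Probability-integral transform via ranks: maps each margin to uniform [0, 1).
u = z[:, 0].argsort().argsort() / len(z)
v = z[:, 1].argsort().argsort() / len(z)

def upper_tail_proxy(u, v, q=0.99):
    """Empirical estimate of Pr[U > q | V > q], a finite-sample proxy for tau_U."""
    return np.mean(u[v > q] > q)

print(f"upper-tail proxy at q = 0.99: {upper_tail_proxy(u, v):.3f}")
# A Gaussian copula has zero asymptotic tail dependence, so this proxy shrinks
# toward 0 as q -> 1; a t copula would keep it bounded away from 0.
```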

Notes

1. The most severe class of recalls is Class I, which involves a "health hazard situation where there is a reasonable probability that the use of the product will cause serious, adverse health consequences or death." Class II recalls involve a "health hazard situation where there is a remote probability of adverse health consequences from the use of the product." Class III recalls involve a "health hazard situation where the use of the product will not cause adverse health consequences" (USDA-FSIS 2020).
2. Annual figures are reported by the USDA-FSIS (USDA-FSIS 2020).
3. Abnormal returns are defined as deviations from normal returns and are calculated as the difference between actual and normal returns during the event window (Moon and Tonsor 2020).
4. Following Manfredo and Leuthold (2001) and Lopez (1997), the likelihood-ratio test statistic for out-of-sample testing is defined as
$$LR(\delta) = 2\left[\ln\!\big(\delta^{*X}(1-\delta^{*})^{N-X}\big) - \ln\!\big(\delta^{X}(1-\delta)^{N-X}\big)\right] \sim \chi^{2}_{1}.$$
The null hypothesis is that $\delta = \delta^{*}$, where $\delta$ is the desired coverage level (5%, corresponding to the 95% confidence level) and $\delta^{*} = X/N$, with $X$ being the number of out-of-sample violations and $N$ the number of out-of-sample observations. If we fail to reject the null hypothesis, then the actual food-safety losses do not exceed the values predicted by the CVaR model. Following Mahoney (1995), a Z test is used as a bias test; because the violation count is approximately normally distributed in large samples,
$$Z_c = \frac{L_{realized} - N(1-c)}{\sqrt{N\,c\,(1-c)}},$$
where $L_{realized}$ is the number of observed CVaR violations at a given confidence level, $c$; $N(1-c)$ is the expected number of violations under correct coverage; and $N$ is the number of out-of-sample observations. A significantly positive (negative) Z statistic indicates that CVaR underestimates (overestimates) the actual downside risk.
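As a worked illustration of these two backtests, the following Python sketch computes the LR statistic and the Z bias statistic for a hypothetical violation count; the function names and the example figures (4 violations in 60 out-of-sample months) are ours, not the authors'.

```python
# Sketch of the note's two backtests: Kupiec-style LR coverage test and
# the Mahoney (1995) Z bias test. Assumes 0 < X < N.
import numpy as np
from scipy import stats

def lr_coverage_test(X, N, delta=0.05):
    """LR(delta) = 2[ln L(delta*) - ln L(delta)], delta* = X/N, ~ chi2(1)."""
    d_star = X / N
    log_lik = lambda d: X * np.log(d) + (N - X) * np.log(1.0 - d)
    lr = 2.0 * (log_lik(d_star) - log_lik(delta))
    p_value = 1.0 - stats.chi2.cdf(lr, df=1)
    return lr, p_value

def z_bias_test(L_realized, N, c=0.95):
    """Positive Z: CVaR underestimates losses; negative: overestimates."""
    expected_violations = N * (1.0 - c)
    return (L_realized - expected_violations) / np.sqrt(N * c * (1.0 - c))

# Hypothetical example: 4 violations in 60 out-of-sample months at 95% coverage.
print(lr_coverage_test(4, 60))   # small LR, large p => fail to reject coverage
print(z_bias_test(4, 60))        # near zero => little evidence of bias
```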

References

1. Abbara, Omar, and Mauricio Zevallos. 2018. Modeling and forecasting intraday VaR of an exchange rate portfolio. Journal of Forecasting 37: 729–38.
2. Antle, John M. 2000. No such thing as a free safe lunch: The cost of food safety regulation in the meat industry. American Journal of Agricultural Economics 82: 310–22.
3. Brechmann, Eike C., and Claudia Czado. 2013. Risk management with high-dimensional vine copulas: An analysis of the Euro Stoxx 50. Statistics and Risk Modeling 30: 307–42.
4. Burton, Scot, Elizabeth Howlett, and Andrea Heintz Tangari. 2009. Food for thought: How will the nutrition labeling of quick service restaurant menu items influence consumers' product evaluations, purchase intentions, and choices? Journal of Retailing 85: 258–73.
5. Cherubini, Umberto, Elisa Luciano, and Walter Vecchiato. 2004. Copula Methods in Finance. West Sussex: John Wiley & Sons.
6. Gorton, Acton, and Matthew J. Stasiewicz. 2017. Twenty-two years of U.S. meat and poultry product recalls: Implications for food safety and food waste. Journal of Food Protection 80: 674–84.
7. Grewal, Dhruv, Julie Baker, Michael Levy, and Glenn Voss. 2003. The effects of wait expectations and store atmosphere evaluations on patronage intentions in service-intensive retail stores. Journal of Retailing 79: 259–68.
8. Han, Yingwei, Ping Li, and Yong Xia. 2017. Dynamic robust portfolio selection with copulas. Finance Research Letters 21: 190–200.
9. Havelaar, Arie H., Martyn D. Kirk, Paul R. Torgerson, Herman J. Gibb, Tine Hald, Robin J. Lake, Nicolas Praet, David C. Bellinger, Nilanthi R. de Silva, Neyla Gargouri, and et al. 2015. World Health Organization global estimates and regional comparisons of the burden of foodborne disease in 2010. PLoS Medicine 12: e1001923.
10. Heredia, Norma, and Santos García. 2018. Animals as sources of food-borne pathogens: A review. Animal Nutrition 4: 250–55.
11. Hung, Shiu-Wan, Chiao-Ming Li, and Joe-Ming Lee. 2019. Firm growth, business risk, and corporate social responsibility in Taiwan's food industry. Agricultural Economics 65: 366–74.
12. Jianbin, Yu, and Neal H. Hooker. 2019. Exploring relationships among recall effectiveness indicators in the U.S. meat and poultry industry. International Food and Agribusiness Management Review 22: 97–106.
13. Joe, Harry. 1997. Multivariate Models and Dependence Concepts. New York: Chapman & Hall/CRC.
14. Kakouris, Iakovos, and Berç Rustem. 2014. Robust portfolio optimization with copulas. European Journal of Operational Research 235: 28–37.
15. Kay, Steve. 2003. $2.7 billion: The cost of E. coli O157:H7. Meat Poultry 49: 26–34.
16. Kerin, Roger, Ambuj Jain, and Daniel J. Howard. 1992. Store shopping experience and consumer price-quality-value perceptions. Journal of Retailing 68: 376–97.
17. Knutson, Ronald D., Russell H. Cross, Gary R. Acuff, Leon H. Russell, Fred O. Boadu, John P. Nichols, Suojin Wang, Larry J. Ringer, Asa B. Childers Jr., and Jeff W. Savell. 1995. Reforming Meat and Poultry Inspection: Impacts of Policy Options. Working Paper 95–1. College Station: Agricultural and Food Policy Center, Texas A&M University.
18. Liu, Feng, Hosun Rhim, Kwangtae Park, Jian Xu, and Chris K. Y. Lo. 2021. HACCP certification in food industry: Trade-offs in product safety and firm performance. International Journal of Production Economics 231: 107838.
19. Lopez, Jose A. 1997. Regulatory Evaluation of Value-at-Risk Models. Staff Report 33. New York: Federal Reserve Bank of New York.
20. Mahoney, James M. 1995. Empirical-based versus model-based approaches to value-at-risk: An examination of foreign exchange and global equity portfolios. Board of Governors of the Federal Reserve System (US) Proceedings 1995: 199–220.
21. Maloni, Michael J., and Michael E. Brown. 2006. Corporate social responsibility in the supply chain: An application in the food industry. Journal of Business Ethics 68: 35–52.
22. Manfredo, Mark R., and Raymond M. Leuthold. 2001. Market risk and the cattle feeding margin: An application of value-at-risk. Agribusiness 17: 333–53.
23. Marler, Bill. 2010a. Infection with Pathogens Transmitted Commonly through Food. Available online: http://www.marlerblog.com/case-news/cdc-preliminary-foodnet-data-on-the-incidence-of-infection-with-pathogens-transmitted-commonly-throu/ (accessed on 8 January 2020).
24. Marler, Bill. 2010b. Food Recall at Retail. Available online: http://www.marlerblog.com/admin/mt-search.cgi?IncludeBlogs=1&limit=20&search=foodborne+illness+at+retail (accessed on 8 January 2020).
25. Minor, Travis, Angela Lasher, Karl Klontz, Bradley Brown, Clark Nardinelli, and David Zorn. 2015. The per case and total annual costs of foodborne illness in the United States. Risk Analysis 35: 1125–39.
26. Minor, Travis, and Matt Parrett. 2017. The economic impact of the Food and Drug Administration's final juice HACCP rule. Food Policy 68: 206–13.
27. Moon, Donghyun, and Glynn T. Tonsor. 2020. How do E. coli recalls impact cattle and beef prices? Journal of Agricultural and Resource Economics 45: 92–106.
28. Nganje, William E., Michael A. Mazzocco, and Floyd K. McKeith. 1999. Food Safety Regulation, Product Pricing, and Profitability: The Case of HACCP (No. 1186-2016-93449). Available online: https://ageconsearch.umn.edu/record/23077/ (accessed on 8 January 2020).
29. Nganje, William, Simeon Kaitibie, and Alexandre Sorin. 2007. HACCP implementation and economic optimality in turkey processing. Agribusiness: An International Journal 23: 211–28.
30. Ollinger, Michael, and Matthew Houser. 2020. Ground beef recalls and subsequent food safety performance. Food Policy 97: 101971.
31. Page, Elina T. 2018. Trends in Food Recalls: 2004–2013. Economic Information Bulletin No. 191. Washington: U.S. Department of Agriculture, Economic Research Service.
32. Piacentini, Maria, Lynn MacFadyen, and Douglas Eadie. 2000. Corporate social responsibility in food retailing. International Journal of Retail and Distribution Management 28: 459–69.
33. Pouliot, Sebastien, and Holly H. Wang. 2018. Information, incentives, and government intervention for food safety. Annual Review of Resource Economics 10: 83–103.
34. Pozo, Veronica F., and Ted C. Schroeder. 2016. Evaluating the costs of meat and poultry recalls to food firms using stock returns. Food Policy 59: 66–77.
35. Pratt, John W. 1964. Risk-aversion in the small and in the large. Econometrica 32: 122–36.
36. Robertson, Kis, Alice Green, Latasha Allen, Timothy Ihry, Patricia White, Wu-San Chen, Aphrodite Douris, and Jeoffrey Levine. 2016. Foodborne outbreaks reported to the U.S. Food Safety and Inspection Service, fiscal years 2007 through 2012. Journal of Food Protection 79: 442–47.
37. Saha, Atanu. 1993. Expo-power utility: A 'flexible' form for absolute and relative risk aversion. American Journal of Agricultural Economics 75: 905–13.
38. Scharff, Robert L. 2020. Food attribution and economic cost estimates for meat- and poultry-related illnesses. Journal of Food Protection 83: 959–67.
39. Scharff, Robert L. 2018. The economic burden of foodborne illness in the U.S. In Food Safety Economics. Edited by Tanya Roberts. Cham: Springer Nature, pp. 123–42.
40. Shang, Xia, and Glynn T. Tonsor. 2017. Food safety recall effects across meat products and regions. Food Policy 69: 145–53.
41. Sklar, M. 1959. Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris 8: 229–31.
42. Taguchi, Genichi. 1986. Introduction to Quality Engineering: Designing Quality into Products and Processes. Tokyo: Asian Productivity Organization.
43. Tsay, Ruey S. 2013. Multivariate Time Series Analysis: With R and Financial Applications. Hoboken: John Wiley & Sons.
44. U.S. Department of Agriculture, Food Safety and Inspection Service (USDA-FSIS). 2020. Summary of Recall Cases in Calendar Year 2019. Available online: https://www.fsis.usda.gov/wps/portal/fsis/topics/recalls-and-public-health-alerts/recall-summaries (accessed on 12 December 2020).
45. Weiß, Gregor N. F. 2013. Copula-GARCH versus dynamic conditional correlation: An empirical study on VaR and ES forecasting accuracy. Review of Quantitative Finance and Accounting 41: 179–202.
Table 1. Probability of contamination by pathogen for each meat type using survival analysis.

| Meat Type | 29% Tolerance | 15% Tolerance | 10% Tolerance | 5% Tolerance | 1% Tolerance |
|---|---|---|---|---|---|
| Salmonella | | | | | |
| Beef | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Chicken | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0147 |
| Pork | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Turkey | 0.0000 | 0.0000 | 0.0000 | 0.0710 | 0.0544 |
| E. coli | | | | | |
| Beef | 0.6967 | 0.8142 | 0.8602 | 0.9085 | 0.9489 |
| Chicken | 0.3922 | 0.5324 | 0.5878 | 0.6466 | 0.6963 |
| Pork | 0.4147 | 0.5538 | 0.6087 | 0.6667 | 0.7158 |
| Turkey | 0.2913 | 0.4342 | 0.4918 | 0.5536 | 0.6064 |
| Campylobacter | | | | | |
| Beef | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0272 |
| Chicken | 0.0000 | 0.0770 | 0.1259 | 0.1876 | 0.2482 |
| Pork | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0123 |
| Turkey | 0.0000 | 0.0000 | 0.0000 | 0.0171 | 0.0627 |
Table 2. Monthly quality-loss estimates for retail stores ($).

| Meat Type/Pathogen | 29% Tolerance | 15% Tolerance | 10% Tolerance | 5% Tolerance | 1% Tolerance |
|---|---|---|---|---|---|
| Beef/Campylobacter | 0 | 0 | 0 | 0 | 225,500 |
| Beef/E. coli | 92,634.99 | 404,500 | 960,000 | 4,065,000 | 4,245,000 |
| Chicken/Campylobacter | 0 | 137,000 | 505,000 | 3,010,000 | 3,980,000 |
| Chicken/E. coli | 185,353.84 | 940,000 | 2,335,000 | 10,300,000 | 11,050,000 |
| Chicken/Salmonella | 0 | 0 | 0 | 0 | 69,000 |
| Pork/Campylobacter | 0 | 0 | 0 | 0 | 5,500 |
| Pork/E. coli | 15,090.89 | 75,500 | 186,500 | 815,000 | 875,000 |
| Turkey/Campylobacter | 0 | 0 | 0 | 61,000 | 224,000 |
| Turkey/E. coli | 167,968.59 | 935,000 | 2,385,000 | 10,750,000 | 11,750,000 |
| Turkey/Salmonella | 0 | 0 | 0 | 349,001 | 73,500 |
Table 3. Optimal intervention strategies for E. coli testing and HACCP implementation at the retail level. Each cell gives the number of samples for the indicated test decision; "–" indicates that the strategy is not selected at that tolerance level.

Under Constant Absolute Risk Aversion (CARA)

| Meat Type | Test Decision | 29% Tolerance | 15% Tolerance | 10% Tolerance | 5% Tolerance | 1% Tolerance |
|---|---|---|---|---|---|---|
| Beef | PR/HACCP | 1 | 3 | 3 | 4 | 4 |
| Beef | Private | 4 | – | 3 | 4 | – |
| Beef | USDA | 2 | 2 | 2 | 2 | – |
| Chicken | PR/HACCP | – | – | – | – | – |
| Chicken | Private | – | 1 | – | 2 | – |
| Chicken | USDA | 1 | 2 | – | – | – |
| Pork | PR/HACCP | 4 | 4 | 3 | 3 | – |
| Pork | Private | 1 | 3 | 1 | – | – |
| Pork | USDA | 3 | 4 | 2 | – | 1 |
| Turkey | (none) | – | – | – | – | – |

Under Decreasing Absolute Risk Aversion (DARA)

| Meat Type | Test Decision | 29% Tolerance | 15% Tolerance | 10% Tolerance | 5% Tolerance | 1% Tolerance |
|---|---|---|---|---|---|---|
| Beef | PR/HACCP | 1 | 1 | 1 | 3 | 4 |
| Beef | Private | 4 | 4 | 2 | 2 | 1 |
| Beef | USDA | 1 | – | – | – | – |
| Chicken | PR/HACCP | 2 | 2 | – | 4 | 1 |
| Chicken | Private | 1 | 2 | 2 | 4 | – |
| Chicken | USDA | 4 | 2 | 3 | 3 | – |
| Pork | PR/HACCP | 3 | 3 | 1 | 3 | 1 |
| Pork | Private | 2 | 1 | 3 | – | 4 |
| Pork | USDA | 1 | 2 | – | 1 | 1 |
| Turkey | PR/HACCP | – | 1 | 1 | – | – |
| Turkey | Private | 2 | – | 1 | – | – |
| Turkey | USDA | 1 | 3 | 4 | – | – |
Table 4. Stochastic-dominance comparison for the intervention strategies.

| Meat Type/Pathogen | Preferred Strategy and Tolerance Level (RAC = 0.000001) | Preferred Strategy and Tolerance Level (RAC = 0.1) |
|---|---|---|
| Beef/E. coli | Strategy 3 @ 1% | Strategy 3 @ 1% |
| Chicken/E. coli | Strategy 3 @ 5% | Strategy 3 @ 5% |
| Pork/E. coli | Strategy 2 @ 1% | Strategy 2 @ 1% |
| Turkey/E. coli | Strategy 1 @ 10% | Strategy 1 @ 10% |
| Turkey/Salmonella | Strategy 3 @ 5% | Strategy 3 @ 5% |
Table 5. CVaR estimates for food-safety losses and out-of-sample test results (95% confidence limit).

| Scenario | Pathogen-Reduction Strategy | Monthly CVaR ($) | LR Statistic | Z Statistic |
|---|---|---|---|---|
| 29%, 15%, 10%, and 5% Tolerance | Intrafirm | 29,873 | 2.736 | −1.406 |
| | USDA | 31,551 | 2.736 | −1.406 |
| | Private | 32,942 | 0.045 | 0.216 |
| | All | 27,768 | 2.736 | −1.406 |
| | USDA + Intrafirm | 30,577 | 0.886 | −0.865 |
| 1% Tolerance | Intrafirm | 6,796,699.21 | 0.886 | −0.865 |
| | USDA | 13,560,406.23 | 0.886 | −0.865 |
| | Private | 6,796,714.46 | 0.885 | −0.865 |
| | All | 27,087,149.03 | 0.111 | −0.324 |
| | USDA + Intrafirm | 20,325,948.81 | 0.111 | −0.324 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
