A Compromise Programming Application to Support Forest Industrial Plantation Decision-Makers

Abstract: The conflicts that arise between natural resource consumption and the desire to preserve those resources make multicriteria decision theory necessary. Brazil, one of the 10 largest timber producers globally, uses optimization models that represent the growth of forests integrated with decision support systems. Brazilian forest plantation managers often face conflicts when continuously seeking efficiency gains (higher productivity at lower costs) and efficacy (higher profits with minimum social and environmental impacts). Managers in leading producer countries of timber, pulp, and fiberboard constantly interact to fine-tune industry processing demands vis-à-vis the demands of highly productive fast-growing forest plantations. The decision process in such cases seeks a compromise that accommodates short-term industry productivity optimization and long-term forestry production capacity. This paper aims to apply a forest management decision support system (FMDSS) to a case study that represents the challenges that industrial plantations in Brazil usually face. A vertically integrated pulp company situation was simulated to provide a realistic scenario. In this scenario, forest managers tend to shorten the rotations due to Brazil's usually high interest rates; meanwhile, industrial managers tend to ask for longer ones due to the positive correlation between age and wood density. Romero ®, a Forest Management Decision Support System developed following multicriteria decision theory, was used to process the case study. Notably, the initial hypothesis of mill managers, that older rotation ages could improve mill production, was not confirmed. Moreover, mill managers lean towards changes in the short term, while the case study shows that changes in rotation length and genetic material take time, and decisions have to be made involving both interests: forest and mill managers.


Introduction
Forest management science encompasses the challenge of working with forests that produce the benefits required now without compromising future benefits [1]. Due to the need to break with business-as-usual approaches and to adopt sustainable production patterns [2] (p. 13), there is an increasing trend towards participation and transparency to improve planning processes [3,4] (p. 58; p. 217).
Environmental concerns, including sustainable production, bring challenges and opportunities to the forest industry. South America has continued expanding pulpwood production, with an increasing number of new mills being built in Brazil, Chile, and Uruguay. In 2016, these countries accounted for 15% of global pulpwood production and 33% of exports. Pulp and paper demand is expected to grow by 2.7% annually, reaching 747 million tons by 2030 [5]. Following this trend, the Brazilian pulp industry has continued to expand. This paper presents, among other results, the evolution from the preliminary results to the compromise solution, and a reasonable compromise between industrial and forest managers involving wood density and forest genetic material.


Case Study
In Brazil, the yearly production of pulp mills ranges from 200,000 to 2 million tons of pulp per mill. In 2019, Brazil produced 19.7 million tons of pulp, 16.9 million of which were short fiber, 100% from Eucalyptus planted on 2 million hectares, where the mean annual increment (MAI) varies from 30 to 50 m³/ha-year, averaging 35.3 m³/ha-year [27].
In 2016, Brazil produced 69 million m³ of non-coniferous pulpwood [28]. The industrial short-fiber average consumption was 4.34 m³ of wood to produce one ton of pulp; the literature mentions consumption figures from 3.5 to 5.1 m³ per ton of dry pulp [13]. To sustain their high level of productivity, Brazilian pulp mills' main concerns regarding wood quality are (i) wood density [13], (ii) size uniformity of the chips [29], and (iii) chemical composition [30].
The case study refers to a hypothetical pulp mill that replicates actual conditions in Brazilian pulp mills. The main aspects of the plantation supplying wood to the mill are presented in Table 1. Because short-rotation plantations cycle in about six years, the planning horizon was set to 19 years to guarantee the completion of at least three entire rotations [31]. The total area is 21,400 hectares distributed over 18 management units. The smallest unit has 550 hectares, and the largest one has 1900 hectares. Each management unit is homogeneous in terms of region, species, cycle, and age. The silvicultural costs are R$9500/ha over the years needed to complete one entire Eucalyptus plantation cycle, the pulpwood price is R$50/m³, and the annual nominal interest rate is 7% [32].
Wood density was used to classify clones into three groups. Group GenMat3 is the lowest-density class (average 420 kg/m³). The other two groups, GenMat2 and GenMat1, comprise mid-density (500 kg/m³) and high-density (580 kg/m³) clones, respectively. Productivity varies from 42 to 52 m³/ha-year at seven years old. In the business-as-usual scenario, harvesting ages are limited to only two alternatives: six and seven years old.
Lopes and Garcia [14] isolated the effect of age on the density of some eucalyptus species. Foekel [33] also found a good correlation between wood density and pulp production efficiency. The hypothetical curves for wood density as a function of age shown in Figures 2 and 3 were created by fixing the slope coefficient and varying the intercept. Figure 3 presents the efficiency after the pulp digester; the efficiency was measured considering two moments in the process: (i) how many m³ of wood had entered the mill, and (ii) how much pulp had been produced in dry tons. The GenMat3 group is equivalent to the Eucalyptus grandis average wood density [14]. The GenMat2 (500 kg/m³) and GenMat1 (580 kg/m³) densities are equivalent to E. grandis × E. urophylla hybrids and E. urophylla, respectively. These species are widely used in genetic improvement programs in Brazil to boost volume productivity [34,35].
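The linear density-versus-age scheme described above (a common slope, a group-specific intercept) can be sketched as follows. The slope and intercept values are illustrative assumptions, not values from the study; they are chosen only so that the densities at age seven match the stated group averages (420, 500, and 580 kg/m³).

```python
# Hypothetical density-vs-age curves: fixed slope, intercept varying by
# genetic-material group. All coefficients below are assumed for
# illustration, not taken from the study's figures.

SLOPE = 12.0  # kg/m3 gained per year (assumed)

INTERCEPTS = {  # assumed intercepts per group
    "GenMat3": 336.0,  # low-density group
    "GenMat2": 416.0,  # mid-density group
    "GenMat1": 496.0,  # high-density group
}

def wood_density(group: str, age_years: float) -> float:
    """Linear density model: intercept varies by group, slope is fixed."""
    return INTERCEPTS[group] + SLOPE * age_years

# With these assumed coefficients, densities at age 7 reproduce the
# group averages quoted in the text.
densities_at_7 = {g: wood_density(g, 7) for g in INTERCEPTS}
```

With any other slope, the intercepts would simply shift so that the curves still pass through the age-seven averages, which is the point of fixing the slope and varying the intercept.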
The after-digester efficiency has a positive correlation with wood density and a negative one with the size regularity of the chips [33]. The chip size regularity does not correlate with any forest management variable. Therefore, for simplicity, the same average regularity was used for all groups. Average wood density values per age of the GenMat1, GenMat2, and GenMat3 groups were used to build the curves shown in Figure 3.
The initial situation of the plantation, in hectares per age class and GenMat group, reflects the characteristics of the three groups of genetic material (Figure 4). GenMat1, the group most desired by mill managers due to its high density, is the least represented in each age class. Group GenMat3, which has clones that are more productive in volume but have lower wood density, prevails in most age classes.


Forest Harvest Schedule Formulation
Since the beginning of the 1960s, forest researchers have been using mathematical programming to address forest planning. The harvest schedule is one of the most appropriate models used for long-term strategic forest planning worldwide [17,[36][37][38][39][40][41][42][43][44]. This model consists of looking for the best alternative to intervene in forest growth. Decision variables are related to when, where, and how interventions would be made to maximize profits, ecosystem services, or production [45]. In those models, interventions are often linked to several types of forest establishments, restoration, silvicultural treatments, cuts, thinning, and non-wood product pickups.
Johnson and Scheurman [46] offered three alternative linear programming formulations to address a harvest scheduling model. Of those three formulations, two have often been used to formulate harvest schedules since then.
In the Model I formulation, the decision variable X_ur is the area of a unit u managed according to a prescription r, where a prescription refers to a sequence of interventions that can happen within the planning horizon. In the Model II formulation, the decision variable is the area of a unit u where one complete forest cycle occurs. Romero ® encompasses an extension of the Model II formulation where the decision variable is the area of a unit u in which a management intervention changes the previous course of the forest [47]. The formulation proposed by Nobre [47] is described in Appendix A.

Multicriteria Concepts Embedded in Romero ®
Essential multicriteria concepts, described by Ballestero and Romero [48], are the foundation of Romero ® 's model formulation. They are implemented according to the following descriptions.
(a) Attributes. These values are calculated independently from any stakeholder's choice.
In our case study, a "Production" accounting variable could be written using the expression Σ_iupk V_iupk X_iupk, which is the total production within the horizon. It is called xTotalProd.
(c) Targets and Goals. In statement (1), in the line "Goal", the value vMin_p is a target; decision-makers want at least vMin_p of production in period p. In Romero ® 's implementation, targets are parameters that are saved in a database, ready to be updated.
Moreover, deviation variables can be included in the "Goal" line. The objective function should be modified to minimize the sum of the deviation variables to turn the model into a Goal-programming model, which is not the case when compromise programming is the selected multicriteria technique. In this case study, Romero ® calculates the deviations but does not minimize them.
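The relation carried by the deviation variables in the "Goal" line can be sketched as a tiny helper. This is a scalar illustration with names of our own choosing, not Romero ® 's code; inside the LP, the deviations are decision variables tied to the goal equation per period.

```python
def goal_deviations(production: float, target: float):
    """Split the gap between a production value and its target into the
    two non-negative deviation variables of a goal line, so that
    production + d_minus - d_plus == target."""
    d_minus = max(target - production, 0.0)  # shortfall below the target
    d_plus = max(production - target, 0.0)   # surplus above the target
    return d_minus, d_plus
```

Under goal programming the objective would minimize d_minus (and possibly d_plus); under compromise programming, as the text notes, the deviations are only recorded, not minimized.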
(d) Criteria. In statements (1) and (2), the Production criterion means that the attribute xTotalProd is used to calculate the objective MaxTotalProd.
Romero ® controls attributes, objectives, targets, and goals. It always calculates many other attributes, as in the Production line of statement (2). Depending on the decision-makers' choices, the attributes will be used in goals or objectives.
The accepted technique for considering multiple objectives is multiobjective linear programming (MOLP). The MOLP notation proposed by Romero [49] states that the model seeks a set of efficient solutions that satisfy the objective functions f_1, f_2, . . . , f_n, subject to a set of constraints represented by a vector of functions Q(x) (statement (3)).
In this case study, f 1 and f 2 of statement (3) are the mill production and the forest profits, respectively. The function Q(x) will be the entire Model formulation proposed by Nobre [47]. To do so, Romero ® implements the two following steps.
The first step of a MOLP consists of (i) optimizing one criterion at a time, using the same scenario, and (ii) calculating all attributes related to the other criteria (those not being optimized). After these calculations, a payoff matrix shows the level of conflict among the objectives. The second step is to explore, using the Pareto optimality concept, what happens between the extreme points presented in the payoff matrix, each of which is optimal for one criterion. Figure 5 summarizes the Romero ® calculation process.
The most common methodology for calculating Pareto-optimal solutions [48,49] is the constraint method because it has the lowest computational cost. This method consists of optimizing one of the objectives while the others are placed as parametric equality constraints. We generate one Pareto-optimal solution for each vector of values used as a right-hand side in the parametric constraints. Following this methodology, Romero ®:
• saves the results (attribute values, decision variables, and deviation variables) of all points into the database;
• lets the user update the decision-makers' weights for each criterion according to their preferences; and
• calculates L1 and L∞ for each efficient point, finding the minimum L1 and minimum L∞ with SQL queries that implement a proxy of the linear programming models of statements (A8) and (A9) in Appendix B.
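As a hedged illustration of the constraint method, the following sketch generates Pareto-optimal points for a toy two-objective problem. The model (maximize f1 = x and f2 = y subject to x + y ≤ 1, x, y ≥ 0), its closed-form solution, and the function names are assumptions for illustration only, not the case-study formulation, which Romero ® solves with an LP solver.

```python
# Epsilon-constraint method on a toy two-objective problem:
# maximize f1 = x and f2 = y subject to x + y <= 1, x >= 0, y >= 0.
# f2 is moved into a parametric constraint y >= eps, and f1 is maximized
# for each right-hand-side value eps.

def maximize_f1_given_f2_floor(eps: float):
    """With y held at its floor eps, the best f1 is x = 1 - eps."""
    y = eps
    x = 1.0 - eps  # optimal x under x + y <= 1
    return x, y

def pareto_frontier(n_points: int = 5):
    """One Pareto-optimal (f1, f2) pair per parametric right-hand side."""
    return [maximize_f1_given_f2_floor(i / (n_points - 1))
            for i in range(n_points)]

frontier = pareto_frontier()
# Every generated point lies on the efficient boundary x + y = 1,
# tracing the trade-off between the two objectives.
```

The grid of eps values plays the role of the "vector of values used as a right-hand side" in the text; in Romero ®, that grid comes from the attribute ranges in the payoff matrix.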

Romero ® shows the payoff matrix and the Pareto frontier as described above and saves the entire MOLP solution of each point along the curve; however, the decision process needs to go further. The next phase is to select, along the Pareto frontier, a small set of solutions that could best fit a specific group of decision-makers. One of the most productive ways to accomplish this task was proposed in 1973 by Yu and Zeleny under the name of Compromise Programming [48,49].

Compromise Programming (CP)
The ideal point is the point where all criteria reach their best levels, which is never achievable because, in a conflict context, all criteria cannot assume their best levels at the same time. The fundamental concept of CP, Zeleny's axiom, says that efficient solutions closer to the ideal point are preferred to those farther away; that is, a rational choice is as close as possible to the ideal point [49][50][51].
Therefore, we need to calculate the distance to the ideal point [52] and deal with the decision-maker's preferences regarding staying closer to one best attribute level than to another. Additionally, to avoid meaningless comparisons between attribute values due to their different measurement units, the attribute values must be normalized [52]. Distance, normalization, and preference concepts [49,50] are described in Appendix B.
According to Ballestero and Romero [48], the optimum choice for the decision-maker is given by the feasible solution where the utility function reaches its maximum value, i.e., the Lagrangian maximum tangency between the Pareto frontier and the family of iso-utility curves. CP defines a compromise set as the portion of the Pareto frontier where the tangency with the iso-utility curves will occur; that is, something like a landing area of the utility curve. The limits of the compromise set are Min(L1) and Min(L∞), which are also described in Appendix B.
Therefore, Romero ® calculates both distances to the ideal point, the minimum of L1 and the minimum of L∞ [53]. That is to say, the compromise set is a closed interval of the Pareto frontier, and Romero ® shows the distances Min(L1) and Min(L∞) along the Pareto frontier, later shown graphically in the results section.
As all attribute values of each efficient solution are saved in a database, there is no need to build and run the linear models of statements (A10) and (A11) described in Appendix B. To find the two points Min(L1) and Min(L∞) along the Pareto frontier, Romero ® runs SQL queries that implement a proxy of the linear programming models of statements (A8) and (A9) in Appendix B.
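A minimal sketch of the distance calculation, assuming two maximization criteria, illustrative point values, and equal weights: the ideal and anti-ideal values are taken from the points themselves, as in a payoff matrix, and each criterion's gap is normalized before weighting. Function and variable names are ours, not Romero ® 's.

```python
def normalized_distances(points, weights):
    """Weighted, normalized L1 and L-infinity distances from each
    efficient point to the ideal point (the per-criterion best values).
    Criteria are assumed to be maximized."""
    ideal = [max(p[j] for p in points) for j in range(len(weights))]
    nadir = [min(p[j] for p in points) for j in range(len(weights))]
    results = []
    for p in points:
        # normalize each criterion's gap to [0, 1] before weighting
        gaps = [weights[j] * (ideal[j] - p[j]) / (ideal[j] - nadir[j])
                for j in range(len(weights))]
        results.append((sum(gaps), max(gaps)))  # (L1, L_inf)
    return results

# Three illustrative efficient points for two conflicting criteria.
points = [(10.0, 2.0), (7.0, 6.0), (2.0, 10.0)]
dists = normalized_distances(points, weights=[0.5, 0.5])
best_l1 = min(range(len(points)), key=lambda i: dists[i][0])
best_linf = min(range(len(points)), key=lambda i: dists[i][1])
```

The indices best_l1 and best_linf delimit the compromise set; Romero ® obtains the same two minima with SQL over the stored results instead of in-memory lists.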

Romero ® Architecture
This section describes Romero ® architecture and embedded mathematical programming. Most importantly, this section focuses on how the system guarantees the interactive MCDM principles.
According to Kanojiya and Nagori [54] guidelines, a DSS, like any other type of model-driven system, should include at least these general components: (i) a user interface, (ii) a database, knowledge base, or data repository to handle inputs and outputs, (iii) a model base, and (iv) a rule engine. Following these guidelines, Romero ® comprises the general components shown in Figure 6. In the optimization modules, while the optimization for each criterion is running, Romero ® prepares the results to calculate the payoff matrix and automatically sets the parameters for the Pareto frontier. Two steps are needed to use the constraint method: (1) get the range from the minimum and maximum values of all attributes and (2) use these range values as constraints. Romero ® can recreate a solution when decision-makers choose any point along the Pareto frontier because it saves all decision variable results. To calculate the two points Min(L1) and Min(L∞), Romero ® runs a minimization SQL query on the results stored in the database.
The main objective of the inference engine iGen is to generate the indexes i (interventions) of the model described in Appendix A. The initial area information and rules regarding changes in a specific forest project feed the engine. iGen's first step is to create a structure according to the initial area. The possible transitions are generated based on the rules and the initial situation of the forest. The last step schedules the possible transitions along the horizon using the information about periods and ages. Scheduled transitions get the name of intervention.
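The rule-driven generation performed by iGen can be sketched as follows. The rule set follows the text (clear-cuts cl1/cl2/cl3 re-establish GenMat1/2/3, age resets to zero, harvesting is allowed only at certain ages), but the data layout, function names, and the small example values are assumptions of ours, not iGen's implementation.

```python
# Minimal sketch in the spirit of iGen: given a unit's initial age and the
# allowed harvesting ages, schedule every feasible clear-cut transition
# over the planning horizon by exploring reachable (period, age) states.

HARVEST_AGES = {6, 7}          # business-as-usual harvesting ages
TRANSITIONS = {"cl1": "GenMat1", "cl2": "GenMat2", "cl3": "GenMat3"}

def generate_interventions(unit, initial_age, horizon):
    """Enumerate (unit, period, age, transition, new_genmat) tuples.
    A clear-cut may happen whenever the stand reaches an allowed age;
    afterwards the age resets to 0 and any genetic material may follow."""
    interventions = []
    frontier = {(0, initial_age)}  # states before period 1
    seen = set()
    while frontier:
        period, age = frontier.pop()
        if (period, age) in seen or period >= horizon:
            continue
        seen.add((period, age))
        nxt_period, nxt_age = period + 1, age + 1
        if nxt_age in HARVEST_AGES:
            # schedule a clear-cut into each genetic material
            for name, genmat in sorted(TRANSITIONS.items()):
                interventions.append((unit, nxt_period, nxt_age, name, genmat))
            frontier.add((nxt_period, 0))        # harvested: age resets
        frontier.add((nxt_period, nxt_age))      # not harvested: keep growing
    return interventions

# A stand aged 5 over a 3-period horizon can be cut at age 6 (period 1)
# or age 7 (period 2), each into three genetic materials.
result = generate_interventions("U1", 5, 3)
```

Each scheduled transition corresponds to one intervention index i of the model in Appendix A; the real engine additionally applies the cycle, rotation, and species attribute changes described later in the text.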

The first component, the database, deals with many types of information. A SQL Database Management System controls information such as parameters, results, and rules. The user's optimization process and set-up choices are stored in XML files. The model is saved in Pyomo© source files. The second component, the user interface, comprises a set of screens to communicate with users. The third component includes the optimization modules written in Pyomo©, Python©, and its libraries to read parameters, generate forest interventions, run the model, and save results. Finally, the fourth component, the inference engine named iGen, covers the generation of the possible forest interventions.

Romero's ® Application to the Case Study
Due to the high productivity in volume and the generally low wood densities, the case study considers that pulp mill managers require higher densities and want to evaluate the hypothesis that longer rotations would increase mill productivity. Romero ® was configured to evaluate the consequences of harvesting the eucalyptus plantations at older ages.
The case study demands alternative prescriptions with older harvesting ages. New rules were set in iGen, allowing genetic material changes after clear-cuts, as shown in Figure 7. The transition types cl1, cl2, and cl3 represent clear-cuts; cl1 clear-cuts and re-establishes the plantation with GenMat1, cl2 with GenMat2, and cl3 with GenMat3. After a cl1 intervention, interventions of all three types (cl1, cl2, and cl3) are alternatively possible. Figure 7 also shows the ina transition type, which represents the initial forest status. After an intervention of the ina type, interventions cl1, cl2, or cl3 may follow; ina does not change the value of any forest attribute.
Figure 8 shows that the transition types cl1, cl2, and cl3 are possible in this scenario. They all end a forest cycle, reset the age to zero, renew plantations, and define the productive transition according to the corresponding production parameter.
All transition types (cl1, cl2, and cl3) trigger changes in the values of attributes c, r, and sp for the forest plantation (Figure 7). Attribute u (management unit) remains the same. However, attribute c (cycle) turns into c + 1, attribute r (rotation) turns to 1, and attribute sp (species) turns to cl1, cl2, or cl3, depending on the transition type. Furthermore, cl1, cl2, and cl3 interventions may be repeated for a set of possible harvesting ages. This case study considers three scenarios, each with a distinct set of ages (Table 2).
iGen uses rules to generate alternative forest prescriptions that can potentially meet mill requirements regarding higher wood density. Besides wood quality, the mill also imposes further requirements:
• regulated wood flow,
• genetic diversification to mitigate risk exposure to plagues and diseases, and
• production sustainability.
From period two onwards, wood delivery cannot increase by more than 10% or decrease by more than 25%. The planted area must simultaneously have (i) less than 60% covered by one single genetic material and (ii) at least 20% of each of the three genetic materials, to mitigate exposure to plagues and diseases.
Regarding production sustainability, the age class constraints were included in the model to guarantee a reasonably well-regulated plantation forest. Therefore, one age class can have up to 10% more or less area than the following age class. Table 3 summarizes the mill requirements. The initial situation, the forest managers' demands, and the mill requirements were applied by setting Romero's ® parameters and constraints. After setting the parameters and acceptable prescriptions for future management, it is possible to define the prime conflicts accurately.
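The mill requirements summarized in Table 3 can be expressed as simple feasibility checks. The thresholds follow the text (wood flow may rise at most 10% or fall at most 25% between periods; each genetic material covers between 20% and 60% of the area; an age class differs from the next by at most 10%); the function names and example values are ours, and in the actual model these appear as linear constraints, not post-hoc checks.

```python
def wood_flow_ok(deliveries):
    """From period two onwards, each delivery stays within -25%/+10%
    of the previous period's delivery."""
    return all(0.75 * prev <= cur <= 1.10 * prev
               for prev, cur in zip(deliveries, deliveries[1:]))

def genetic_mix_ok(areas_by_genmat):
    """Each genetic material covers at least 20% and at most 60%
    of the total planted area."""
    total = sum(areas_by_genmat.values())
    return all(0.20 * total <= a <= 0.60 * total
               for a in areas_by_genmat.values())

def age_classes_ok(areas_by_age):
    """Each age class has up to 10% more or less area than the next one,
    approximating a regulated plantation forest."""
    return all(0.90 * nxt <= cur <= 1.10 * nxt
               for cur, nxt in zip(areas_by_age, areas_by_age[1:]))
```

In the MOLP, these same bounds become the right-hand sides of the wood flow, genetic mix, and age-class constraint rows applied to the decision variables.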
MaxNPV (maximize net present value), listed among the available criteria in Romero ® (Table 4), was chosen to represent forest managers' aspirations, while mill managers' aspirations are represented by MaxMillPrd (maximize mill production). MaxNPV and MaxMillPrd are determined using the accounting variables xTotalNPV and xTotalMillPrd. Additionally, one more criterion represents environmental concerns, since environmental issues pressure both sides: MaxStck (maximize final stock) expresses the wood stock at the end of the planning horizon, employing the accounting variable xTotalStock.
The objective functions built to calculate the three objectives (MaxNPV, MaxMillPrd, MaxStck) depend on the decision variables x_i, which refer to the area in which intervention i will occur. Romero ® was used to set up the parameters, handle parameter combinations, run different scenarios, and report the results.
Finally, upon the definitions mentioned in previous paragraphs, it is possible to summarize the model that represents the case study. Statement (4) is a brief schema of the resulting formulation, which shows a multiobjective linear programming (MOLP) model designed to address the described case study.
The first sentence of statement (4) states that this model seeks efficient solutions using the variables x_i. Likewise, the following list of constraints defines feasibility, which depends on the same decision variables x_i.

Results
This section initially presents a subset of three scenarios created by iGen combining different ages (Table 2), limiting the choice of ages considering only short, medium, or long rotations as forest management prescriptions. For each scenario, the corresponding multiobjective linear programming (MOLP) problem optimized all three criteria, one at a time. Table 5 shows the main results of the three scenarios. Due to the conflicting nature of these two criteria, as mill production increases, the forest net present value (NPV) decreases. Table 5 shows the differences observed between the highest and lowest NPV as Forest NPV Reduction. The differences observed between the highest and the lowest pulp production are reported as Pulp Production Reduction.
The results demonstrate how the conflicting interests expressed by forest managers and industrial managers can produce fairly different outcomes. Consequently, the multi-criteria approach is adequate for evaluating the case study. Additionally, Romero ® could support the simulated multi-stakeholder teams, starting by defining the minimum and maximum ages at which foresters would be allowed to harvest.
Considering the three scenarios presented in Table 5, and as described in Section 2, each simulation starts by confirming the parameters and asking Romero ® to run, and finishes when Romero ® shows the results window. Table 6 summarizes the run information for each scenario. Data processing proved time-consuming: the more variables and constraints Romero ® had to handle, the longer the processing took. The most time-consuming step is the concrete-model-building routine, in which Pyomo instantiates a concrete model from the abstract model; naturally, the larger the model, the longer it takes to build.
As observed in Table 6, the short-rotation scenario took 21 min to build, while the smallest model took only three minutes. Although lengthy, this build time is not critical at the phase when stakeholders seek consensus. At that point, with the model already built, Romero ® cycles through solving, changing the values of a few parameters to define the next objective, and solving again. These steps run much faster than model building (6-7 s for the smallest problem and 18-19 s for the biggest).
Scenario number 217 illustrates the analysis provided by Romero ® to facilitate the search for consensus between stakeholders. The third criterion, maximize final stock, was included to add more substance to the analysis. It considers the consequences of regulating the annual pulpwood demand and maximizing the standing wood stock at the end of the planning horizon. NPV and pulp parameters were only calculated and left unconstrained.
A payoff matrix was built to evaluate the scenario and to show in detail the conflicting results in terms of NPV (MaxNPV), pulp production (MaxMillPrd), and final wood stock (MaxStck). Table 7 shows the resulting payoff matrix. The test was done using the midrange-rotations scenario, considered a good starting point due to its conciliatory nature of not tending to either of the two extremes (very short or very long rotations); it allows harvests in 6-, 7-, and 8-year-old stands. The first column of Table 7, "Clear-cut area", refers to the summation of all areas harvested during the planning horizon, and the "Production" column reports the total volume produced in these cuts. From these two pieces of information, it is possible to obtain the average productivity over the planning horizon.
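The payoff-matrix construction can be sketched as follows: optimize one criterion at a time, then evaluate every criterion at each single-criterion optimum. In this hedged toy example the "solver" merely scans a small invented list of efficient solutions; in Romero ® each row comes from solving the full MOLP.

```python
# Toy candidate efficient solutions as (NPV, MillPrd, Stck) tuples.
# All numbers are invented for illustration only.
solutions = [
    (266.4, 4.70, 1.10),
    (240.0, 5.02, 1.00),
    (250.0, 4.85, 1.30),
]
criteria = ["MaxNPV", "MaxMillPrd", "MaxStck"]

def solve_max(k):
    """Stand-in for an LP solve: return the solution maximizing criterion k."""
    return max(solutions, key=lambda s: s[k])

# Row r of the payoff matrix: the values of all criteria at the optimum of
# criterion r. The diagonal holds each criterion's best attainable value.
payoff = [solve_max(k) for k in range(len(criteria))]
```

Reading the matrix row by row makes the conflicts explicit: maximizing one criterion degrades the others relative to their own optima.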
It is assumed that foresters and mill managers agree on how far they would like to be from the final stock constraint. Therefore, from this point on, for simplicity, the results are shown considering only the two original groups of stakeholders. Figure 9 shows the annual flow of pulpwood when each criterion is maximized. It is worth noting the striking differences in how the plantations evolve along the planning horizon in terms of the area of GenMat groups distributed annually. Visibly, the MaxNPV criterion leads the management units toward a more extensive share of GenMat3 (the lower-density group). On the other hand, the MaxMillPrd strategy gradually replaces GenMat3 with GenMat1, managing the plantations toward a more significant share of GenMat1 in the future. Figure 10 shows the quality of the pulpwood in terms of the age classes at which the logs were harvested. Since density increases with age, conversion rates were higher and the use of logs from longer rotations would be more significant. More substantial participation of logs from older trees seems to be a trend in the future but is not so evident at the beginning and middle of the horizon.
Nevertheless, maximizing pulp changes the annual composition of logs in terms of age classes. An agreement would not lie at the extreme points; therefore, the two stakeholder groups might be willing to examine the trade-offs between the two criteria. Twenty points were calculated between the two extremes to trace the Pareto frontier between the two criteria (Figure 11).
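One common way to trace a two-criterion frontier is to sweep the weight between the two objectives and re-solve at each step, keeping the nondominated outcomes. The sketch below imitates that with a toy candidate list instead of an LP solver; all numbers are invented, and this is only an illustration of the technique, not Romero ® 's code.

```python
# Candidate (NPV, pulp) outcomes; (90, 50) is dominated by (95, 55).
candidates = [(100.0, 40.0), (95.0, 55.0), (80.0, 70.0), (60.0, 80.0),
              (90.0, 50.0)]

def frontier(points, steps=20):
    """Weighted-sum scan: for each weight on NPV, keep the best candidate."""
    found = []
    for s in range(steps + 1):
        w = s / steps  # weight on NPV; (1 - w) goes to pulp production
        best = max(points, key=lambda p: w * p[0] + (1 - w) * p[1])
        if best not in found:
            found.append(best)
    return found

pts = frontier(candidates)  # runs from the pulp optimum to the NPV optimum
```

Dominated candidates never win for any weight, so they drop out automatically; the surviving points are the efficient solutions that the negotiation can move along.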
Figure 11. Pareto frontier.
The Pareto frontier can give further support to negotiation. It visually expresses the tendency of the consensus solution toward one stakeholder or the other. It may also help as a tool during the discussion to mitigate eventual tendencies of one part becoming more influential than the other.
At this stage of the analysis, compromise programming (CP) can lead one step closer to consensus. Figure 12 shows the compromise set, considering that the relative importance of forest and industrial managers is even. Table 8 shows how the distances were calculated.
NPV and pulp production assume their highest values at the theoretical optimum point, reported as "Optimum" in Figure 12: R$266,445,003 and 5,022,290, respectively. The distance function calculates the distance from any point on the Pareto frontier to this theoretical optimum point.
Line segments a and c (Figure 12) are the distances from points L∞ and L1 to the industrial managers' best value. Likewise, b and d are the distances from L∞ and L1 to the forest managers' best value. The sum of the distances from point L∞ to the Optimum point is a + b; the sum of the distances from point L1 to the Optimum point is c + d. Herein, these distances (a, b, c, and d) from the Optimum point are called losses.
L1 is the point where the sum of both losses is at its minimum along the Pareto frontier (line segments c + d); in other words, L1 is the point where the stakeholders lose the least together. On the other hand, L∞ minimizes the maximum loss between them, which corresponds to line segment a in Figure 12. At L∞, there are no noteworthy losses for any of the stakeholders. Table 8 shows details of only three points of the Pareto frontier: the 9th point is L∞, and the 11th point is L1. The columns "Best Value" and "Worst Value" repeat the optimum point's values. The "Interval" column is the difference between the best and worst values. The "Distance" column presents the distance between the point on the Pareto front and the line that crosses the optimum point; these distances are calculated according to statement (A2).
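With losses defined as normalized deviations of each criterion from its best value, selecting the L1 and L∞ points over a frontier is a few lines of code. The sketch below uses invented frontier, ideal, and anti-ideal values purely to illustrate the selection rule, not the paper's actual figures.

```python
# Toy (NPV, pulp) efficient points plus ideal and anti-ideal reference points.
frontier = [(100.0, 40.0), (96.0, 62.0), (85.0, 68.0), (60.0, 80.0)]
best = (100.0, 80.0)   # theoretical optimum ("Optimum" point)
worst = (60.0, 40.0)   # anti-ideal point, used to normalize each criterion

def losses(p):
    """Normalized deviation of each criterion from its best value."""
    return tuple((best[k] - p[k]) / (best[k] - worst[k]) for k in range(2))

# L1: minimize the SUM of losses (stakeholders lose least together).
l1_point = min(frontier, key=lambda p: sum(losses(p)))
# L-infinity: minimize the LARGEST individual loss (most balanced point).
linf_point = min(frontier, key=lambda p: max(losses(p)))
```

Note that the two rules can select different points: the L1 point favors total efficiency, while the L∞ point favors balance between the stakeholders, which is exactly why the compromise set spans the stretch between them.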
According to CP concepts, the utility curve touches the Pareto frontier between L∞ and L1. Point 10 of Table 8, located between L∞ and L1 (Figure 12), can be considered a good compromise in this case study. Even if the decision-makers agree to select point 10, they might need a more in-depth understanding of this particular solution; therefore, point 10 should be examined to determine whether it meets the decision-makers' needs.
The graphs shown in Figures 9 and 10 are rebuilt using data from the solution of the Pareto frontier point 10. While previous figures show the extreme solutions, in Figures 13  and 14, it is possible to see a balanced solution that might be taken as a compromise.
At this particular point, the solution strategy emphasizes the adoption of GenMat2, the intermediate group of genetic material in terms of forest productivity in volume, wood density, and industry efficiency. Gradually, the plantations with GenMat3 and GenMat1 were replaced with GenMat2 until it reached the maximum allowed per group, 60%. Meanwhile, the other two groups, GenMat1 and GenMat3, fall to the lowest allowed level per group, 20%. The solution strategy leads to younger harvest ages at the beginning of the planning horizon and gradually moves to 100% of the median age.

Discussion
Among the scenarios presented in the previous section, the differences between the best and worst outcomes from the forest managers' perspective cause losses in NPV ranging from 14.51% to 16.97%. On the opposite side, from the mill managers' perspective, the differences between the best and worst outcomes cause decreases in pulp production ranging from 6.29% to 7.85%. Table 5 presents interesting and unexpected results. The first is that the most significant reduction in NPV occurs in the shorter-rotation scenario 216, precisely the regime that prevails in Brazil. Another striking result is that total NPV increases in the midrange and longer-rotation scenarios. That runs counter to the current preference for shorter rotations expressed by Brazilian forest managers and gives mill managers leverage in favor of optimizing pulp production.
Contrary to mill managers' expectations, the best mill performance does not come from longer rotations but from the possibility of cutting at five years old at the beginning of the horizon and replacing the genetic material as soon as possible. Scenario 217 shows this effect clearly ( Figure 10). It is assumed that, in scenario 216, this effect was more intense because the model allowed cutting at five years old.
These results indicate that five-year-old cuts are good, paradoxically, only for the mill due to the opportunity to replace GenMat3. An in-depth analysis of forest yield curves might show that the maximum MAI is closer to seven years old than five years old.
It is worth mentioning that no improvement happens when age nine is added to the alternative cut ages (Table 5). One of the reasons might again be the need to replace GenMat3. However, there is a more significant reason in this context: Figure 4 shows that there are 2000 ha at age five and 2000 ha at age six, so at the beginning of the planning horizon it would not be possible to cut at ages eight or nine while achieving the required production flow. Cuts at older ages only become possible from the fourth period on (Figure 10).
Therefore, the results demonstrate greater gains from changing the genetic material than from cutting at older ages, given the industrial efficiency curves (Figure 3). Both extreme solutions show that the model drives the replacement of genetic material according to the criterion being maximized. Even in the compromise solution, the model led to the maximum possible share, a 60% area of the intermediate genetic-material group.
The results illustrate how CP embedded in an FMDSS with the characteristics of Romero ® can support large-scale, multi-stakeholder decision processes. The approach can address any harvest-schedule-based problem involving diverse objectives, provided the parties involved in the negotiation are willing to seek consensus interactively, driven by the compromise-programming principle.
Romero ® can guide stakeholders to a compromise solution in which they assume responsibility for the decisions' impacts, as prescribed by Bruña-García and Marey-Pérez [4] and other authors [20,21].
Forest monitoring information and knowledge usually indicate improvements in how the forest is managed [55]. The presented case study exemplified this kind of dynamic through the rotation-age analysis, in which stakeholders ask for a new shape of forest management; moreover, the attributes calculated under the multicriteria conceptual framework provide the stakeholders with the means to test their hypotheses and achieve their aims.
Despite the literature's recommendations [4,19,56-58], only a few FMDSSs have implemented MCDM, including interactive participatory planning [17,59]. Regarding the application of MCDM techniques, the CP concepts embedded in Romero ® help fill the gap in the use of MCDM techniques in FMDSSs capable of supporting participatory processes.

Conclusions
From this discussion, we conclude that the initial hypothesis that cutting at older ages could improve mill production was not confirmed. Secondly, changes in rotation length and genetic material take time, and decisions should be made involving both long-term interests: forest and mill. Thirdly, Romero ® 's support was essential to consider and inspect all aspects of the solution; it was an information-based decision-making process, as advocated in the literature.
Romero ® and its embedded models and methods, when applied to an industrial forest plantation case involving interactions among groups with diverse interests, demonstrated that incorporating MCDM techniques in a participatory process can improve the decision-making process and the quality of the decisions. Expressly, the hypothesis proposed in the case study that longer rotations could improve mill production, as expected by the mill managers, could not be confirmed. Nevertheless, Romero ® led the compromise to an unexpected point where the possibility of harvesting at younger ages allowed an earlier replacement of less efficient genetic material.
Additionally, a specific MOLP integrated into an MCDM framework can produce the necessary elements for a comprehensive multicriteria analysis, efficiently reporting (i) payoff matrices, (ii) the efficient solutions of the Pareto frontier, and (iii) the compromise set. Therefore, the compromise programming approach proved suitable for dealing with cases in which different stakeholders have conflicting views on managing production forests. Furthermore, Romero ® was shown to be capable of promoting interaction among stakeholders while seeking consensual decisions.
The Romero ® approach can be summarized as the application of rigorous multicriteria principles to a MOLP harvest-schedule formulation, through an interface that promotes compromise and consensual planning. Furthermore, Romero ® demonstrated through the case study that strategic scenario analysis remains an essential tool to evaluate the future impacts of short-term decisions. This case study's model demands an empirical correlation between after-digester productivity and wood age or other wood attributes. Further and more specific case studies should be conducted to increase the integration between pulp-mill requirements and forest management, which will undoubtedly contribute to the effective use of the natural resources dedicated to pulp production.

Data Availability Statement: Publicly available datasets were analyzed in this study. The data can be found here: romeromulticriteria.com/RomeroThesis.db; romeromulticriteria.com/RomeroThesis-Results.xlsx; romeromulticriteria.com/RomeroThesis.zip. The database is an SQLite file that can be opened using the free software SQLite Studio (RomeroThesis.db), and the spreadsheet is an Excel file containing the analysis of the results (RomeroThesis-Results.xlsx).

Appendix B. Compromise Programming Definitions
The ideal point is the point where all criteria attain their best levels; in a context of conflict, this point is never achievable because all criteria cannot assume their best levels at the same time. Efficient solutions closer to the ideal point are better than those farther away, which is precisely the content of Zeleny's axiom of choice and the fundamental concept of compromise programming. Moreover, being as close as possible to the ideal point is the rationale of human choice [49,50].
Therefore, to go further in defining a compromise, there are questions to answer, such as (a) how to calculate the distance to the ideal point [52], and (b) what to do if the decision-makers prefer staying closer to the best level of one attribute than to another's. Additionally, to avoid meaningless comparisons between attribute values due to their different measurement units, the attribute values must be normalized [52].
The normalized distance from an efficient point along the Pareto frontier to the ideal point can be determined as follows:

$$N_a(x) = \frac{f_a^* - f_a(x)}{f_a^* - f_{a*}} \qquad (A3)$$

where $f_a^*$ is the ideal (best) value of attribute $a$, $f_{a*}$ is its anti-ideal (worst) value, and $f_a(x)$ is its value at solution $x$. The set of techniques to measure this distance within the CP context is called a "family of distance functions" $L_p$. This text does not go deep into the $L_p$ definitions; however, a clear understanding of the distance concepts [49] is crucial to sanction the use of the two limits of the compromise set: Min(L1) and Min(L∞).
The starting point is the geometric distance defined by the Pythagorean theorem, the minimum distance between two points in two-dimensional space, named the "Euclidean distance". It is easy to generalize the Euclidean distance to an n-dimensional space, where the distance between two points $x^1 = (x^1_1, x^1_2, \ldots, x^1_n)$ and $x^2 = (x^2_1, x^2_2, \ldots, x^2_n)$ can be written as follows:

$$d(x^1, x^2) = \sqrt{\sum_{i=1}^{n} \left(x^1_i - x^2_i\right)^2} \qquad (A4)$$

However, the most accepted generalization of the n-dimensional Euclidean distance [49] is given by the following expression:

$$L_p(x^1, x^2) = \left[\sum_{i=1}^{n} \left|x^1_i - x^2_i\right|^p\right]^{1/p} \qquad (A5)$$

where $L_p$ represents the family of distance functions. For each value of $p$, there is a different distance value $L_p$ for the same pair of points $x^1$ and $x^2$. A particular case of $L_p$ is $p = 2$, when the Euclidean distance is obtained. A second particular case is $p = 1$, called the "Manhattan distance", which, in a two-dimensional space, is the sum of the lengths of the catheti:

$$L_1(x^1, x^2) = \sum_{i=1}^{n} \left|x^1_i - x^2_i\right| \qquad (A6)$$

Therefore, $L_1$ is the largest distance between $x^1$ and $x^2$, while $L_2$ is the shortest one, according to the Pythagorean theorem.
When the metric $p$ tends to ∞, $L_\infty$ is called the "Chebyshev distance" and tends to the value calculated as follows:

$$L_\infty(x^1, x^2) = \max_{i} \left|x^1_i - x^2_i\right| \qquad (A7)$$

Statement (A7) means that $L_\infty$ is given by only one deviation of the set of n deviations: the largest one. If $L_1$ and $L_\infty$ are calculated for each point along the Pareto frontier, it is possible to determine the two shortest distances, Min(L1) and Min(L∞).
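The distance family above translates directly into code. The sketch below is a minimal, self-contained implementation of $L_p$ for finite $p$ and for the Chebyshev limit, checked on the classic 3-4-5 right triangle.

```python
# Minimal implementation of the L_p family of distance functions:
# p = 1 gives the Manhattan distance, p = 2 the Euclidean distance, and
# p -> infinity the Chebyshev distance (the single largest deviation).

def lp_distance(x1, x2, p):
    """L_p distance between two n-dimensional points; pass float('inf') for L-inf."""
    devs = [abs(a - b) for a, b in zip(x1, x2)]
    if p == float("inf"):
        return max(devs)  # Chebyshev: only the largest deviation counts
    return sum(d ** p for d in devs) ** (1.0 / p)

a, b = (0.0, 0.0), (3.0, 4.0)
manhattan = lp_distance(a, b, 1)
euclidean = lp_distance(a, b, 2)
chebyshev = lp_distance(a, b, float("inf"))
```

For the same pair of points the values decrease as $p$ grows ($7 \ge 5 \ge 4$ here), which is why $L_1$ and $L_\infty$ bound the compromise set from both sides.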
The Yu theorem, presented in the literature in 1974 [49], demonstrated that, for a two-criterion problem, the compromise solutions calculated with $L_p$ (statement (A5)) are contained in a "compromise set", a subset of the set of efficient solutions (the Pareto frontier). In addition, the limits of this subset are given by $L_1$ and $L_\infty$ (statements (A6) and (A7)). Two years later, Freimer and Yu demonstrated similar conditions for problems with more than two criteria. Moreover, according to traditional economic analysis and the theorems of Yu, the utility functions intercept the Pareto frontier at one of the points of the compromise set.

$L_1$ is a summation of the deviations of each attribute value from its ideal value (statement (A6)). The minimum $L_1$ is the maximum-efficiency point, where the best aggregate value of the attributes is achieved. However, this point may be unbalanced; that is, at that point, one attribute can have much more value than the others. Moreover, if the decision-makers have preferences that can be translated into weights (from 0 to 1), $L_1$ can be represented mathematically, using the previous definitions of statements (A3) and (A6), as follows:

$$L_1 = \sum_{a} w_a N_a(x) \qquad (A8)$$

where $w_a$ is the weight of attribute $a$ and $N_a(x)$ is the normalized distance of statement (A3). On the other hand, to obtain $L_\infty$, it is necessary to select the maximum among the attributes' individual deviations; that is, only the largest weighted deviation of an attribute from its best value is considered. Considering all previous definitions, $L_\infty$ can be represented mathematically as follows:

$$L_\infty = \max_{a} \left\{ w_a N_a(x) \right\} \qquad (A9)$$

A linear programming model to calculate Min(L1), presented by Romero (1993), can be written, based on statements (A3) and (A8), as follows:

$$\text{Min } L_1 = \sum_{a} w_a N_a(x) \quad \text{subject to: } x \in F$$

where $F$ is the feasible set of the original model.
Additionally, to calculate Min(L∞), a linear programming model can be written as follows:

$$\text{Min } L_\infty = d \quad \text{subject to: } w_a N_a(x) \le d \ \ \forall a, \quad x \in F$$