Testing the Capability of Low-Cost Tools and Artiﬁcial Intelligence Techniques to Automatically Detect Operations Done by a Small-Sized Manually Driven Bandsaw

Research Highlights: A low-cost experimental system was developed to enable production monitoring in small-scale wood-processing facilities by means of sensor-collected data and the implementation of artificial intelligence (AI) techniques, which provided accurate results for the most important work operations. Background and Objectives: The manufacturing of wood-based products by small-scale family-held businesses is commonly affected by a lack of monitoring data that, on the one hand, may hamper the decision-making process and, on the other hand, may lead to lower technical efficiency that could result in business failure. The long-term performance of such manufacturing facilities is limited because data collection and analysis require significant resources, preventing approaches that could be pursued to improve competitiveness. Materials and Methods: An external sensor system composed of two dataloggers (a triaxial accelerometer and a sound pressure level meter) was used in combination with a video camera to provide the input signals and meta-documentation for training and testing an artificial neural network (ANN), in order to check the accuracy of the automatic classification of the time spent in operations. The study was based on a sample of ca. 90 k observations collected at a frequency of 1 Hz. Results: The approach provided promising results on both the training (ca. 20 k) and testing (ca. 60 k) datasets, with global classification accuracies of ca. 85%. The events characterizing effective sawing, which requires electrical power, were recognized even better, reaching a classification accuracy of 98%. Conclusions: The system requires only low-cost devices and freely available software, and could feed data to local computers by direct connection to the devices. As such, it could collect, analyze and plot production data that could be used to maintain the competitiveness of traditional technologies.
Project administration, M.V.M., E.I. and S.A.B.; resources, M.C. and M.V.M.; software, S.A.B.; supervision, S.A.B.; validation, S.A.B.; visualization, M.V.M. and E.I.; writing—original draft, M.V.M., E.I. and S.A.B.; writing—review and editing, S.A.B.


Introduction
A sustainable provision of high-quality wood-based products to the market requires a supply chain designed to overcome bottlenecks and to allocate commodity and resource flows efficiently. It should enable a diversified and customized offer, as well as the resilience of the different categories of stakeholders involved in it. From the forest to the customers, the wood supply chain follows a path that links several stakeholders, operations, logistics and transactions [1]. As such, now more than ever, people working at different management levels need high amounts of reliable data.
Globally, there are many models of saws used to process wood, available in different sizes and with different levels of integrated technology. A typical difference between them rests in their capability to automatically monitor production: some integrate such functions, while others are much simpler by construction and do not hold such capabilities. In addition, some manufacturers provide production monitoring systems as an option that comes at a supplementary cost, and many entrepreneurs working in the industry do not purchase them for reasons of limited financial availability and cost saving. For small-scale businesses, it is quite typical to use lower-level technology in such operations and, in addition, much of the equipment used may be manually driven, even though it is electrically powered. This situation prevents extensive data gathering on performance, which limits understanding of the factors that affect or drive it. Moreover, in well-established industries, the analysis of big data is of crucial importance for making decisions and balancing resources, a fact that also applies to small entrepreneurs.
This work is experimental in nature and aimed to test whether triaxial accelerometers and sound pressure level sensors can be used as low-cost data collectors to monitor the production of a simple, small-sized, locally manufactured bandsaw held by a small-scale family business. The concept behind the work was to document the signals produced by the two types of sensors, complemented by video surveillance, and then to use artificial intelligence (AI) techniques to train and test an artificial neural network (ANN) and to see to what extent the signals could be used to monitor production. The choice of the bandsaw was also based on the model's wide use in small businesses in the region.


Facility Description and Machine's Functions
The data needed in this study were collected in a small-scale family-held wood-processing facility located in Harghita county (Romania), in 2018. A full description of the facility is given in Figure 1, along with the main inputs and outputs of the production and the machines used in the sawmilling operations.
Figure 1. Description of the wood processing facility. Legend: in white: 1-log feedstock, 2-processed log, 3-machine (bandsaw), 4-processed planks, 5-regular circular saw, 6-processed products, 7-residues; in green: 1-transversal wooden strut, 2-fixing traverse, 3-rolling rail, 4-metallic cover, 5-blade, 6-frame, 7-water tank, 8-lever.
The sawmilling machine is manually driven and adjusted, requires one worker to operate it, and is electrically powered. A common feature of machines in this class, however, is the set of technical functions they enable, irrespective of whether they are mechanically or manually operated. The whole range of such machines provides cutting functions used to detach parts from the logs by a forward-backward movement of the cutting frame, supported by the possibility of vertical adjustment. The latter enables one to set the cutting thickness at the desired dimensions by raising or lowering the cutting blade to accommodate them. For the machine observed in this study, only the active feeding of the blade into the logs was electrically powered, since the worker operating the machine turned on the engine only for this phase of sawmilling. The rest of the work elements were manually powered and consisted of forward and backward movements of the cutting frame, as well as height adjustments of the frame. As a rule, these were completed with the engine turned off.

Data Collection and Processing
Data collection was completed using three devices. An Extech® 407760 sound level meter and an Extech® VB300 triaxial accelerometer (Extech Instruments, FLIR Commercial Systems Inc., Nashua, NH, USA) were used to collect the raw input signals (S, sound pressure level, dB(A), and A, acceleration, g) used in this work. They were set up to collect observations at a sampling rate of 1 Hz. The accelerometer was mounted on the machine's frame, while the sound pressure level datalogger was mounted on the worker's helmet to also enable the collection of data on noise exposure. However, noise exposure was not addressed in this study. The full procedures used for setup, data transfer and data pairing, as well as the capabilities and dimensional features of the devices used, are described in [27] and [21,22,23], respectively. A small-sized Schwartz B1080 video camera was placed on a wall of the facility so that the operations were covered in its field of view; it was set up to monitor the operations by continuously collecting video files at the maximum length it enabled (20 min), and the operations were surveyed for three working days by recording and saving the data on internal memory.
Back at the office, the data collected by the first two devices were organized in a Microsoft Excel® (Microsoft, Redmond, WA, USA, 2013 version) sheet; then, the video footage was used to document them by considering three types of events: cutting (C), moving (M) and pauses (P). To do so, string codes were used in conjunction with the video files played at low speed, and each observation received a code (C, M or P) depending on the event to which it was identified to belong, based on the video analysis. Cutting (hereafter Cut) covered those observations in which the engine was on and the blade was engaged in active cutting. Moving (hereafter Move) corresponded to all the events that supposed movement of the cutting frame (forward feed, backward feed, vertical adjustment) without the engine on, and pauses (hereafter Pause) consisted of events in which there was no intention to operate the observed machine, but the worker was still near it. At this point, some parts of the initial dataset needed to be removed so as to cover only those events restricted to machine use, as described in Table 1. In addition, Table 1 shows the input signals and their purpose in the framework of this study. For the acceleration signal, this study used a normalization procedure that aimed to make the data independent of the datalogger's orientation in three-dimensional space. As such, vector magnitudes (g) were used as input signals instead of the axial responses, a procedure that is straightforward because the datalogger outputs this derived signal. In the case of the sound pressure level data (dB(A)), for graphical comparison purposes, it was decided to use the datalogger's output values divided by a factor of 10. This does not alter the pattern of the original signal; therefore, the outcomes of the training and testing algorithms are also unaltered. In total, 78,189 observations were retained for the training (20,050) and testing (58,139) of the ANN.
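The two preprocessing steps above (orientation-independent acceleration magnitudes and the dB(A)/10 scaling) can be sketched as follows; the function names are illustrative and not part of the dataloggers' software:

```python
import math

def vector_magnitude(ax: float, ay: float, az: float) -> float:
    """Orientation-independent acceleration magnitude (g) from triaxial readings."""
    return math.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def scale_sound(db: float) -> float:
    """Divide the sound pressure level by 10, as done for graphical comparison."""
    return db / 10.0

# A datalogger resting flat would read ~1 g on a single axis
mag = vector_magnitude(0.0, 0.0, 1.0)
```

Because the magnitude depends only on the vector length, the same value is obtained however the datalogger is oriented, which is exactly the independence the normalization aims for.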
Before doing so, however, a median filtering procedure with a window size of 3 points (observations) was applied to remove impulse noise and some data collection errors (see Figure 2, left side). This filter was chosen for its ability to preserve the edges of signals in the time domain (e.g., [28,29]).
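A 3-point median filter of the kind described above can be sketched as follows; this is a minimal illustration, and the edge handling (first and last samples passed through unchanged) is an assumption rather than the study's documented procedure:

```python
def median_filter3(signal):
    """Apply a 3-point median filter; the two edge samples are kept as-is."""
    if len(signal) < 3:
        return list(signal)
    out = [signal[0]]
    for i in range(1, len(signal) - 1):
        # median of the 3-sample window centered on i
        out.append(sorted(signal[i - 1:i + 2])[1])
    out.append(signal[-1])
    return out

# An isolated spike (impulse noise) is removed, while the genuine step edge
# from 1.0 to 5.0 is preserved
filtered = median_filter3([1.0, 1.0, 9.0, 1.0, 1.0, 5.0, 5.0, 5.0])
```

This edge-preserving behavior is what distinguishes the median filter from a moving average, which would smear both the spike and the step.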

Setup of the Artificial Neural Network
The freely available Orange visual programming software (version 3.2.4.1) [30] was used to set up the ANN for training and testing. The rectified linear unit (ReLU) was adopted as the activation function because it is assumed to solve nonlinear problems with high performance (e.g., [31,32]); it is worth mentioning, however, that the software enables the implementation of the most common activation functions. The Adam solver (a stochastic gradient-based optimizer) was chosen mainly due to its low training costs [33], and the L2 penalty regularization term was set at 0.0001. Then, the A MTRAIN, S MTRAIN and AS MTRAIN signal datasets (see definitions in Table 1) were used to train the ANN and to produce the performance indicators needed to check which of the signals was best. The indicators used to check the ability of the signals to train the ANN, as well as to evaluate the performance of the ANN-developed model on the test signals, were those commonly described in similar studies [34,35], from which the area under the curve (AUC), classification accuracy (CA), precision (PREC) and recall (REC) were retained and used as references in this study. The ANN was set up to hold three hidden layers of 100 neurons each and to run 1,000,000 iterations for each training signal dataset. This setup was rather an educated guess that tried to maximize the performance of the ANN in testing at the expense of computational cost; for this reason, the time needed to train the ANN on the three signals was also counted.
Choosing the number of hidden layers and neurons is seen more as an art than a science; even though some methods described in the available literature propose criteria for choosing the number of neurons and hidden layers [36,37], to the best of our knowledge, finding the best practices for given cases is a problem that is yet to be solved. Training and scoring were completed by cross-validation assuming a stratified approach and a number of folds set at 20. After the training procedure, the performance metrics were evaluated and the best model was saved for further use in the testing phase, which was applied to its corresponding test signal dataset. Based on the tested data, the analysis went into more detail to see which events, and in what amounts, were correctly classified, as well as which of them were misclassified as other events. For that, the data were imported from the software into Microsoft Excel® (Microsoft, Redmond, WA, USA, 2013 version), and a detailed analysis was carried out at the event type and classification outcome levels. Since the refined signals were used in training and testing, for balancing purposes, an analysis was carried out to see the proportion of the events in the time domain of the refined, training and testing signals. All the supplementary analyses described above, as well as the basic statistics of the refined signals and the artwork shown in the results, were carried out or produced in Microsoft Excel®.
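The ANN configuration and the 20-fold stratified cross-validation described above can be approximated with comparable settings in scikit-learn; this is a sketch under stated assumptions, not the study's Orange workflow. The data below are random placeholders for the AS MTRAIN dataset, and `max_iter` is drastically reduced from the study's 1,000,000 iterations so the example runs quickly:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder stand-in for AS_MTRAIN: two features per observation
# (acceleration magnitude, scaled sound level) and three classes (Cut/Move/Pause)
rng = np.random.default_rng(42)
X = rng.normal(size=(150, 2))
y = np.tile([0, 1, 2], 50)  # 50 observations per class, so every fold is populated

# Settings mirroring the study: ReLU activation, Adam solver, L2 alpha of 0.0001,
# three hidden layers of 100 neurons each; max_iter reduced here for speed
clf = MLPClassifier(hidden_layer_sizes=(100, 100, 100), activation="relu",
                    solver="adam", alpha=0.0001, max_iter=50, random_state=42)

# Stratified cross-validation with 20 folds, as in the study
cv = StratifiedKFold(n_splits=20, shuffle=True, random_state=42)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(round(scores.mean(), 3))
```

On random data like this, the mean accuracy hovers around the chance level of one third; with the study's real, partially separable signals, the same setup produced the accuracies reported in Table 3.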
The computer architecture on which the ANN was set up, trained and tested had the following parameters: system type, Alienware 17 R3; processor, Intel® Core™ (Intel, Santa Clara, CA, USA) i7-6700HQ CPU, 2.60 GHz, 2592 MHz, 4 cores, 8 logical processors; installed physical memory (RAM), 16 GB; operating system, Microsoft Windows 10 Home. However, one should note that the training phase is the most computationally intensive one; once a model is settled, the testing phase takes much less time.
S REF (see definition in Table 1) enabled a visual separation of the Cut events, a fact that was generally true for all the dataset utilized in this analysis. A REF (see definition in Table 1), on the other hand, provided less separability in its pattern, assuming here a linear approach. These two phenomena may be explained by the physics and mechanics of the observed events. In the case of the sound pressure level signal, the separability of Cut events was enhanced by the higher and steadier noise level produced by the interaction of the blade with the wood during such events. As such, this signal produced less variable outputs in the amplitude domain. In the case of acceleration, however, it seems that movement of the cutting frame in events such as Cut and Move interfered with the outputs, providing less linear separability. Since the accelerometer was placed on the frame, other external events could also have affected its outputs, a fact that was also true for the sound pressure level datalogger, but to a lesser extent. From this point of view, the information carried by the sound may provide better results compared to that provided by the acceleration.

Descriptive Statistics of the Refined Signal Datasets
The basic statistics of the two refined signal datasets revealed some important information that could be used to judge the separability of the data and to justify the need to filter them. Although not given as a table here, the minimum, maximum, mean and standard deviation values are presented below. In the case of A REF, Cut events were characterized by a range of values between 1.01 and 3.78 g, averaging 1.16 ± 0.12 g; the same statistics were 1.01 to 3.51 g and 1.17 ± 0.11 g for the Move events, and 1.01 to 4.98 g and 1.14 ± 0.11 g for the Pause events, respectively. In the case of S REF, they were 5.09 to 10.19 and 8.48 ± 0.48 dB(A)/10 for the Cut events, 2.82 to 10.02 and 6.26 ± 0.73 dB(A)/10 for the Move events, and 0.01 to 10.57 and 4.83 ± 1.42 dB(A)/10 for the Pause events, respectively. As such, it is obvious that the refined signal datasets provided less information assuming at least a linear separability, even though the frequencies of the observations by magnitude category were not documented in this study. Part of these effects, reflected in the main statistical descriptors (i.e., the minimum value of 0.01 for S REF in the case of Move), were also due to some impulse noise or measurement and recording errors, as shown in Figure 2.
The analysis of the true event shares in the signals used revealed the results shown in Table 2. While it is typical for many applications of ANN learning techniques to use a higher proportion of the dataset to train the model, in this work only ca. 25% of the data was used to train the ANN, based on the assumption that the computational effort should be kept to a minimum.
Table 2. Share of the true events in the signals used.
As shown in Table 2, the proportions of the true events encoded in the signal datasets used were similar, providing a good balancing of the data used in the different steps of the ANN implementation.
They also reflect the proportion of time spent on operations, which may characterize the efficiency of production typical of small-scale facilities. As shown, close to 70% of the time was used for different pauses, and only ca. 30% for operations. Of the latter, only ca. 20% of the time was used for effective cutting. Under these conditions, it is quite usual for such facilities to process less than 10 m³ per day, with a usual daily input of ca. 5 m³.

Training Results and Selection of the Model
The results of the ANN training phase, which used the three median-filtered input signals, are given in Table 3. Using both the acceleration and sound pressure level median-filtered signals (AS MTRAIN) in the training process required ca. 2.8 times more time than using only the acceleration (A MTRAIN), and ca. 2 times more than using only the sound pressure level (S MTRAIN).
The area under the curve (AUC) is a metric often used to characterize the performance of a classifier in the framework of receiver operating characteristic (ROC) graphs, and it is equivalent to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative instance (e.g., [35]). It is also often assumed that the higher the AUC, the better the performance of a classifier. From this point of view, and judging at the training signal dataset level, AS MTRAIN provided the best results, while A MTRAIN provided the poorest ones. On the same scale, S MTRAIN was close in performance to AS MTRAIN, with a difference in AUC of 0.005. At the event level, however, the AUC results were the poorest for the Move event. Classification accuracy (CA) is a metric that characterizes the percentage of correct predictions, in which the class with the highest probability is the same as the targeted one [34], being interpreted as a metric of true classification [38]; it is averaged among all the classes specific to a problem [34] and stands for the ratio of true positive and negative values to the total of the observed values [34,35]. Based on the results shown in Table 3, the situation with regard to CA was similar to that of AUC, indicating a better performance of AS MTRAIN, which was comparable to that of S MTRAIN. However, at the event (class) level, the poorest results were those associated with the Pause events. Precision (PREC), or the positive predictive value, accounts for the fraction of true positives within the total number of positively classified instances [35]. For multi-class problems, precision is calculated by averaging among the classes [34]. At the training signal dataset level, the situation was similar to the AUC and CA metrics, with AS MTRAIN showing better performance and the Move event showing the poorest one.
For the A MTRAIN signal, however, PREC was very low, "switching" the metrics between the Move and Cut events. Therefore, this signal did not provide enough information for this attempt.
Recall (REC), as a classification performance metric, stands for the fraction of true positives out of the total of true positives and false negatives [34,35]; it is also sometimes called, and used as, the true positive rate, hit rate or sensitivity of a classifier [35,38], and is averaged in the case of multi-class problems [34]. As such, it stands for the ratio of hypothesized positives to the total positives in a sample and, because of that, it may be the best indicator for efficiency-monitoring applications. As shown in Table 3, in terms of REC, AS MTRAIN remained the best signal in the training phase, closely followed by S MTRAIN. The overall REC for AS MTRAIN, however, reached only ca. 87%. Nevertheless, it provided quite accurate results for Pause and Cut (ca. 95%), but performed more poorly for Move (ca. 30%). In the case of A MTRAIN, the results were the poorest, showing the signal's inability to provide the information needed to differentiate between the events; all the data were recalled as belonging to the Pause event. The results of S MTRAIN, on the other hand, showed behavior similar to that of AS MTRAIN for the REC metric, even though they were less accurate. The F1 metric stands for the harmonic mean of PREC and REC [34]; it is not discussed here but is provided as a reference.
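For a multi-class problem such as this one, the CA, PREC, REC and F1 metrics defined above can be computed as sketched below; the labels are hypothetical, not the study's data, and AUC is omitted because it additionally requires predicted class probabilities rather than hard labels:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Hypothetical true vs. predicted event labels (C = Cut, M = Move, P = Pause)
y_true = ["C", "C", "M", "M", "P", "P", "P", "P"]
y_pred = ["C", "C", "P", "M", "P", "P", "P", "M"]

ca = accuracy_score(y_true, y_pred)                      # fraction of correct predictions
prec = precision_score(y_true, y_pred, average="macro")  # averaged over the classes
rec = recall_score(y_true, y_pred, average="macro")      # averaged over the classes
f1 = f1_score(y_true, y_pred, average="macro")           # harmonic mean of PREC and REC
print(ca, prec, rec, f1)
```

With `average="macro"`, each class contributes equally to PREC, REC and F1, matching the per-class averaging described in [34].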

Statistics and Classification Performance on the Test Signal Dataset
Since AS MTRAIN provided the best results for the performance metrics observed in this study, the testing phase of the ANN was implemented on the corresponding test signal dataset (AS MTEST). The overall performance on the test signal dataset was characterized by an AUC of 0.939 and a CA of 0.849 (Table 4). Of the total number of correctly classified observations, the proportions of Cut, Move and Pause were ca. 23, 5 and 72%, respectively. However, 8773 observations from the AS MTEST signal were incorrectly classified (Table 5), accounting for ca. 15%. In this subsample, the biggest inaccuracy seemed to be the misclassification of Move events as Pause events (ca. 54%). This was followed by the misclassification of Pause events as Move events (ca. 26%) and of Pause events as Cut events (ca. 12%). Only 416 true Cut observations were misclassified as Pause (ca. 4%) or Move (ca. 1%) events. Considering the data from Tables 4 and 5, the REC of Cut on the test set was evaluated at ca. 96%, that of Move at ca. 39% and that of Pause at 91.5%. These results are similar to the REC values calculated for AS MTRAIN in Table 3. Ultimately, it seems that the system provided good classification outcomes. This can be observed for the Cut and Pause events, for which the recall metric provided very good results and whose share in the train and test datasets accounted for the majority of observations (Table 2).
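The per-class REC values quoted above follow directly from a confusion matrix of true versus predicted events. The sketch below uses illustrative counts chosen to roughly reproduce the reported recalls, not the actual values of Tables 4 and 5:

```python
import numpy as np

# Hypothetical confusion matrix (rows = true class, columns = predicted class)
# in the order Cut, Move, Pause; counts are illustrative only
cm = np.array([[9600,  100,  300],    # true Cut
               [ 200, 3900, 5900],    # true Move
               [1000, 2300, 35000]])  # true Pause

# Per-class recall: correctly predicted counts on the diagonal,
# divided by the total number of true observations in each row
per_class_recall = cm.diagonal() / cm.sum(axis=1)
print(per_class_recall)  # recall for Cut, Move, Pause
```

The same row-wise division explains why a frequent class such as Pause can score a high recall even while contributing many of the absolute misclassifications.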

Discussion
This study tested the possibility of implementing an external sensor system to automatically collect relevant operational data, coupled with ANN techniques to enhance the automation of data classification and analytics. The applicability of the system is obvious in the context provided in the introduction. One thing to be addressed relates to the factors that could favor or prevent its internal implementation in such facilities. From this point of view, at least for applied science, the system may provide a useful tool to externally monitor sawmilling operations in small-scale companies and to produce and analyze big datasets. This is proven by the approach and results of this study, which demonstrate the utility of the system for such attempts. If there is a willingness to implement it internally, some points need to be addressed. The first is the cost of the system's components. The investment in the dataloggers used reaches EUR 450, which is quite affordable. Under the assumption of a well-designed ANN model, the camera could be excluded from the investment, while it is quite typical for many people to already hold a personal computer running the Microsoft Office pack. Then, if full automation of data analytics is in question, one should think about the connectivity of the dataloggers to a computer platform, as well as the additional software or routines needed there. Since the software used to run some of the data analysis was Microsoft Excel, and since the ANN model and its probability figures could be moved to Microsoft Excel, the only problem to be solved is that of building routines to bring and merge the signal data from the dataloggers' software into Microsoft Excel files and to run the model automatically in real time for classification purposes.
This would also make it feasible to use the dataloggers at finer sampling rates. Since Microsoft Excel enables the use of routines external to its environment [39], this approach would be achievable at rather low cost.
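One possible shape for such a merging routine, sketched here in Python rather than as an Excel macro, pairs the two 1 Hz streams on their shared timestamps; the column names and values are hypothetical and do not reflect the actual Extech export format:

```python
import pandas as pd

# Hypothetical 1 Hz exports from the two dataloggers
acc = pd.DataFrame({
    "time": pd.date_range("2018-06-01 08:00", periods=4, freq="s"),
    "a_g": [1.01, 1.15, 1.16, 1.02],          # acceleration magnitude, g
})
snd = pd.DataFrame({
    "time": pd.date_range("2018-06-01 08:00", periods=4, freq="s"),
    "spl_db": [48.3, 84.8, 85.1, 50.2],        # sound pressure level, dB(A)
})

# Merge the two signals on their shared 1 Hz timestamps
merged = pd.merge(acc, snd, on="time", how="inner")
merged["spl_scaled"] = merged["spl_db"] / 10.0  # dB(A)/10, as in the study
print(merged)
```

A table assembled this way could then be written to an Excel file (e.g., with `DataFrame.to_excel`) and fed to the saved classification model.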
ANNs, as well as other classes of AI techniques, have been widely used for multivariate classification problems in various fields of research and practice. Recent results show their good performance for both classification [24] and regression [25] problems in forestry, as well as in other related fields [34], with classification accuracies of over 90% generally termed very good [34]. Judging the results of this study by this metric, the tested system showed a very good classification performance, with the classification accuracy reaching almost 90% irrespective of the event observed; it was also close to 100% for Cut events in the training phase, which could be seen as excellent. Nevertheless, classification performance is still dependent on the complexity of the phenomenon surveyed and on the quality of the information carriers [34]. For the typical case of ANN use, some have found very good classification performances, while others have found average or poorer ones [34,38]. Thus, some features may be more or less recognized by the models [38], depending also on their complexity, the chosen AI techniques and their ability to learn. Nevertheless, classification performance ultimately needs to be related to the intended uses of the models. As such, even if the share of Cut events in this study was low, it is still important because the machine used electrical power during this event, which is highly suggestive of the technical efficiency of the machine. Since the REC rate for Cut was close to 96%, the results could be interpreted as very good; still, this means that out of each hour of effective cutting, close to 2.5 min will be misclassified, which could have an impact on results scaled to longer periods.
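The figure of close to 2.5 min follows directly from the recall: with ca. 96% of Cut observations correctly recalled, about 4% of each hour of effective cutting is misclassified.

```python
# Share of cutting time misclassified per hour, given a Cut recall of ~96%
missed_minutes = (1 - 0.96) * 60
print(round(missed_minutes, 1))  # -> 2.4
```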
The described outcomes were related, in this study, to the information carrier, because it seems that in the training phase the sound pressure level signal yielded classification performances comparable with those obtained using both signals. It was probably equally difficult for the ANN to accurately learn movement events from the sound information carrier as from the acceleration information carrier. This is because the movement events were completed at similar speeds, while the location of the accelerometer led it to collect a general magnitude that was not sufficiently separable between the events. In the case of sound, it is possible for Pause and Move events to generate similar patterns in the signal, with only Cut being more distinguishable. As most of the contribution to the classification performance came from the sound carrier, it is worth mentioning that close to 80% of the misclassifications confused movements with pauses and vice versa. The implications are therefore evident, advocating for choosing better locations for the dataloggers. One may speculate that, in a configuration using both collectors in real applications, the best choice would be to place the sound pressure level datalogger as close as possible to the place where the interaction between the cutting blade and the log occurs, thus capturing the highest magnitudes and a better separability of the Cut events. This behavior was observed in the results of this study, even though the collector was placed on the worker's helmet to also collect data on noise exposure. In contrast, the acceleration datalogger should be placed on the lever used to manually operate the machine, thus capturing a higher magnitude in the signal as a result of the lever movement and enhancing the separability of the Move and Pause events.
While these locations should be carefully chosen and standardized, this is an approach that may still require trial and error. It is also related to the type of machine used and to the type and frequency of the operations surveyed. Most probably, machines operating vertically, by feeding the logs into the blades, will provide the opportunity to use just one data collector. In such cases, the operational complexity could also be lower. Given the types of signals used, there is the possibility that other operations running in parallel (e.g., the use of a chainsaw) could affect the results. However, this could be balanced to some extent by incorporating the signals from both dataloggers in the analysis, under the assumption that they would be placed in the best positions. The extent to which impulse noise carries useful information for delimiting specific events should also be explored and, as mentioned before, a wise placement of the dataloggers could help lower the misclassification of productive and non-productive time.
Last, but not least, the signal filtering procedure used in this study tried only to remove impulse noise. Whether the use of a repetitive filtering procedure to reach the root of the signal [29], or the use of a wider window to improve the signal-to-noise ratio, would enhance the separability of events and the ability of the ANN to learn them from the altered signals should be checked in the future. This also applies to the length of the input signals used for training purposes, which accounted for one quarter of the sample used in this study. The use of more data in the learning process would probably have provided much better results, a fact that also needs to be checked, since similar studies have typically used ratios as high as 90-10% for learning and testing, respectively [34].

Conclusions
Based on the results of this study, the main conclusion is that the tested system holds promising potential for implementation in real-world scenarios to accurately collect, process and analyze big datasets at low cost and in real time, under the current limitation that some more work is needed to connect the loggers to the computer software. Such attempts could also be facilitated by the rapid development of cheap, miniaturized data collectors. Under the assumption of an internal implementation, one should find the best locations to place the collectors and maintain those locations for the long-term monitoring of production, a fact that may require the development of completely new ANN models. The implementation of the system, on the other hand, will not only provide science with tools and evidence on the real performance of small-sized sawmills, but will also contribute to better internal planning, thus enhancing the competitiveness of small companies in the field.