Kilogram Sample Analysis by Nuclear Analytical Techniques: Complementary Opportunities for the Mineral and Geosciences

Abstract: Sample-size reduction, including homogenization, is often required to obtain a test portion for element compositional analysis. Analyses of replicate test portions may provide insight into the sampling constant, and often much larger quantities are needed to limit the contribution of the sampling error. In addition, it cannot be demonstrated that the finally obtained test portion is truly representative of the originally collected material. Nuclear analytical techniques such as neutron and photon activation analysis and (neutron-induced) prompt gamma activation analysis can now be used to study and overcome these analytical problems. These techniques are capable of obtaining multi-element measurements from irregularly shaped objects with masses ranging from multiple grams to multiple kilograms. Prompt gamma analysis can be combined with neutron tomography, resulting in position-sensitive information. The analysis of large samples provides unprecedented complementary opportunities for the mineral and geosciences. It enables the experimental assessment of the representativeness of test portions of the originally collected material, as well as the analysis of samples that are not allowed to be sub-sampled or dissolved, the analysis of materials that are difficult to homogenize at large, and studies on the location of inhomogeneities. Examples of such applications of large-sample analyses are described herein.


Introduction
Laser-induced breakdown spectrometry (LIBS), micro-beam particle-induced X-ray emission (micro-PIXE), and electron microprobe analysis (EMP) are examples of microanalytical techniques with opportunities for the (position-sensitive) surface analysis of solids, providing information on the element content of amounts at the microgram level [1][2][3]. Small amounts of solids, varying from tens of micrograms to a few milligrams, can be analyzed directly using laser ablation inductively coupled plasma mass spectrometry [4], (solid-state) atomic absorption spectrometry [5], and X-ray fluorescence (XRF) spectrometry [6], including total reflection XRF [7]. Larger amounts of solids can be analyzed directly using XRF (although the information obtained is limited to material thicknesses varying from a few micrometers to a few millimeters, depending on the element measured) and by nuclear analytical techniques such as neutron activation analysis (NAA), photon activation analysis (PAA), and (neutron-induced) prompt gamma activation analysis (PGAA) [8], which can all handle substantially larger amounts.
The size of the test portion, i.e., the amount of material actually introduced into an instrument, is limited in all techniques for technical and fundamental reasons. The analyst may therefore be faced with problems if the amount of material collected is larger. This is not unusual: soils, rocks, and plant material, for example, are preferably (and often more easily) collected in quantities on the order of hundreds of grams to kilograms rather than in quantities at the size of the test portion, on the assumption that the composition of a larger amount better reflects the composition of the sampling target (Figure 1). Eventually, this may be experimentally evaluated by estimating the sampling error. Thus, for both practical and sampling reasons, more material is often collected and presented for analysis than is needed. A similar situation then arises in the laboratory: the test portion (the amount actually to be analyzed) should be representative of the available material. A test portion is considered 'representative' when it has been obtained by a sampling plan through which 'it can be expected to adequately reflect the properties of interest of the parent population' [9]. The representativeness of the test portion is preserved a priori when (i) the collection or preparation is performed according to specific norms, or when (ii) a truly homogeneous material is sampled. Homogeneity is defined as 'the degree to which a property or constituent is uniformly distributed throughout a quantity of material' [9].
The preparation of a test portion from solid materials may imply material size reduction techniques and other processing methods such as sieving, crushing, milling, or blending and, for some techniques, dissolution. After sample-size reduction, the analysis of several test portions taken from the last batch in such series provides insight into the degree of homogeneity of the test portion for this final batch by comparing the between-test portion variance with the variance of the measurement itself [10], providing an estimate of the uncertainty due to degree of inhomogeneity.
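The comparison of the between-test-portion variance with the measurement variance described above can be sketched numerically; a minimal illustration with hypothetical replicate results (all names and values are illustrative):

```python
import statistics

def inhomogeneity_sd(test_portion_results, measurement_sd):
    """Estimate the standard deviation due to (in)homogeneity by
    subtracting the measurement variance from the observed
    between-test-portion variance (values in, e.g., mg/kg)."""
    s_between_sq = statistics.variance(test_portion_results)  # sample variance
    s_inhom_sq = s_between_sq - measurement_sd ** 2
    return s_inhom_sq ** 0.5 if s_inhom_sq > 0 else 0.0

# Hypothetical replicate results (mg/kg) and a known measurement SD:
replicates = [10.2, 9.8, 10.5, 9.6, 10.4, 10.1, 9.9, 10.3, 9.7, 10.5]
s_h = inhomogeneity_sd(replicates, measurement_sd=0.2)
```

If the between-test-portion variance does not exceed the measurement variance, the material behaves as homogeneous at this test-portion size.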
Estimating the degree of inhomogeneity may be a common practice in the preparation of reference materials, but is not common in routine analysis, as such a procedure requires the analysis and statistical evaluation of at least 10 test portions of each sample.
This would raise the cost of analysis considerably. The use of the standard deviation σH based on the Horwitz-Thompson equations [11] may serve as a pragmatic alternative as an estimate of the contribution of inhomogeneity towards the uncertainty of measurement [12,13]:

σH = 0.02 · c^0.8495, with c = xA × 10⁻⁶

in which the mass fraction xA of a measurand A is expressed in mg/kg (10⁻⁶, or ppm) and converted to the dimensionless fraction c. However, this does not give insight into the representativeness of the test portion for the originally collected material. The sample-size reduction procedure may also lead to contamination or even element losses, which cannot be detected unambiguously.
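This pragmatic estimate can be evaluated with a short function implementing the Thompson-modified Horwitz function (a sketch; the piecewise form and the conversion to a dimensionless mass fraction follow the common formulation, and the variable names are illustrative):

```python
def horwitz_sd(x_mg_per_kg):
    """Thompson-modified Horwitz standard deviation, returned in the
    same unit as x (mg/kg). The mass fraction is first converted to a
    dimensionless value c."""
    c = x_mg_per_kg * 1e-6
    if c < 1.2e-7:
        sigma_c = 0.22 * c
    elif c <= 0.138:
        sigma_c = 0.02 * c ** 0.8495
    else:
        sigma_c = 0.01 * c ** 0.5
    return sigma_c * 1e6  # convert back to mg/kg

rsd_at_1ppm = horwitz_sd(1.0) / 1.0   # relative SD at 1 mg/kg
```

At 1 mg/kg this reproduces the well-known Horwitz value of roughly 16% relative standard deviation.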
Measurement of (trace) elements and their amounts is currently possible in large objects with masses up to many kilograms using NAA [14], PGAA [15], and PAA [16]. Several facilities (mainly for large-sample NAA and PGAA) have been developed in research reactors and for use with isotopic neutron sources and neutron generators. The attractiveness of large-sample analysis is that the 'as-received' sample is now also the 'test portion'. Some considerations on opportunities of analyzing large samples for the geosciences are presented in this paper.

Methods and Concepts
Activation analysis is a method for the measurement of chemical elements based upon the conversion of their stable nuclei to other, mostly radioactive nuclei via nuclear reactions, and the measurement of the activity of the produced radionuclides. In NAA the nuclear reactions occur via bombardment ('irradiation') of the material to be analyzed with neutrons provided by a nuclear research reactor, an isotopic neutron source, or using a particle accelerator or neutron generator [17]. In PAA, the high-energy photons are mostly obtained from electron accelerators as the 'bremsstrahlung' radiation produced by deceleration of accelerated electrons on a heavy target [18].
In addition to the radionuclide activity produced by neutron or photon bombardment, radiation can also be measured that is released almost instantaneously (within ca. 10⁻¹⁴ s) during neutron bombardment. This 'prompt' (gamma) radiation results from the de-excitation of the compound nucleus formed upon the capture of the neutron (not from the activity of the reaction product) and forms the basis for PGAA.
All stable elements have properties suitable for the production of radioactive isotopes, albeit at different reaction rates. Each radionuclide is uniquely characterized by its decay constant-the probability for the nuclear decay in unit time-and the type and energy of the emitted radiation. Amongst the several types of radiation that can be emitted, gamma radiation offers the best characteristics for the selective and simultaneous detection of radionuclides and thus of elements.
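The growth of activity during irradiation follows the standard activation equation, A = N φ σ (1 − e^(−λt)). A minimal numerical sketch (the input values below are hypothetical and are not real nuclear data):

```python
import math

AVOGADRO = 6.022e23

def induced_activity(mass_g, molar_mass, abundance, sigma_barn,
                     flux, half_life_s, t_irr_s):
    """Activity (Bq) at the end of irradiation for a single (n,gamma)
    product: A = N * phi * sigma * (1 - exp(-lambda * t_irr))."""
    n_atoms = mass_g / molar_mass * AVOGADRO * abundance
    lam = math.log(2) / half_life_s
    sigma_cm2 = sigma_barn * 1e-24  # 1 barn = 1e-24 cm^2
    return n_atoms * flux * sigma_cm2 * (1.0 - math.exp(-lam * t_irr_s))

# Hypothetical element: 1 mg, molar mass 100 g/mol, 5 barn, 1 h half-life
a_short = induced_activity(1e-3, 100.0, 1.0, 5.0, 1e9, 3600.0, 3600.0)
a_sat = induced_activity(1e-3, 100.0, 1.0, 5.0, 1e9, 3600.0, 3600.0 * 100)
```

After one half-life of irradiation the activity reaches half its saturation value, which is why irradiating much longer than a few half-lives gains little.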
NAA, PAA, and PGAA are complementary techniques, as their capabilities are governed by the energy of the bombarding neutrons or photons, the likelihood that this results in a nuclear reaction with the stable isotopes, the nuclear decay parameters of the radioisotopes, or the energy of the prompt gamma radiation. A detailed description of the physical, instrumental, and analytical aspects of each of these techniques can be found elsewhere [17,19-21].
Large-sample NAA, PGAA, and PAA involve calculations factoring the physics of the interaction of neutrons (for production of element-specific radionuclides) and gamma rays (emitted by the radionuclides produced) with matter [17]. The regular mass of a test portion in these techniques varies from a few milligrams to one gram. A 'large sample' is therefore defined as a test portion in which neutron and gamma-ray self-attenuation cannot be neglected in view of the required degree of accuracy.
Trace element measurement by NAA and PGAA in large samples has been performed for decades in, e.g., well-logging [22][23][24], on-line conveyor belt industrial analyzers [25], and with in-vivo studies of e.g., Ca in bones and Cd in kidney [26]. Neutron generators or isotopic neutron sources have been used. These industrial applications are mainly focused on raw material analysis and product control for one or a few major constituents. The procedures are customized by their calibration regarding the associated problems and cannot be translated into a routinely applicable method for the analysis of a large variety of sample types. The same applies to the use of neutron activation analysis principles in nuclear well-logging devices. These devices contain a neutron source and a radiation detector and are lowered into the borehole for neutron activation and/or prompt gamma analysis of the surrounding materials; as such, the entire surrounding rock may be considered as a large sample.
In the 1990s, a breakthrough came in the development of methods for estimating and correcting the effects of neutron and gamma ray self-attenuation [14,[27][28][29], for the estimation of the gamma-ray detector's photo peak efficiency of voluminous sources [30], and to account for extreme inhomogeneities [31,32]. All these corrections, including for the large sample's natural radioactivity, are inserted into the measurement equations for large-sample NAA and PGAA. These advances facilitated the development of reactor-based large-sample NAA and PGAA and, due to the much higher thermal neutron fluxes, yielded better sensitivities for many elements than can be obtained with neutron generators and isotopic neutron sources. By the end of the 1990s, a facility for large-sample PAA became available [16].
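A minimal illustration of why such corrections matter: for an idealized uniform slab viewed face-on, the average gamma-ray self-attenuation factor has a closed form. Real large-sample codes model the actual sample shape and the energy-dependent attenuation coefficient; this is only a sketch:

```python
import math

def slab_self_attenuation(mu_cm, thickness_cm):
    """Average gamma-ray self-attenuation factor for a uniform slab
    viewed face-on: f = (1 - exp(-mu*t)) / (mu*t). f -> 1 for thin
    samples, so the correction vanishes for small test portions."""
    mt = mu_cm * thickness_cm
    if mt < 1e-9:
        return 1.0
    return (1.0 - math.exp(-mt)) / mt

thin = slab_self_attenuation(0.1, 0.01)   # mm-scale: negligible correction
thick = slab_self_attenuation(0.1, 20.0)  # kilogram-scale: sizeable loss
```

The contrast between the two values shows why self-attenuation may be neglected in normal NAA but not in large-sample work.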
The amount of radioactivity to be induced in large samples should preferably be of the same order as in normal NAA/PAA in view of the acceptable count rates. As such, the required neutron flux scales inversely with the increase in test portion mass. For the NAA of a 1 kg test portion, a thermal neutron flux of 10⁸-10⁹ cm⁻² s⁻¹ will suffice. For the analysis of much larger quantities (up to tens of kilograms or more), the neutron fluxes provided by neutron generators or isotopic neutron sources may suffice.
As explained in the above, the neutron-induced activity in a large sample and/or its prompt count rate should not exceed the level that is permitted according to the license issued by the national regulatory body in NAA/PAA/PGAA facilities. All handling of the radioactive material is done by pre-defined procedures approved by the radiation protection authority on radiological safety [33]. In addition, it can be estimated when the remaining activity is below the level that allows for a safe discharge, either as low-level radioactive waste or even as regular laboratory waste.
Irradiation facilities at nuclear research reactors are not always immediately suitable for physically handling large-sample NAA. Masses up to tens of grams may be irradiated using existing pneumatic transfer systems. For the analysis of larger objects (up to the (multi) kilogram range) facilities have been developed in the thermal columns of the reactor [14,34] (Figure 2), in its pool [35], or in external neutron beams [36,37], and dedicated neutron beams have been used for large-sample PGAA [38,39]. An overview of some operational large-sample NAA facilities for research reactors is given in Table 1.
The large samples may, in principle, be of any shape, but most irradiation facilities have been designed for handling materials that fit in common plastic bottles or jars. Nonetheless, irregularly shaped objects can also be analyzed by large-sample NAA, as was demonstrated in an International Atomic Energy Agency (IAEA) laboratory intercomparison exercise in which specially manufactured copies of an irregularly shaped pseudo-Peruvian archaeological pottery object were used [40] (Figure 3).
The neutron- or photon-induced activity in large samples is measured with conventional gamma-ray spectrometers with Ge detectors. Facilities have been designed for rotating the activated samples during counting to reduce geometrical effects. In the case of granular materials such as soil, the activated sample can also be transferred into a Marinelli beaker geometry for measurement.
Large neutron-activated samples and large samples in PGAA facilities can be measured using a collimated detector setup [41,42], which allows for position sensitive scanning and provides information on element profiles along slices. When combining large-sample PGA with neutron radiography/tomography, quantitative 2D and 3D element profiles can be generated [43,44].
Quantification in 'normal' NAA/PGAA is mostly done using the relative method (simultaneous irradiation of test portions of the object of interest and of a calibrator, such as a synthetic (multi-element) standard or a reference material) or using the k0 method of standardization [17]. It may be obvious that the relative method is not suitable for the routine analysis of test portions from tens of grams to multiple kilograms. It has been shown that the k0 method can be applied to analyze large test portions [45], even irregularly shaped ones.

Figure 3. A pseudo-Peruvian archaeological pottery object used for intact analysis by large-sample neutron activation analysis (see [40]).
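The relative method mentioned above reduces, in its simplest form, to a ratio of specific count rates between the sample and the co-irradiated calibrator (a sketch with hypothetical numbers; identical irradiation, decay, and counting conditions are assumed):

```python
def relative_method(count_rate_sample, mass_sample_g,
                    count_rate_std, mass_std_g, x_std):
    """Mass fraction by the relative method: the sample and a
    calibrator with known mass fraction x_std are co-irradiated,
    and their specific count rates (counts/s per gram) are compared."""
    specific_sample = count_rate_sample / mass_sample_g
    specific_std = count_rate_std / mass_std_g
    return x_std * specific_sample / specific_std

# Hypothetical count rates (s^-1) against a 50 mg/kg calibrator:
x = relative_method(1200.0, 0.5, 800.0, 0.25, 50.0)
```

For large samples this simple ratio breaks down, since the neutron and gamma-ray self-attenuation factors of sample and calibrator are no longer identical.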
Nuclear analytical techniques for large-sample analysis also have their limitations and constraints, like any other analytical technique. There are limitations resulting from the nuclear physics nature of the techniques, with an impact on the analytical sensitivity and speed of analysis. The sensitivities for the chemical elements differ significantly between the three techniques due to the differences in the relevant nuclear physics constants of the isotopes of the elements for the nuclear reactions. In NAA and PAA, the measurement of the activation product of an element of interest may require a waiting period after activation varying from minutes to several days, until the activities of activation products of other elements in the sample no longer interfere. Such a decay period is not relevant in PGAA, and measurements are often completed in minutes to hours depending on the element of interest. However, the types and numbers of elements that can be measured by PGAA are different from those of NAA and PAA, and sometimes NAA or PAA may result in better sensitivities than PGAA [18][19][20]. Additional limitations are related to the size of the objects analyzed as well as their composition. Elements such as Ho and Tm, which are measured by activation products emitting low-energy gamma rays, are difficult to measure with sufficient accuracy in large samples due to the gamma-ray self-attenuation of the material. The strongest effects of extreme inhomogeneities occur if the elements of interest are located mainly on the outside of the sample or along a virtual 'central axis' in the sample [31,32]. In addition, the techniques require access to a neutron or high-energy photon source. For some applications, a neutron generator or an isotopic neutron source may suffice. Large-sample analysis at a research reactor or with a suitable accelerator might be more versatile.
For quantification in the large-sample analysis of irregularly shaped objects, the internal mono-standard method has also been successfully applied [46]. In this method, the previously known quantity of one of the elements in the material (from which neutron activation results in a measurable amount of radioactivity) is used as the internal standard. Not all research reactors are equipped with facilities for large-sample NAA or PGAA, although it is not complicated to modify existing irradiation and counting facilities to manage larger samples than usual.
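The internal mono-standard principle can be sketched as follows; the sensitivity factors stand in for the combined detection efficiency and nuclear data corrections, and all names and values are hypothetical:

```python
def mono_standard(x_ref, net_peak_ref, sens_ref, net_peak_i, sens_i):
    """Internal mono-standard quantification: the mass fraction of
    element i follows from the known mass fraction x_ref of a
    reference element in the SAME sample, using relative detection
    sensitivities (efficiency- and nuclear-data-corrected)."""
    return x_ref * (net_peak_i / sens_i) / (net_peak_ref / sens_ref)

# Hypothetical: an element known at 5% (50,000 mg/kg) serves as the
# internal standard for element i.
x_i = mono_standard(50000.0, 1.0e5, 2.0, 4.0e3, 8.0)
```

Because standard and analyte share the same sample, geometry and self-attenuation effects largely cancel, which is what makes the method attractive for irregular objects.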
Quality control and demonstrating the degree of trueness in large-sample analysis is a challenge [47], as will be discussed below. Conventional approaches such as the use of certified reference materials cannot be applied at the kilogram scale. Laboratories may develop their own large-sample trueness control materials using, e.g., materials composed of formulated amounts of chemical elements. In addition, materials known for their high degree of homogeneity, such as coal fly ash or powdered nutritional products, exist and have thus been used.
The verification of the trueness of the results of large-sample analysis has also been done by reducing a 1 kg sample, after its direct analysis, to portions suitable for normal NAA [48]. Good agreement between the results of both approaches was found for most elements, but several elements showed deficiencies that were attributed to the processing of the material. Obviously, such an approach requires ample radiological safety precautions.
The IAEA issued a document on the advances in large-sample activation analysis, with an emphasis on objects relevant for archaeological studies [40]. The document also included a thorough review of the technical aspects of the technique, considerations for implementation, the results of the above-mentioned laboratory intercomparison exercise of the irregularly shaped pseudo-Peruvian archaeological pottery object (Figure 3), examples of applications in other applied sciences, and an outlook for further developments. The document also included an inventory of the available facilities for large-sample analysis by nuclear analytical techniques.
Large-sample activation analysis can be operated routinely for masses of up to, e.g., 10 g, but this is not yet feasible for large series of samples of kilogram mass. Such applications may require a tailored approach, which is also a scientific challenge for all stakeholders involved.

Opportunities for the Mineral and Geosciences
Examples of the complementary analytical opportunities offered by the analysis of (very) large samples are given below.

The Assessment of the Representativeness of Test Portions for the Material Collected
Studies can be conducted in which a neutron-activated large sample, following all measurements, is subsequently processed through sample-size reduction steps. This can be done in a controlled environment such as a radiological laboratory. The activities of the remaining radionuclides with long half-lives can be measured again after each processing step (such as crushing, grinding, sieving, or milling), and/or the material after each step can be irradiated again for analysis. Eventually, test portions can be taken from the finally obtained powdered material for normal analysis, thus providing insight into their compositional representativeness of the starting material.
Such an experiment was conducted as part of a study into the local variation of the composition of a uranium waste rock pile [48]. A large sample of 2 kg of material was, after neutron activation and analysis, reduced in size by crushing, grinding to 14 mesh, homogenization, quartering, oven drying, and fine grinding to 200 mesh. A comparison of the analysis results of the large sample and of a 200 mg test portion of the finally obtained powder showed excellent agreement for most of the 25 elements measured, with the exception of Sm and Ce. It was thus confirmed that the composition of the 200 mg test portion was sufficiently representative of the composition of the sampled material.

Direct Analysis of Materials for Which Homogenization Is Inconvenient, Difficult or Close to Impossible, and/or too Expensive Due to Material Properties
This may save the time and money related to the careful preparation of the final powdered material and to the experimental assessment, by replicate analyses, of its degree of homogeneity.
In a study on the NAA measurement of gold in Canadian reference ores, sampling constants of 60-400 g were found for a 1% sampling error, and of 10-60 g for a 3% sampling error [49]. Large gold ore samples of ca. 150 g and grain size < 2 mm were also analyzed using PAA [50]. Since the standard deviation of replicate analyses by PAA of Cu, Zn, Cd, Pb, and Zr in shredded electronic (television) waste varied from 20-100%, it was shown that masses of at least 50-100 g were needed to limit the standard deviation of replicates to 10% or less [16]. Analyses of replicate test portions with masses of 50 g, ground to 1-3 mm particles, clearly showed that even 300 g test portions were needed to reach the required between-test portion variation [51]. In the framework of metallic mineral survey projects, 10-15 g samples had to be analyzed to circumvent tedious homogenization procedures [52]. In view of a study on metal recovery and accounting, large-sample NAA was used for the quantification of Au and Ag in 50-100 g dross samples refined at the India Government Mint [53]. As such, the difficulties in the dissolution of the refractory material were circumvented.
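Sampling constants such as those reported for the gold ores can be handled with Ingamells' relation Ks = m·R², in which m is the test-portion mass and R the relative sampling error in percent (a sketch; the values are hypothetical):

```python
def sampling_constant(mass_g, rel_sd_percent):
    """Ingamells sampling constant Ks (g): numerically equal to the
    test-portion mass that gives a 1% relative sampling error."""
    return mass_g * rel_sd_percent ** 2

def mass_for_error(ks, target_rel_sd_percent):
    """Test-portion mass (g) needed for a target relative sampling error."""
    return ks / target_rel_sd_percent ** 2

# Hypothetical: 10 g replicates show a 6% sampling SD.
ks = sampling_constant(10.0, 6.0)
m_1pct = mass_for_error(ks, 1.0)   # mass needed for a 1% sampling error
```

The quadratic dependence explains why tightening the sampling error from 3% to 1% requires roughly nine times more material.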
It was also demonstrated that, using a 14 MeV neutron generator, 200 L drums with heterogeneous fillings of concrete and polyethylene could be analyzed, with an agreement between measured and expected element amounts within 34%, which suited the purpose [54].

Analysis of Materials That Have to Maintain Their Integrity and/or That Are too Precious and Not Allowed to Be Damaged by Sub-Sampling, such as Museum Pieces
The current status of large-sample analysis allows for the direct analysis of objects with masses of up to 100 kg or more. A ²⁵²Cf neutron source facility has been described for the PGAA of a 700-year-old inscribed stone with a mass of 215 kg as part of a study of its provenance [55].
Large-sample prompt gamma analysis was applied to eight relatively large and irregularly shaped meteorite specimens (four stony and four iron) [56]. The stony meteorites had masses of about 10-50 g; the masses of the iron meteorites varied from about 50 g to about 1.3 kg. The internal mono-standard method was used for quantifying the mass fractions of 15 chemical elements. This approach turned out to be very practical for the direct evaluation of the compositional differences between the various meteorites. Since the residual induced radioactivity is very low, the materials can be safely released after a limited decay time.
The computational algorithms of large-sample NAA are capable of providing acceptable results even when an entire ceramic vase of about 375 g is analyzed [57,58]. Such algorithms were further tested by analyzing simulated Bronze-Age metallurgical slags with masses of about 125 g and arbitrary shapes using a medical accelerator for PAA [59]. Validation was done by grinding one of the slags to powder after large-sample analysis and performing subsequent normal (small test portion) NAA. The good agreement between the results indicates that the method is suitable for the analysis of real Bronze-Age slags. Element profiles were obtained using 2 cm slices of a neutron-activated 100-cm-long, 11-cm-diameter sample of a ditch-bottom sediment [60]. Scanning large-sample analysis was also used for screening 0.7 kg samples of green coffee beans for local inhomogeneities [61].
PGAA has been combined with neutron tomography for the 2D and 3D element mass mapping [42,62] of large, irregularly shaped objects, resulting in a spatial resolution of 5 mm.

Quality Control
Quality control, or rather 'trueness control', in regular NAA, PGAA, or PAA is done through the analysis of test portions of commutable materials with known property values, such as certified reference materials. There are limitations to this approach if large samples (masses up to multiple kilograms) have to be analyzed [47]. Firstly, certified reference materials are typically available in sizes up to 150 g, and not in much larger amounts; consuming them at the kilogram scale on a routine basis would be expensive and would quickly exhaust the available material. Secondly, the degree of (in)homogeneity of a large sample is difficult to simulate using a reference material.
The validity of the calculations and corrections for neutron and gamma-ray self-absorption and for the voluminous geometry effects can be demonstrated by analyzing self-made control samples. As an example, 2 kg test portions (20 cm height, 12 cm diameter) have been prepared at the Reactor Institute Delft, Delft University of Technology, by using remainders (ca. 200 g each) of materials from intercomparison exercises [63]. Knowing the masses and the assigned property values of each fraction, the formulated quantities of the elements of interest in these test portions could be compared with the results of analyzing them as large samples. The above-defined Horwitz-Thompson standard deviation was used as an estimate of the uncertainty of the formulated quantities. An example of such an experimental assessment is given in Table 2. The zeta score is an indication of the agreement between the formulated and measured quantities in view of the uncertainties in these values; a zeta score should preferably be within −2 and +2. Such an approach of producing a large test portion with known property values was also applied in the above-mentioned IAEA intercomparison with copies of the pseudo-Peruvian archaeological pottery object ([31] and Figure 3), as the clay used for preparing these objects was also analyzed by regular NAA. The evaluation of this intercomparison indicated that the experienced large-sample NAA laboratories reached, for almost all elements measured, a degree of accuracy (zeta score, reflecting trueness and precision) equivalent to that usually reached in normal NAA [40].
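The zeta score used in such comparisons can be computed directly from the measured and formulated (assigned) values and their standard uncertainties (a sketch with hypothetical values):

```python
def zeta_score(x_measured, u_measured, x_assigned, u_assigned):
    """Zeta score: the difference between measured and assigned values,
    weighted by their combined standard uncertainty. |zeta| <= 2 is
    usually taken as acceptable agreement."""
    combined_u = (u_measured ** 2 + u_assigned ** 2) ** 0.5
    return (x_measured - x_assigned) / combined_u

# Hypothetical: 104 +/- 3 mg/kg measured against 100 +/- 4 mg/kg assigned
z = zeta_score(104.0, 3.0, 100.0, 4.0)
```

Unlike a plain relative deviation, the zeta score penalizes a discrepancy only insofar as it exceeds what the stated uncertainties can explain.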
The uncertainty of measurement in large-sample NAA, PAA, and PGAA is larger than in the 'normal' conduct of these techniques with small test portions, owing to the additional contributions from the corrections for neutron and gamma-ray self-attenuation and for the voluminous geometry, although the counting statistics associated with the element-dependent induced activity remain the major contribution. The latter implies that the uncertainty cannot be predicted in advance and varies from element to element (as can also be derived from the data in Table 2) and from sample type to sample type.
It should be noted that the increase in the ratio of test portion mass to container mass in large-sample analysis may result in a negligible contribution of the blank, i.e., the activity induced in impurities in the container material [47].
Nonetheless, trueness control remains a scientific challenge in large-sample analysis. It even constitutes a fundamental problem regarding how to express measurement results. In NAA, PAA, or PGAA, quantities are measured and mass fractions are calculated. However, the concept of 'mass fraction' implicitly assumes that the related quantity is uniform throughout the object. Large samples may have been selected for analysis precisely because this assumption may be questioned. Thus, it may be more realistic to report the results of large-sample analysis as quantities rather than mass fractions, and to leave the interpretation thereof for further discussion.

Conclusions
Nuclear analytical techniques are now available for the analysis of samples much larger than the test portions that are used by regular techniques and instruments for measuring the amounts of (trace) elements in solid material. These large-sample analysis techniques provide new, complementary analytical opportunities for studies in many applied sciences, including the mineral and geosciences, as has been shown by the various examples in this paper. Objects of irregular shapes can be analyzed, and it has been shown that, for some applications, 2D and 3D images can be made of the quantitative distribution of elements. Not all approaches allow for routine application to large series of samples, but interested stakeholders may stimulate further developments in this direction.
Funding: This research received no external funding.

Data Availability Statement:
The data presented in Table 1 are openly available in reference [40]. The data presented in Table 2 are available on request from the corresponding author. The data are not publicly available as they relate to unfinished research.

Conflicts of Interest:
The author declares no conflict of interest.