A Scoping Review of the Validity, Reliability and Conceptual Alignment of Food Literacy Measures for Adults

The measurement of food literacy has recently gained momentum globally. The aim of this paper is to review the literature in order to describe and analyse the measurement of adult food literacy. The objectives are to i) identify tools that explicitly measure food literacy in adults; ii) summarise their psychometric properties; and iii) critique tool items against the four domains and 11 components of food literacy, as conceptualised by Vidgen and Gallegos. Using the PRISMA guidelines, a search of seven databases (PubMed, Embase, ScienceDirect, Scopus, EBSCOhost, A+ Education, and ProQuest) was undertaken. Twelve studies met the inclusion criteria. Papers reported on either the development of a tool to explicitly measure food literacy or a part thereof (n = 5); food literacy strategy indicators (n = 1); tools developed to evaluate a food literacy intervention (n = 3); or tools to measure food literacy as a characteristic within a broader study (n = 3). Six tools captured all four domains. None measured all components. Items measuring the same component varied considerably. Most tools referenced a theoretical framework and reported validity and reliability testing. This review will assist practitioners to select and develop tools for the measurement of food literacy in their context.


Introduction
Increasing rates of diet-related disease have been linked to an apparent decline in the general population's food knowledge and skills [1,2]. A plethora of commentaries on this association exists in the literature, with authors describing a "gastronomic revolution" [3], an "epidemic of culinary ineptness" [4] and a "dietary cacophony" of conflicting information that "deadens" an individual's capacity to eat [5,6]. Indeed, as society's foodscapes become increasingly complex, there is concern that we are individually and collectively becoming "de-skilled" and no longer possess fundamental food skills and practices for healthy eating [1,7–9].
In light of these issues, the concept of 'food literacy' has emerged as an integrative framework and approach to describe the relevant knowledge, skills and behaviours necessary to achieve a diet aligned with nutrition recommendations. In 2014, Vidgen and Gallegos empirically defined food literacy as "a collection of inter-related knowledge, skills and behaviours required to plan, manage, select, prepare and eat foods to meet needs and determine food intake", as well as "the scaffolding that empowers individuals, households, communities or nations to protect diet quality through change and support dietary resilience over time" [2]. This definition significantly advanced the concept of food literacy and is widely cited as one of the most comprehensive food literacy definitions [10,11].
Along with increasing interest and clarity of food literacy, there has been a growing demand for comprehensive measurement tools [12]. The measurement of food literacy is important to test the

Search Strategy and Information Sources
A systematic literature search was performed in seven databases (PubMed, Embase, ScienceDirect, Scopus, EBSCOhost, A+ Education, and ProQuest) up to 18 November 2018 to identify published tools that explicitly measure food literacy in adults. No limitations were placed on year of publication. The following search terms were used to conduct a full-text search in each database: "food literacy" AND "intervention*" OR "program*" OR "survey*" OR "tool*" OR "question*" OR "measur*" OR "scale*" OR "assess*".

Inclusion and Exclusion Criteria, and Their Application
Articles acquired through the search strategy were imported into EndNote and duplicates were removed. Inclusion and exclusion criteria were applied at two stages of the review (refer to Figure 1). Because the review sought both papers that explicitly measured food literacy and papers that measured food literacy as one of a range of measures, screening occurred at two stages, with greater specificity at each stage. At stage one, titles and abstracts were screened and excluded if they (1) did not focus on an adult population, (2) were not in the English language, (3) were grey literature, or (4) did not report a quantitative measure; that is, if they did not (i) report on the development of a measurement tool; (ii) evaluate an intervention; or (iii) examine a food or nutrition related behaviour. At stage two, the full text of the remaining papers was screened and excluded if (1) it met the exclusion criteria of stage one, (2) the full text was not available, or (3) it did not include a measure of food literacy. Studies were also excluded if they used a tool that had already been identified in the review; in such circumstances, the article that originally published the food literacy measurement tool was prioritised. Publications were included if the article (1) reported on a measurement tool that explicitly referred to 'food literacy' in its conceptualisation or development and (2) included access to all items within its food literacy measurement tool. Each article was screened against the inclusion and exclusion criteria by two reviewers (C.A., H.V.) independently at both stages. Discrepancies were discussed and resolved.

Data Extraction
Data were extracted from each article on: (1) characteristics of the study (including first author, country and purpose of the tool); (2) characteristics of the target group (including recruitment method, sample size, age, education level, ethnicity and socio-economic status); (3) type of food literacy outcome measure; and (4) psychometric properties (including underlying conceptual framework, content validity, face validity, construct validity and reliability). The items from each tool were also extracted to critique against the four domains and 11 components of food literacy. Tool items that were not listed in the article were back-referenced and sourced from the original publication or requested from the author.

Data Synthesis and Analysis
To ascertain content validity of the identified food literacy measures, three authors (C.A., H.V., D.F.) independently reviewed the questionnaire items against the Vidgen and Gallegos conceptualisation of food literacy [2]. Specifically, the items were coded against the four domains and 11 corresponding components of food literacy [2]. No assessment of risk of bias or study quality was undertaken due to the heterogeneity of study designs included in the review.
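Independent coding of this kind is often summarised with a chance-corrected agreement statistic. As a minimal illustration only (the review itself does not report such a statistic, and the item codes below are fabricated, not the review's actual codings), pairwise agreement between two coders could be quantified with Cohen's kappa:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    n = len(coder_a)
    # observed proportion of items the two coders labelled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # expected chance agreement, from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical domain codes assigned to ten tool items by two reviewers
reviewer_1 = ["plan", "plan", "select", "prep", "eat", "prep", "select", "eat", "plan", "prep"]
reviewer_2 = ["plan", "select", "select", "prep", "eat", "prep", "select", "eat", "plan", "eat"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # prints 0.74
```

Kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; with three coders, as here, the statistic would be computed for each pair (or replaced by Fleiss' kappa).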

Results
A total of 12 studies describing 12 different tools met the inclusion criteria (Figure 1) [16,17,19–28]. Searches of the seven databases identified 360 unique records. After screening the titles and abstracts, 269 studies were excluded as they did not meet the inclusion criteria. The full text of the remaining 91 articles was assessed and 79 were excluded. Of those excluded, 20 were conference abstracts or did not have the full text available, two did not have the full text available in English, one reported on a food literacy measurement tool published elsewhere, one was in a school setting and 55 did not include a measurement tool that explicitly referred to food literacy in its conceptualisation or development. No additional studies were obtained through other sources. Table 1 presents the tool characteristics extracted from each paper.

Included papers had four distinct purposes: the development of a tool to explicitly measure food literacy or a part thereof (n = 5); food literacy strategy indicators (n = 1); tools developed to evaluate a food literacy intervention (n = 3); and tools to measure food literacy as a characteristic within a broader study (n = 3). Most tools (n = 7) were used to assess the food literacy of individuals or populations, including adults residing in a particular country [17,24–26], household food gatekeepers [28] and adults diagnosed with cancer [19]. One tool proposed key performance indicators for food literacy strategic directions in a state public health nutrition strategy [22]. Four were also used to compare food literacy levels between people of different socioeconomic status [19,24,25,28] and five examined the relationship between food literacy and food intake [19,26,28]. Six of the 12 identified tools were used to evaluate the effectiveness of food literacy interventions in Australia [20,21,27], Switzerland [16], Canada [22] and the United Kingdom [23].

Sample Characteristics
The sample size of tool respondents ranged from 21 to 62,373 adults. The age of participants ranged from 15 to 96 years. In seven studies, at least 60% of respondents were female. Three studies [20,21,23] used their tool with a high proportion of participants from a low socio-economic background. Four studies were undertaken in Australia [20,21,27,28], five in Europe (France [25], Italy [17], the Netherlands [26], Switzerland [16], the United Kingdom [23]), one in Canada [22] and two in the United States [19,24].

Table 2 summarises the alignment of each reviewed tool against the 11 components of food literacy, which can be collapsed into the four domains of planning and management, selection, preparation and eating. The coding of each item within these tools is detailed in the supplementary material.

Planning and Management
The 'planning and management' domain of food literacy encompasses the ability to prioritise time and money for food (1.1); plan food intake (formally and informally) so that food can be regularly accessed through some source, irrespective of changes in circumstances or environment (1.2); and make feasible food decisions which balance food needs (e.g., nutrition, taste, hunger) with available resources (e.g., time, money, skills, equipment) (1.3) [2]. Of the six tools that assessed this domain, most (n = 5) included items to assess 1.2 or 1.3. Component 1.2 was commonly assessed through the frequency of planning meals (n = 3) [21,24,28] and using a shopping list (n = 2) [21,28], including planning specifically to meet nutrition recommendations [21] or in anticipation of distractions, with food decisions adjusted accordingly [26]. One study assessed confidence planning meals [27]. Tools that measured 1.3 included items on confidence choosing foods that are the best value for money [17,21,28], maintaining a focus on healthy eating irrespective of cost [26] and deciding what to eat [17,24]. Measures also examined balancing time for meal preparation with other responsibilities [24], and critiquing external influences, such as social marketing [17]. Three tools captured component 1.1 by gauging participants' attitude towards prioritising time for cooking [24,28] or their experience of running out of money for food [21].

Selection
Measurement of the 'selection' domain of food literacy requires investigation into the ability to access food through multiple sources and know the advantages and disadvantages of these sources (2.1); determine what is in a food product, where it came from, how to store it and use it (2.2); and judge the quality of food (2.3) [2]. Seven of the eight tools that evaluated this domain assessed how well participants can source information about a food product (2.2). To do this, four tools [19,21,26,28] evaluated food label use, four [16,17,24,28] gauged food label reading confidence and one [19] included a label reading task to assess food label comprehension. Palumbo et al.'s tool included a greater number of items to measure a wider range of food information topics (e.g., provenance) and sources (e.g., digital media) [17]. Items used to measure this component tended to overlap with components 4.1 and 4.2, specifically the nutrition knowledge needed to interpret the label and the motivation to select the healthier product. Three tools alluded to food safety as a component of 'quality' food [17,26], while one attributed 'quality' to 'natural' foods and those free of additives and preservatives [28]. The analysis revealed that three measures captured component 2.1; specifically, these assessed confidence shopping [27], knowledge of where to source particular foods [24], and awareness of the social, economic and environmental impact of food choices [17].

Preparation
'Preparation' was the most common domain captured in the identified food literacy measurement tools. The majority of tools (n = 10) assessed component 3.1 (i.e., make a good tasting meal from whatever food is available, including the ability to prepare commonly available foods, efficiently use common pieces of cooking equipment and have a sufficient repertoire of skills to adapt recipes, written or unwritten, and to experiment with food and ingredients) via self-perceived confidence with cooking techniques (e.g., confidence using kitchen equipment) or meal preparation (e.g., confidence cooking from basic ingredients or following a simple recipe) [17,20,21,23–28]. Other items corresponding with 3.1 assessed confidence trying and preparing new foods [20,21], as well as attitude towards cooking (e.g., cooking enjoyment) [24,25,28]. Some took a particular focus on preparing healthy foods [21,23,26]. One tool included an inventory of key items of kitchen equipment [25]. Tools ranged considerably in their level of specificity, from an overall statement about cooking in general [23] to over 30 items on particular ingredients and dishes [25], and in their focus (e.g., confidence, frequency, attitude or behaviour). Two tools measured component 3.2 (i.e., apply basic principles of safe food hygiene and handling). Begley et al. [21] asked participants how frequently they thaw meat at room temperature, while Palumbo et al. [17] gauged confidence accessing information about food safety and hygiene practices.

Eating
Food literacy includes understanding that food has an impact on personal wellbeing (4.1); demonstrating self-awareness of the need to personally balance food intake, including knowing which foods to include for good health, which to restrict, and appropriate portion size and frequency (4.2); and being able to join in and eat in a social way (4.3) [2]. Four tools measured component 4.1 by assessing how often participants consider healthy choices when eating or preparing a meal [17,21,27,28]; in this way, there was significant crossover with components in other domains. Palumbo et al. [17] included items related to specific individualised health needs, as opposed to the population nutrition recommendations that are more the focus of component 4.2. Six tools captured component 4.2; of these, two [16,19] included knowledge questions relating to portion sizes and national dietary guidelines. One tool was tailored to the context in which it was administered and included specific nutrition knowledge questions related to the target audience of the food literacy intervention [27]. The remaining tools assessed self-perceived confidence with nutrition knowledge [17,28] or competence balancing food intake [16,26]. Two food literacy measurement tools investigated social eating (4.3) and relied on self-reported attitude toward shared eating occasions [24,26].

Psychometric Properties
Content validity was assessed in nine articles through consultation with dietitians, public health experts and food literacy experts [17,20–22,26,27], and by pooling items from pre-existing tools [16,21,24,26,28] or population monitoring and surveillance systems [22] (see Tables 1 and 3). Table 3 reports the face, content and construct validity, and reliability, of the reviewed tools. Eight tools [16,17,20,21,24,26–28] reported the reliability of their measure using Cronbach's alpha, with internal consistency ranging from α = 0.76 to 0.95. The highest reported values were α = 0.94 for the confidence in cooking, shopping, planning and purchasing scale in the Wallace et al. tool [27], and α = 0.912 for the general food literacy scale in the Palumbo et al. tool [17]. The weakest internal consistency was reported for the selection, plan and manage, and preparation scales, at α = 0.76, 0.79 and 0.81 respectively, in the Begley et al. tool [21]. Eight tools were face validated [16,17,20,21,24–26,28]. Five tools reported examining construct validity [16,17,21,24,26]; to test for construct validity, two tools regressed against gender and education, assuming food literacy would be higher among females and those with a higher education [16,24]. Overall, five food literacy measurement tools reported content, face and construct validity, as well as reliability [16,17,21,24,26].
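The internal consistency values reported above follow the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σ item variances / variance of total scores), for a scale of k items. As an illustrative sketch only (the Likert responses below are fabricated, not data from any reviewed tool), the statistic can be computed as:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a score matrix (rows = respondents, columns = scale items).

    alpha = k/(k-1) * (1 - sum of item variances / variance of respondents' total scores)
    """
    k = len(rows[0])
    item_vars = [variance(row[j] for row in rows) for j in range(k)]  # per-item spread
    total_var = variance(sum(row) for row in rows)  # spread of summed scale scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Fabricated 5-point Likert responses to a hypothetical 4-item scale
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))  # prints 0.94
```

Values closer to 1 indicate that items move together (high internal consistency); thresholds of around 0.7 and above are conventionally treated as acceptable, which all eight reporting tools met.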

Outcome Measure
Of the 12 tools identified, five evaluated the impact of food literacy on dietary intake (see Table 1). Dietary outcomes were assessed in various ways, including specific foods (e.g., core food groups, fruits, vegetables, fish, herbs/spices/salt, unsaturated spreads and oils, sugar-sweetened beverages), food types (e.g., fibre, discretionary choices, snacks), or nutrients (energy, macronutrients and micronutrients) [19,20,23,26,27]. Only two tools cited a validated dietary intake measure, namely the US Department of Agriculture's five-step multiple-pass method [20] and a validated food frequency questionnaire [26].

Discussion
This review revealed that this is a rapidly emerging area of public health nutrition activity, with all of the papers being published in the last three years. Of particular note is that all papers reporting the development of a tool to explicitly measure food literacy, or a part thereof, were published in the last two years. These papers reported the most rigorous processes of development.
Food literacy measurement tools are becoming increasingly multidimensional, which reflects emerging theories that define the construct with multiple domains [10,11,54–56]. A 2012 review of 21 food literacy interventions targeting disadvantaged youth found that 90% of evaluation tools measured the 'preparation' domain, while only 30% measured 'planning and management' [57]. Although the present review found that 'preparation' still dominates most tools (n = 11; 92%) and 'planning and management' remains the least captured (n = 7; 58%), there is now a more even distribution of domains within and among tools. Half of the reviewed tools [17,21,24,26–28] captured all four domains of food literacy. These findings suggest food literacy measurement is extending beyond the cooking and meal preparation paradigm and recognising recent theoretical advances.
Existing food literacy measurement tools generally allude to a conceptual framework; however, interpretation and application of the theory is still limited. While the majority of tools (7/12) used the Vidgen and Gallegos empirical definition of food literacy [2], indicating greater agreement on its conceptualisation, there was significant variation in how the definition was applied. Alignment of the tools to the four domains of food literacy was typically clear; yet, there was difficulty coding the tool items against the 11 components. This was especially demonstrated by the disagreement between the coding of items by the tool developers and this paper's review team, which included the developer of the cited definition. This could be attributed to the highly inter-related nature of the food literacy components [2] and the construct as a whole [10], or may point to a need for the components to be more explicit and independent of each other. As such, it may be necessary for a food literacy measurement tool to address all 11 components of food literacy in order to appropriately capture the construct, rather than conflating these to the four domains. No tool was found to align with all 11 components, and even the five most comprehensive tools [17,21,24,26,28] missed at least three. This may also reflect the conceptual nature of the Vidgen and Gallegos components and the fact that they have not been tested quantitatively. Reviewed papers that described the development of a tool to explicitly measure food literacy or a component of it [16,17,21,24,26] also described the process of beginning with a larger pool of items, which were later discarded as the tools moved through various stages of validation. This review included only the final set of items; it may be that items aligning with missing components were in the original item pools.
Not only did tools vary considerably in the extent to which they included domains and components of food literacy, but items measuring specific components also varied between tools. That is, the questions used to measure what is indicative of, for example, food preparation were very inconsistent. This variation extended to the focus of questions. More tools used subjective (self-report) measurement approaches than objective (task-based) items when measuring food literacy. According to the literature, self-reported confidence in food preparation, cooking and label reading does not necessarily translate to everyday use of such skills [12,58–60]. Similarly, self-reported food safety practices are particularly prone to social desirability bias and inaccurate responses [12]. Nevertheless, most food literacy measurement tools required participants to self-report their confidence undertaking such behaviours. Only one tool [19] used a task-based item to assess label reading skills (component 2.2), thereby increasing the reliability of results. Moreover, many food literacy measurement tools included subjective attitudinal items pertaining to food mavenism (e.g., "I find cooking a very fulfilling activity" [24]; "I consider myself to be an excellent cook" [28]) to gauge the 'preparation' domain. While food mavenism and pleasure may be positive predictors of food knowledge and involvement [61,62], a food literate person does not have to be a 'food maven'. In fact, both food literacy experts and young people experiencing disadvantage agreed that food preparation skills only needed to be "basic" to support needs [63]. Greater consideration of objective items is required to reduce social desirability bias and improve the validity of food literacy measurement tools. Notably, items were generated using existing tools or expert consensus, rather than empirical observation of people's lived experience or evidence of behaviours that result in improved dietary outcomes.
Although most food literacy measurement tools have been assessed for validity and reliability, they are limited by inadequate validation methods and narrow sample demographics. To test for construct validity, some tools regressed food literacy against social determinants of health, such as education [16,24] and income [24]. While poverty, social exclusion, social support, geography and transport can influence the development of food literacy, people from all socio-economic backgrounds are capable of demonstrating food literacy [63]. Therefore, validating the food literacy construct against education and income may not be accurate. Additionally, existing food literacy measurement tools were content validated with dietitians and public health experts [20], food literacy experts [16,17,21,26,27] and pre-existing scales [16,19–21,24,28]. No tools were content validated with the general population; however, such validation may be important given that food literacy is an everyday practice. Furthermore, sample respondents of the food literacy measurement tools were relatively homogenous: most tools were tested with highly educated females living in Western countries, and none were applied across multiple contexts. Because of these limitations, it is difficult to determine the true validity and applicability of existing food literacy measurement tools in different contexts and, in particular, whether it is possible to compare food literacy between groups (e.g., different countries).
All reviewed measures were developed or used within the health paradigm. As such, they reflect food literacy for dietary outcomes, rather than a broader conceptualisation of food wellbeing. The limited food literacy outcomes measured by existing tools further constrain their use in diverse contexts. A comprehensive tool should encompass multiple indicators against which to measure food literacy, including diet quality [63], intake of ultra-processed foods [64], food security [65] and sustainable eating [66]. Currently, however, tools are measured only against food intake outcomes. To maximise relevance and applicability, food literacy measurement tools should measure against a range of outcomes. Our review chose to include only those papers that explicitly mentioned food literacy. Another approach could have been to look for existing measures of each individual component; this may have resulted in a broader set of outcomes, e.g., food waste, being identified.
Although it is challenging to conclude which existing food literacy measurement tool is best, the findings of this review underscore the importance of using a multidimensional tool validated in the appropriate context. When choosing a suitable tool, those that capture the greatest number of food literacy components should be prioritised, as more comprehensive tools are more likely to accurately capture the interrelated nature of the construct. In this review, the tools used by Begley et al. [21], Lahne et al. [24], Palumbo et al. [17], Poelman [26] and Wijayaratne et al. [28] were found to be the most comprehensive. To further maximise validity, tool selection should be contextually driven. Existing tools have been validated with the general adult population [17,24–26], participants from low socio-economic backgrounds [20,21,23], household food gatekeepers [28] and adults diagnosed with cancer [19]. Furthermore, they have been applied only in Western countries, namely Australia [20,21,27,28], France [25], Italy [17], the Netherlands [26], Switzerland [16], the United Kingdom [23] and the United States [19,24]. There are therefore many contexts in which food literacy measurement tools have yet to be validated, and further research is required to address these gaps. Application of highly comprehensive and contextually relevant tools will enhance the validity of food literacy measurement and assist in advancing the construct. Finally, only one tool considered performance indicators of food literacy at a population level within existing monitoring and surveillance systems [22]. Given that food literacy is conceptualised as existing at the "individual, household, community or national level" [2], and is included in an increasing number of key national and state public health nutrition plans [67], this is an aspect of measurement which requires greater attention.
In considering these findings, certain limitations of this review should be noted. Only peer-reviewed journal articles published in English were included, which may have introduced selection bias. Furthermore, no reporting guidelines were used to evaluate the quality of studies. Finally, the food literacy measurement tools were coded from a nutrition paradigm, which may have introduced bias. Despite these limitations, this review is strengthened by its compliance with the PRISMA guidelines and its robust search strategy conducted across seven databases. Moreover, this review appraised existing food literacy measurement tools against the Vidgen and Gallegos food literacy conceptualisation and offers valuable insight into the current state of food literacy measurement. It should also be noted that this review examined only those tools targeting adults. During the screening process, many additional tools targeting children and school settings were identified but excluded (refer to Figure 1). Analysis of these tools would further add to the field, particularly within the context of education.

Conclusions
There are currently 12 tools available that measure food literacy among adults. Tools varied considerably in their item type, indicating that it is not yet possible to compare food literacy interventions, determine their effectiveness, report on populations over time, or, most importantly, determine the relationship between food literacy and food intake. While half of the tools capture all four domains of food literacy, the application of theoretical frameworks is limited. To date, no tools have been explicitly built from the Vidgen and Gallegos conceptualisation or crafted to capture all 11 components of food literacy. Furthermore, existing tools have been validated across limited contexts and have relied heavily on self-report methods that are prone to bias. Existing tools must be applied with care, and further research is required to develop comprehensive tools that are contextually valid. This review helps advance the measurement of food literacy and provides useful information that will assist researchers in selecting and developing validated food literacy measurement tools.