Article
Peer-Review Record

Creating a Mind Genomics Wiki for Non-Meat Analogs

Sustainability 2020, 12(13), 5352; https://doi.org/10.3390/su12135352
by Attila Gere 1,*, Ariola Harizi 2, Nick Bellissimo 3, Derek Roberts 4 and Howard Moskowitz 5
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 26 May 2020 / Revised: 18 June 2020 / Accepted: 27 June 2020 / Published: 2 July 2020
(This article belongs to the Special Issue Sustainability in Sensory Analysis and New Food Product Development)

Round 1

Reviewer 1 Report

Artificial intelligence applications are perfect, fast, efficient and work according to pre-inserted programming. Conclusions are drawn without errors based on the entered data.
Unfortunately, the software's perfection is limited by the person who designed it, whose thinking is not mathematically perfect and is variable over time.
Hypothetically, it is possible to assume that one person's evaluation of a "story of answers" may change over time. Does this mean that, when the same "story of answers" is repeatedly presented to the same person, that person may judge it differently each time?
A person's decision/the energy of his/her idea cannot be described by data, as it is influenced by a complex of various internal and external factors, the number of which cannot be expressed numerically or specified in detail.
Experiments performed with matter (e.g., tissue samples) are repeatable in reality, and if the conditions defined by the experiment are met, different research teams should always obtain the same results.
As for the repeatability of an experiment involving a person's decision/the energy of his/her idea, I see a great disadvantage. I believe that the conclusion provided by the software may be different if the same sets of "response stories" are repeatedly presented to the same person, as it is possible to assume that a person will change their evaluation depending on their current mental state. The way of human thinking, its instability and creativity, cannot be captured by data and passed on to hardware.

Author Response

The authors thank the reviewer for the detailed work and the helpful advice. Our research team has carefully read all comments and answered them in detail. Our answers are highlighted in bold and listed below:

Reviewer 1:

Artificial intelligence applications are perfect, fast, efficient and work according to pre-inserted programming. Conclusions are drawn without errors based on the entered data.

Unfortunately, the software's perfection is limited by the person who designed it, whose thinking is not mathematically perfect and is variable over time. Hypothetically, it is possible to assume that one person's evaluation of a "story of answers" may change over time. Does this mean that, when the same "story of answers" is repeatedly presented to the same person, that person may judge it differently each time?

A person's decision/the energy of his/her idea cannot be described by data, as it is influenced by a complex of various internal and external factors, the number of which cannot be expressed numerically or specified in detail.

Experiments performed with matter (e.g., tissue samples) are repeatable in reality, and if the conditions defined by the experiment are met, different research teams should always obtain the same results.

As for the repeatability of an experiment involving a person's decision/the energy of his/her idea, I see a great disadvantage. I believe that the conclusion provided by the software may be different if the same sets of "response stories" are repeatedly presented to the same person, as it is possible to assume that a person will change their evaluation depending on their current mental state. The way of human thinking, its instability and creativity, cannot be captured by data and passed on to hardware.

The question raised by Reviewer 1 is quite common in the field of consumer studies (from market research to sensory analysis). When dealing with people's opinions, there are two major barriers we need to highlight:

  1. the validity of the typed answer (e.g., does the respondent really agree or disagree with the presented statement, or is their answer influenced by external factors, such as social expectations?). This problem is addressed by the repeated and fast presentation of ideas, so respondents do not have a long time to think about their answers.
  2. the ever-changing opinions of consumers. Many scientific fields deal with the ever-changing opinion of consumers. Since we are working with people and food products, it cannot be expected that the same respondent will have exactly the same opinion after repeated exposures. However, there are some techniques available to handle this. One factor is the sample size, which was set to 200 in the presented study (100 from California and 100 from New York). When dealing with average results, the differences among the opinions of the respondents fade away. If we asked the same 200 respondents again, the average results would not be expected to change, since the individual differences would compensate for one another (a minimal simulation sketch of this averaging argument is given below). One thing, however, cannot be controlled, namely significant environmental effects, for example media hype around the analysed topic (e.g., COVID-19 studies run in March vs. in June) or the effect of season when dealing with seasonal food products (such as ice creams in winter or Christmas sweets in summer). When the study was run, there were no such external factors, since there was no media hype around meat alternatives, and meat alternatives do not seem to have any seasonality.
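
The following is a minimal simulation sketch of the averaging argument, not study data: it assumes a hypothetical 9-point rating that drifts randomly between two survey waves and shows that the group mean over 200 respondents stays essentially stable even though individual answers change.

```python
# Hypothetical simulation: individual ratings drift between two waves,
# but the group mean over 200 respondents stays essentially the same.
import numpy as np

rng = np.random.default_rng(42)
n_respondents = 200                      # 100 from California + 100 from New York

# Assumed "true" opinions on an illustrative 1-9 scale
true_opinion = rng.uniform(3, 8, size=n_respondents)

def run_wave(true_opinion, drift_sd=1.0):
    """One survey wave: each rating wobbles around the respondent's true opinion."""
    ratings = true_opinion + rng.normal(0, drift_sd, size=true_opinion.size)
    return np.clip(np.round(ratings), 1, 9)

wave_1 = run_wave(true_opinion)
wave_2 = run_wave(true_opinion)          # the same people asked again later

print("Mean, wave 1:", wave_1.mean())    # the two means differ only marginally,
print("Mean, wave 2:", wave_2.mean())    # while individual answers change freely
print("Share of respondents who changed their rating:", np.mean(wave_1 != wave_2))
```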

We need to highlight that, due to human nature and the complexity of the consumer's mind, the presented results should not be treated as general, timeless findings, but rather as a representation of the current situation of meat alternatives. The studies should be repeated over time, as general knowledge about meat alternatives shows an increasing tendency.

Reviewer 2 Report

The paper is interesting for its idea of applying this methodology to the food sector. Nevertheless, I would have appreciated it if the sample had been selected more carefully in order to obtain more consistent results. I see that there are only young people; moreover, 100 persons per test is a very small sample for these types of studies. Therefore, I think that this study demonstrates the applicability of the method, with some limits that could be improved in future studies.

This is an interesting starting point for you or other researchers who wish to improve the methodology and so obtain more consistent results. With regard to the methodology, you could provide, for example, some vignettes, or provide clarifications for Table 2. Please explain how the 24 vignettes were created. Who decided the questions? How?

Were the respondents taught how to rate the vignettes?

Moreover, I think that a common consumer does not think that a score of 6 could mean a 0 or a low feeling. Maybe the authors could have divided the scale into three parts rather than two.

How could consumers test sensory attributes by watching vignettes?

Comments for author File: Comments.pdf

Author Response

The authors thank the reviewer for the detailed work and the helpful advice. Our research team has carefully read all comments and answered them in detail. Our answers are highlighted in bold and listed below:

The paper is interesting for its idea of applying this methodology to the food sector. Nevertheless, I would have appreciated it if the sample had been selected more carefully in order to obtain more consistent results. I see that there are only young people; moreover, 100 persons per test is a very small sample for these types of studies. Therefore, I think that this study demonstrates the applicability of the method, with some limits that could be improved in future studies.

We have looked at the issue of base size. The typical base size of these studies is 100-300. 

Cattin, P., & Wittink, D. (1982). Commercial Use of Conjoint Analysis: A Survey. Journal of Marketing, 46(3), 44-53. doi:10.2307/1251701

http://www.macroinc.com/english/papers/A%20Users%20Guide%20to%20Conjoint%20Analysis.pdf

This is an interesting starting point for you or other researchers who wish to improve the methodology and so obtain more consistent results. With regard to the methodology, you could provide, for example, some vignettes, or provide clarifications for Table 2. Please explain how the 24 vignettes were created. Who decided the questions? How?

These are concept studies. The vignettes are created by a systematic design that is permuted, so that each respondent sees a different permutation of the design. This creates different vignettes for each respondent. Each respondent evaluates the same formal set of 24 vignettes (the structure is the same), but the specific combinations vary. Each vignette contains at most one element from a silo, but not all silos are necessarily used. This means that a vignette can list one to four elements. This is required in order to provide equal presentation of all the elements. The silos and the elements were created by the research team after a careful and thorough analysis of the scientific literature on the topic, based on Scopus and Web of Knowledge searches. A simplified sketch of such a permuted design is given below.
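
As an illustration only, the sketch below generates a 24-vignette design per respondent with at most one element per silo and one to four elements per vignette. The silo and element names are placeholders, and random selection stands in for the actual, more constrained Mind Genomics permutation scheme.

```python
# Simplified, illustrative stand-in for the permuted vignette design described above:
# 4 silos of 4 elements each, 24 vignettes per respondent, at most one element per silo,
# 1-4 elements per vignette. The real permutation scheme is more constrained.
import random

SILOS = {f"Silo {s}": [f"{chr(64 + s)}{i}" for i in range(1, 5)] for s in range(1, 5)}

def build_vignettes(n_vignettes=24, seed=None):
    rng = random.Random(seed)
    vignettes = []
    for _ in range(n_vignettes):
        n_elements = rng.randint(1, 4)                    # 1-4 elements per vignette
        silos_used = rng.sample(list(SILOS), n_elements)  # at most one element per silo
        vignettes.append({silo: rng.choice(SILOS[silo]) for silo in silos_used})
    return vignettes

# Each respondent receives a different permutation (here approximated by a different seed).
respondent_designs = {resp_id: build_vignettes(seed=resp_id) for resp_id in range(1, 201)}
print(respondent_designs[1][:3])   # first three vignettes shown to respondent 1
```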

 

Were the respondents taught how to rate the vignettes?

No, respondents were not trained; the whole process was explained at the beginning of the study. The aim was to rate each vignette as a whole on the presented scale.

We have begun to address this issue in three ways, which will become standard operating practice once they are validated.

 

  1. Insert an orientation vignette in the 0th position. This is the 24th combination that the respondent evaluates, but it is presented first and its data are not captured.
  2. Look at the average ratings across time, for the first 4 vignettes, the second 4 vignettes, etc., to see whether there is a marked change in the average, recognizing that each respondent is evaluating a unique set of combinations.
  3. Build models using the ratings of the vignettes for the first half of the study (positions 1-12) and for the second half of the study (positions 13-24); a split-half sketch illustrating this check is given after this list. We do not notice dramatic, systematic shifts, although there is a certain variability from the first set of positions to the second set.
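
A minimal sketch of the split-half check in item 3, under assumed column names (a long-format table with one row per respondent-vignette, 0/1 element columns, a 'position' column, and a binarized rating); this is not the study's actual analysis code.

```python
# Illustrative split-half check: fit the same additive model on positions 1-12
# and 13-24 and compare element coefficients. Large values in the 'shift' column
# would indicate systematic drift over the course of the interview.
import numpy as np
import pandas as pd

def fit_half(df, element_cols, rating_col="binary_rating"):
    """Ordinary least squares with an intercept; returns one coefficient per element."""
    X = np.column_stack([np.ones(len(df)), df[element_cols].to_numpy()])
    coefs, *_ = np.linalg.lstsq(X, df[rating_col].to_numpy(), rcond=None)
    return pd.Series(coefs[1:], index=element_cols)

def split_half_comparison(df, element_cols):
    first = fit_half(df[df["position"] <= 12], element_cols)
    second = fit_half(df[df["position"] > 12], element_cols)
    return pd.DataFrame({"first_half": first, "second_half": second,
                         "shift": second - first})
```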

 

Moreover, I think that a common consumer does not think that a score of 6 could mean a 0 or a low feeling. Maybe the authors could have divided the scale into three parts rather than two.

Since respondents are not "calibrated", their answers show high deviations. In order to extract strong positive agreement, we chose to split the scale between 6 and 7; the upper 30% of the scale can be considered strong agreement. A frequently used option is what the reviewer suggests, namely also splitting the scale between 3 and 4, creating a group of strong disagreement (ratings 1, 2 and 3). Since the major aim of the paper was to find the key elements participants agree with, we did not include the results for disagreement. A small sketch of both recodings is given below.
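
For illustration, assuming the 9-point scale implied by the split points above and the common 0/100 recoding used before modelling (an assumption, not a statement of the paper's exact coding), the two recodings look as follows:

```python
# Hypothetical recodings on a 9-point scale (assumptions for illustration).

def binarize_agreement(rating: int) -> int:
    """Split between 6 and 7: ratings 7-9 count as strong agreement (100), 1-6 as 0."""
    return 100 if rating >= 7 else 0

def classify_three_way(rating: int) -> str:
    """The reviewer's alternative: an additional split between 3 and 4."""
    if rating <= 3:
        return "strong disagreement"
    if rating >= 7:
        return "strong agreement"
    return "neutral"
```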

How could consumers test sensory attributes by watching vignettes?

Conjoint studies are used to create hypothetical (food) products, which can be rated by consumers even if the product has never been produced. This way, the major consumer opinions can be mapped quickly and cheaply. Additionally, a higher number of product alternatives can be tested, since every presented vignette could be a product, which could not have been managed during a consumer sensory test. We are aware that sensory attributes should be tested on real products; therefore, the presented results should be used by product developers to find the most promising directions for new developments.

 

The comments included in the pdf version of the manuscript have also been addressed. 
