Article
Peer-Review Record

Reciprocal Personality Assessment of Both Partners in a Romantic Relationship and Its Correlates to Dyadic Adjustment

Soc. Sci. 2019, 8(10), 271; https://doi.org/10.3390/socsci8100271
by Evelyne Smith, Adèle Guérard, Hugues Leduc and Ghassan El-Baalbaki *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 14 August 2019 / Revised: 14 September 2019 / Accepted: 20 September 2019 / Published: 27 September 2019

Round 1

Reviewer 1 Report

This study is reasonably well-conducted and the write-up is coherent and well-organized. The methods seem adequate as well. On the other hand, this study is incomplete. The authors explicitly state that they withheld available data (e.g., behavioral interview data) in an effort to engage in piecemeal publication. They do not state formal hypotheses, though it is obvious that specific hypotheses are tested. The discussion section is largely repetitive with the results section.

I am not convinced that this paper makes a large contribution to the personality, couples therapy, or dyadic romantic relationship literatures. The literature review is decent, but also somewhat incomplete, as the authors ignore a large portion of personality literature and selectively interpret prior literature that conforms to their constructs of choice.

 

Overall, this manuscript has a lot of potential for improvement. In my review, I have attempted to offer constructive advice for how to revise this manuscript, regardless of whether the ultimate journal is Social Sciences or elsewhere. I have organized my comments by section. I hope the authors find these comments to be helpful.

 

Introduction

The authors’ description of the Honesty factor in the HEXACO model is not quite accurate. Their claim that Honesty is “a sub-product of two Big Five traits (neuroticism and agreeableness)” does not quite align with the last citation offered (i.e., Ashton & Lee, 2001). Specifically, the relevant quote from Ashton and Lee (p. 322) is “the terms defining lexical Honesty are usually located in the planes involving Agreeableness and either Conscientiousness or Emotional Stability; however, most of these terms show quite modest loadings on these factors.”
I appreciate the authors’ cursory inclusion of Block’s (1995) critique; however, my hesitation with the conceptual description of personality in the introduction is not limited to the discovery of, and number of factors in, the Galtonian lexical framework.

I urge the authors to consider the potential benefits of moving beyond a description of traits. Trait-based personality is visible to the outside world (and, therefore, to romantic partners), as described by McClelland (1951). However, motive-based personality (see Murray, 1938) or the integration of traits and motives (see Cattell, 1946; Cattell & Kline, 1977; McClelland, 1985) may provide another promising avenue for dyadic personality research.

 

Cattell, R. B. (1946). Description and measurement of personality. New York: World Book.

 

Cattell, R. B. & Kline, P. (1977). The scientific analysis of personality and motivation. New York: Academic.

 

McClelland, D. C. (1951). Personality. New York: William Sloane Associates.

 

McClelland, D. C. (1985). How motives, skills, and values determine what people do. American Psychologist, 40, 812-825.

 

Murray, H. A. (1938). Explorations in personality. New York: Oxford University Press.

 

The description of DSM-5-based maladaptive personality was excellent. I would caution the authors, however, not to fully endorse the construct equivalence between the PID-5 and NEO on the basis of Gore & Widiger (2013), as this article used an undergraduate (i.e., “normal range”) sample as opposed to comparing the measures in both clinical and non-clinical settings. Gore & Widiger’s article is excellent and well-conducted, but the external validity is limited by their sample.

To be clear, the current paper draws on a “normal” sample as well, so it is appropriate to cite Gore & Widiger here. I understand that this is a “picky” reviewer comment, but I would like the authors to note that the empirical evidence for construct equivalence is limited to sub-clinical populations.
Page 3: the authors cite research demonstrating the utility of neuroticism in predicting relationship satisfaction. Neuroticism has also been linked to work satisfaction and (more generally) life satisfaction, which is reasonable considering the content of the construct. Is relationship satisfaction different in some way, or is the effect of neuroticism on satisfaction with aspects of life a more general phenomenon?
Minor note: Decuyper et al.’s PD:TRT article was published in 2018, not 2016.

Related to this article (which the authors cite repeatedly), Study 2 of Decuyper et al. (2018) demonstrated that perceived similarity on maladaptive personality traits was a positive predictor of relationship satisfaction for men. This “perceived similarity” perspective could easily be incorporated into the present study in order to align better with existing literature. In short, the authors only seem to focus on Study 1’s contribution while ignoring the relevance of Study 2.
On pages 4-5, the authors state “Previous research has shown gender differences with regard to the effect that personality has on dyadic adjustment, but results have often been inconsistent due to methodological issues.” I believe readers would benefit from a citation and/or description of the specific “methodological issues” referenced here as well as how and why they would lead to incongruent results across previous studies.
Why are there no formally stated hypotheses?
Method

While the justification for limiting the sample to heterosexual couples was acceptable, I would like to see better justification for the age constraint, relationship status constraint, and relationship duration constraint. In the introduction, the authors claim that peer-report quality is related to length of acquaintance.

Why do the authors believe that 6 months is long enough for couples to provide adequate partner ratings?
Additionally, why not include married couples in the current sample? The authors claim that this is the first wave of a longitudinal study (and, therefore, admit their plan to pursue piecemeal publication, which is a potential ethical issue). However, what harm would come from including married couples at this stage? In my view, it would actually *enhance* the present study by allowing the authors to analyze relationship status as a moderating mechanism.
Finally, the authors describe age 21 as a phase of relationship development, which is fine. However, the upper limit of 30 (when personality change allegedly slows) makes very little sense to me for two reasons. First, when relying on peer-reports (and even self-reports), wouldn’t it be ideal for personality to be as stable as possible, necessitating the inclusion of older samples (or at least a wider range of age groups)? Second, given the stated plan for longitudinal data collection, wouldn’t this constraint limit the authors’ ability to explore lifespan data moving forward?
I appreciate the addition of the supplement. However, I think readers may be curious to see the socio-demographic variables and relationship satisfaction variables included in these correlation matrices.
If using a sample recruited through Canadian universities and communities, why were scores normed using American norming tables (especially questionable for the DAS, which has available Canadian norms) as opposed to norming based on the sample?
Why is alpha reported for the PID-5 scales, but not for the NEO or relationship satisfaction scales?
The use of Spanier’s (1976) scale is highly questionable. First, as the authors used the entire scale (and included the variable name “dyadic adjustment” in figures), subheading 2.2.3 should be labeled “dyadic adjustment” instead of “relationship satisfaction.” Second, items 29, 30, and 32 are not measured on Likert-type scales, as the authors described. If the scale or response options were changed, the authors should explicitly state what changes were made. Third, Spanier’s scale was designed for use in married couples, a population the authors purposefully excluded from this study. While easily adaptable to cohabitating couples, some item content may be inappropriate outside of the context of marriage (e.g., “handling family finances” or “ways of dealing with parents or in-laws” or “career decisions”). While some committed dating couples may address these issues, I have trouble imagining that these items are relevant to all couples who dated for less than a year.
Results

Given the large number of relationships examined in this study, the authors should be cautious of familywise alpha. For example, the t-tests presented in Table 1 involve 20 different comparisons. Perhaps the authors should consider employing the Bonferroni correction here (i.e., interpreting statistical significance at .0025, or .05/20, as opposed to .05), as well as in other sets of analyses that involve multiple comparisons or relationships. This is especially important for the results reported in Tables 2, 3, and 4.
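For concreteness, the Bonferroni adjustment suggested here simply divides the familywise alpha by the number of comparisons in the family; with the 20 tests of Table 1 this gives

\[
\alpha_{\text{per test}} = \frac{\alpha_{\text{familywise}}}{m} = \frac{.05}{20} = .0025 .
\]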
The authors routinely make statements in this section such as “these results support hypothesis 1.” However, as I noted earlier, while the introduction contains a literature review, no formal hypotheses were stated by the authors. As a result, I found these statements quite confusing, and I suspect readers will also find them confusing. Are the “hypotheses” here simply attempted replications of previously discovered empirical relationships between self- and peer-rated personality and relationship satisfaction?
A minor quibble: the authors keep describing effects “when all other effects were controlled for.” In simultaneous-entry (i.e., non-hierarchical) models, the correct term is “held constant” as opposed to “controlled for.” The difference is related to the calculation of type I versus type III sums of squares (or partial and semi-partial correlations), which demand different interpretations.
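As a brief illustration of why these quantities demand different interpretations (using the standard formulas for a two-predictor case), the semi-partial correlation removes the second predictor from the focal predictor only, whereas the partial correlation removes it from both the focal predictor and the outcome:

\[
sr_{Y(X_1 \cdot X_2)} = \frac{r_{YX_1} - r_{YX_2}\, r_{X_1 X_2}}{\sqrt{1 - r_{X_1 X_2}^{2}}},
\qquad
pr_{Y X_1 \cdot X_2} = \frac{r_{YX_1} - r_{YX_2}\, r_{X_1 X_2}}{\sqrt{\bigl(1 - r_{YX_2}^{2}\bigr)\bigl(1 - r_{X_1 X_2}^{2}\bigr)}} .
\]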
Why are no omnibus model fit indices included for any of the models? Additionally, as Models 1 and 2 were nested in Model 3, change (i.e., model comparison) statistics could have been reported to communicate the incremental contribution of self- or peer-rated personality to the overall model.
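As a sketch of the kind of comparison statistic meant here (assuming maximum likelihood estimation throughout), the chi-square difference test for nested models is

\[
\Delta\chi^{2} = \chi^{2}_{\text{constrained}} - \chi^{2}_{\text{full}},
\qquad
\Delta df = df_{\text{constrained}} - df_{\text{full}},
\]

where \(\Delta\chi^{2}\) is referred to a chi-square distribution with \(\Delta df\) degrees of freedom; an equivalent comparison can be made with the change in \(-2\ln L\).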
Discussion

Aside from the section of 4.2 in red font (i.e., lines 655-672 of page 17), the Discussion does not provide much beyond a basic summary and restatement of results. I found most of these sections to be repetitive and unnecessary, though I thought the part in red font was excellent.
Given that “there are no known studies on the direct effect of personality traits on couple satisfaction in same-sex couples” (p. 18), why would the authors not focus on homosexual couples in order to maximize their contribution? The authors admit that their results primarily serve to replicate what is already known.

As a related note, inclusion of a qualitatively different population (e.g., same-sex couples) will not influence the generalizability of results unless the results align well with those from heterosexual couples. Given the intense focus on gender in this study, such a comparison may not even be possible. Inclusion of this specific different population will enhance the research, but it will do so by allowing the researchers to examine a moderating effect (rather than generalizing to a broader population).
Personally, I am bothered by statements such as “additional data was collected for this study but not used in this article” (p. 18). I imagine readers will feel the same way, as this indicates that the authors are intentionally limiting this manuscript by not fully disclosing all available information (or that the authors were capable of including a more complex and interesting analysis than what is presented here).

In this specific context, the authors state that their reliance on surveys was a limitation and that future research would benefit from assessing actual behaviors. The authors then state that they *currently have access to* actual behavioral data, but are withholding it from this study so they can publish it separately.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors have addressed the concerns of the reviewer.

Author Response

We thank you for your time.

Reviewer 3 Report

Thank you for producing an interesting paper to read. I do have a few comments, which are listed below in order of the pages:

Abstract - the abstract is an abbreviation nightmare. Three abbreviations are used without a clear explanation. Please spell out PID-5, DAS, and APIM.

Page 5, section 2.1 - all of the future research plans belong in the Discussion section and not the participants section.

Page 9, line 359 - the sentence starting with "The measure of..." does not make any sense.

Tables - why is the term "back to text" beside each Table number?

Page 13 - I'm surprised that the author(s) state that extraversion is a positive trait - it is only positive in certain contexts.

Page 15 - this page requires a complete re-write. The whole page reads like a run-on sentence, with multiple sentences starting with "That is..." and "Therefore...".

Page 15, line 566 - the statement "participants were recruited in the community" is a bit bizarre. What is the alternative? Another planet?

Page 16, top paragraph - I am concerned that the author(s) are trying to explain sex differences in personality with "gender stereotypes" only. This explanation is quite limiting, as the self-report scores in Table 1 do suggest that there are sex differences in personality (such as women scoring higher on conscientiousness). The differences could reflect actual sex differences and not simply stereotypes.

Page 16, line 622 - the extraversion scale measured by the NEO does not cover "histrionic expression". The statement needs changing.

Discussion section - why is assortative mating not addressed?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

I would like to thank the authors for providing thoughtful responses to my earlier comments. Based on the revision, it seems the authors and I have very different research philosophies pertaining to practices such as piecemeal publication and what constitutes a contribution worthy of publication. While I continue to believe this paper to be well-written and the analysis to be somewhat reasonably conducted (though I continue to believe it could be improved and expanded), the authors routinely declined to incorporate some of my most important suggestions in their revision.

 

In the response document, the authors claim that “The behavioral interview data has not yet been analyzed and is therefore not reported in this particular study. It is also not of interest to this study …”

However, the authors also claim that “future research that includes self-reports of behavior as well as the assessment of actual behaviors are needed to corroborate the results” (p. 18).

In other words, the authors claim that the inclusion of behavioral data would help corroborate results. They also claim that they have access to behavioral data. However, they willfully neglect to include these data because they have "not yet been analyzed." If the authors believe that behavioral interview data would enhance the contribution of this manuscript, why would they not simply analyze those data and include that analysis in the present manuscript?

Regarding the HEXACO, while I appreciate the removal of the claims about the H factor, the authors still claim that “the replicability of these six factors is not shown across all languages, including English.” This is not consistent with Ashton and Lee’s work (see Lee & Ashton, 2008).

I continue to believe that the authors need to demonstrate a better rationale for limiting their lexical personality constructs to five factors.

I understand that the focus of this manuscript primarily addresses relationship satisfaction. The point I made regarding the link between neuroticism and more general satisfaction (e.g., with life, work, etc.) was meant to assist the authors in clarifying their unique contribution. Again, I (as a reader of this manuscript) am left wondering whether (a) there is something unique about the relationship between neuroticism and relationship satisfaction or (b) this relationship is simply a byproduct of the fact that neuroticism is related to general satisfaction with life and aspects of life.

Similarly, the authors seem to have misinterpreted my comment about incorporating perceived similarity (from Decuyper et al., 2018, Study 2). Decuyper and colleagues extend their first study by incorporating similarity in a follow-up study (i.e., Study 2). I am asking why the present authors chose not to do this. Again, the authors collected data that could address this issue, but chose not to include it. This practice aligns with my perception that the present manuscript represents a “half study” that purposefully withholds data for subsequent publication. If the authors want to omit similarity data, then they should better communicate to readers why similarity is irrelevant. Otherwise, readers may reasonably wonder what makes the present study different from the contribution made by Study 1 of Decuyper et al. (or wonder why the present authors chose to align their contribution with half of the Decuyper et al. paper despite having the capability to align it with the full paper).

I do not appreciate the allegation that I read the manuscript with a negative bias. I understand the tendency for authors to become defensive because reviews often come across as criticism. Communicating in writing is difficult, but when I review a manuscript I attempt to make comments that I believe will improve the quality and reception of the paper.

I apologize for missing the hypotheses. However, I urge the authors to consider why this may have happened. I am accustomed to hypotheses being formally stated on separate lines with paragraphs of justification preceding each hypothesis. The present manuscript states them back-to-back after a more general literature review. As a result, when I read terms such as “as hypothesized,” I went back to earlier portions of the manuscript to see if I missed a formally stated hypothesis. When I did not find anything, I assumed the authors had forgotten to state their hypotheses. Perhaps this represents a simple difference between what I am accustomed to seeing and what the authors are accustomed to presenting.

In the spirit of constructive criticism, please allow me to offer some suggestions with the goal of improving the presentation of these hypotheses.

Perhaps heading 1.7 could be more descriptive. Instead of “this study,” the authors could consider titling this section “hypotheses” or “hypotheses and contributions of this study.”

Perhaps the hypotheses could be stated closer to the relevant portion of the literature review. For example, when discussing the proposed negative relationships between maladaptive personality and dyadic adjustment (in sections 1.2 or 1.6), the authors could state their relevant hypothesis just after this discussion so that the relevant literature review doubles as justification.

Perhaps the authors could more clearly delineate their formal hypotheses. For example, on a separate line, the authors could state “Hypothesis 1: positive factors of the FFM through actor-partner assessments of personality traits (will) have positive effects on both partners’ self-reported dyadic adjustment.”

On a personal and professional note, when I receive reviews of my own research, I am often tempted to attribute ignorance or malfeasance to the reviewers. I have found, however, that if a reviewer misses or misinterprets something in a manuscript then readers are also likely to miss or misinterpret it. Seemingly negative reviewer comments represent opportunities to clarify or improve the manuscript. Accusations of bias or professional misconduct are unnecessary, and often misplaced.

My question about the choice of a 6-month time period was unrelated to how common this time period is in the literature. Instead, it was related to whether or not the authors could conceptually justify that this time period was adequate. The authors state that most couples research is conducted on married couples. Ceteris paribus, married couples have more experience with one another than young dating couples, and this difference may necessitate rethinking the appropriate time period. I recommend Mitchell & James (2001) as an excellent conceptual discussion about this point. The rationale for using dating couples (and eliminating married couples) that the authors provide in their response is excellent. This should be incorporated into the manuscript.

On a side note, I agree that science is (often) based on step-by-step discovery. However, I also agree that each step needs to make a unique contribution. I urge the authors to specifically delineate the unique contribution of the present analysis (including their decision to exclude married couples and what it stands to add to the existing literature).

The response comments about the age cutoff were similarly excellent. While I certainly understand limitations based on time and budget (and see no need for the authors to incorporate that into the manuscript itself), I still think it is important for the authors to effectively communicate that their single-cohort design makes a contribution. The authors’ honest disclosure of their longitudinal plans is laudable, but I continue to believe that the sole reliance on first-wave cross-sectional data makes this design and analysis seem incomplete.

I completely disagree with the authors’ decision to omit the correlation matrix, especially given the rationale provided in the response document (i.e., that this will be “analyzed in another paper”). A correlation matrix could easily be integrated as a table (or even a supplement if there are space limitations) and would enhance the manuscript for readers. The authors are willfully withholding relevant available data from readers.

In order to justify the use of American norms for a Canadian sample (i.e., to justify the use of published American norms as an appropriate reference group), the authors would need to provide data showing that American norms and Canadian norms are similar. As I stated in my first review, this decision was especially questionable for the DAS, which has available Canadian norms.

With respect to Spanier (1976), the authors are correct that the DAS was “designed” to be used with both married and unmarried cohabitating couples. However, I urge the authors to be cautious in their reading of Spanier (1976, p. 19). Specifically, while the scale was initially administered to a “small sample of never-married cohabitating couples … these data [were] not part of the scale construction analysis” and “only married couples were used to assess reliability.”

I’ll reiterate my hesitation in supporting the use of a measure primarily developed on married couples in a study that excluded married couples.

The fact that some previous research neglects to correct for familywise alpha does not make this a correct decision. My Bonferroni suggestion was intended to improve the statistical fidelity of the present study. I would argue that any study examining multiple relationships should include some familywise alpha control in an effort to reduce the probability of making a type I error.

I invite the authors to go read about the interpretability of results from just-identified models (Bollen, 1989; Raykov et al., 2013). I also invite them to recall that nested models in SEM can be compared using a variety of different model comparison statistics (e.g., change in chi-square or -2lnL). I never suggested using multiple regression; I only suggested comparing nested models.

Additionally, the authors’ repeated claim that “we cannot run all possible and interesting analyses in one article” is yet another instance of the authors’ willingness to withhold information from readers. No one expects the authors to run “all possible” analyses. I suggested some relatively straightforward analyses (e.g., model comparisons, correlations, alphas for other administered measures). The authors failed to include *any* of these analyses, each time stating that they were unnecessary, claiming they would be part of a future publication effort, or implying that they would be irrelevant.

The additional discussion about potential differences between heterosexual and homosexual couples is excellent.

The explanation provided in the authors’ final point was, in my opinion, their most reasonable argument for withholding information from readers. However, I continue to believe the authors should refrain from intentionally withholding available and interesting information from readers in an attempt to purposefully limit the scope of this article or to publish as many papers as possible. That said, I appreciate that the authors considered how the way they communicated this plan may have led to misinterpretation by me (or by readers).

To be clear, I still believe that the authors should include the behavioral interview data in an effort to maximize the contribution of this paper. I do not begrudge the authors their decision to build a database or to pursue multiple papers, however, my opinion is that the contribution of the present manuscript would be greatly enhanced by my suggested additions.

References:

 

Bollen, K. A. (1989). Structural equations with latent variables. New York, NY: Wiley.

 

Lee, K., & Ashton, M. C. (2008). The HEXACO personality factors in the indigenous personality lexicons of English and 11 other languages. Journal of Personality, 76, 1001-1054.

 

Mitchell, T. R., & James, L. R. (2001). Building better theory: Time and the specification of when things happen. Academy of Management Review, 26, 530-547.

 

Raykov, T., Marcoulides, G. A., & Patelis, T. (2013). Saturated versus just identified models: A note on their distinction. Educational and Psychological Measurement, 73, 162-168.
