1. Author’s Reminiscence on Peer Review 50 Years Ago
A half century has elapsed between my first scientific publications and the present ones. My early research and publication of papers in the field of radio astronomy occurred over 50 years ago, in the period 1959 through 1967. At that time I was not aware of any rigorous review process, and I cannot recall receiving any significant reviewer comments, objections, changes, or corrections for the nine papers that I authored or co-authored during that period [1]. I do remember a sharp rebuke directed at me by S. Chandrasekhar, editor of the Astrophysical Journal, over a multi-author paper on radio sources and their optical identification that led to the discovery of new quasars [2]. However, this was for some infraction of the submission process and was not connected to any peer review of the paper’s contents.
The submission process for journals such as Monthly Notices of the Royal Astronomical Society, Nature, the Astronomical Journal, and the Astrophysical Journal was simply to send in the paper and wait to be notified that it had been accepted and then published. Or so it seemed to a young researcher in his twenties. Although the idea of peer review dates back to the mid-18th century, it was not until the mid-20th century that a journal-oriented formal process using outside reviewers began to take shape [3]. For example, in 1967, Nature, under the editorship of John Maddox, established a formal peer review process; the increasing volume of papers had made this development necessary. Prior to that, Jack Brimble, the previous editor of Nature, would often hand out papers at the Athenæum Club in London for informal review by other scientific members [4,5].
Contrary to my own recollections, peer review processes with outside reviewers did exist in the 1950s, and many of the main journals had formal systems in place. Eugene N. Parker wrote a ground-breaking theoretical paper in 1959 that predicted an outflow of plasma from the sun into interplanetary space—the solar wind [6]. The paper was strongly opposed and rejected by both of its reviewers, but it was saved by the editor, S. Chandrasekhar, who published it in the Astrophysical Journal anyway. The presence of the solar wind in interplanetary space was later confirmed by radio astronomers A. Hewish and J.D. Wyndham, as well as by satellite observations [7,8]. Hewish’s continuation of this line of research led a few years later to the discovery of pulsars, for which he shared the 1974 Nobel Prize in Physics with Martin Ryle.
Since the 1960s there has been a vast increase in the literature on peer review, together with large international conferences on the issue. For example, Lawrence Souder, in his 2011 paper “The ethics of scholarly peer review: a review of the literature,” summarizes a subset of the available literature, namely that on the ethics of peer review [9]. At the same time, the peer review process has become more formalized and technologically driven, and more the subject of study and criticism.
In 1967, I left pure research and, after some years in the aerospace industry and in college teaching, spent most of my working career in the computer industry. Six years after retirement, I once again found myself in the field of scientific research, this time examining the events of 9/11. I have chosen this field for a peer review case study not only because it has been my field of research for the past 10 years, but also because it illustrates many of the current problems in the peer review process and their possible solutions.
2. A Peer Review Case Study—The Events of 9/11
In 2006 I was handed a DVD that questioned the official account of 9/11. I examined it reluctantly, with a view to debunking the claim that the official story was false. To my great surprise and consternation, I found that the physical evidence for the controlled demolition of the World Trade Center (WTC) buildings in New York City—the Twin Towers (WTC1/2) and Building 7 (WTC7)—was compelling [10]. Controlled demolition may be defined as the “intentional destruction of a building by placing explosives in strategic areas” [11].
In 2009, I became a founding member of Scientists for 9/11 Truth, and I have served as Coordinator of that organization for the past eight years [12]. In 2014, I co-authored with Wayne H. Coste and Michael R. Smith a paper on the ethics, or lack thereof, of the official reports on the WTC building destructions [13]. These official reports were produced by scientists and engineers at NIST (National Institute of Standards and Technology) [14]. For these reports there was no independent peer review process whatsoever, as the passages quoted in the following paragraph show [15,16]:
Dr. James G. Quintiere, a fire protection expert, stated: “I know of no peer review of the NIST work on WTC. They had a[n] Advisory Committee, and even some of them did not agree with the NIST work and conclusions.” In a paper on the WTC investigation, Quintiere ends with this statement: “I would recommend that all records of the investigation be archived, that the NIST study be subject to a peer review, and that consideration be given to reopening this investigation to assure no lost fire safety issues.”
The absence of peer review for the NIST reports is alarming, especially in view of the consequences of 9/11. These consequences include the preemptive wars that have led to a devastating loss of life and property and to a substantial refugee problem. Additional consequences include the imposition of mass surveillance and the erosion of civil liberties, as well as a need to resolve fire safety and building code issues. However, contrary to NIST’s claims for the Twin Towers and WTC7, no steel-framed structure before or since 9/11 has ever been so completely devastated by damage and/or fire alone.
While it is true that reports are often not peer-reviewed, and that the military actions and some of the other consequences mentioned above occurred before the NIST reports were written and available, the widespread public questioning of the official story, which began within two days of 9/11, together with the many omissions and distortions found in the 9/11 Commission Report of 2004, demanded an investigation of unimpeachable integrity, including an independent peer review of the NIST work [17,18]. The 9/11 Commission Report never mentioned the destruction of WTC7, a 47-story building, and NIST never examined the actual fall and aftermath of the Twin Towers’ destruction. Also at stake were the lives of many in the ongoing wars, and the treatment and care of thousands who had breathed the lethal dust or powder in New York City [13].
Ironically, the very seriousness of NIST’s ethical failure in omitting meaningful peer review has resulted in a thorough, though unofficial, independent peer review of the NIST reports. At the present time, over 2800 highly qualified scientists, engineers, and architects, as well as many other scholars, have examined the official account, including the NIST reports, and have found it to be in violation of the scientific method and the norms of genuine scientific research [12,19]. For example, as stated previously, the NIST investigation never examined the actual fall of the Twin Towers, nor did it examine the building remains for explosives, as required by NFPA (National Fire Protection Association) guidelines in the case of a catastrophic collapse [20]. Independent analysis of the physical evidence, including the WTC debris powder or dust, points to the controlled demolition of the three buildings cited above [21]. For example, the very high percentage of iron-rich micro-spheres found in the powder by the R.J. Lee Group, the USGS (U.S. Geological Survey), and others indicated the use of thermite, a substance that can have both incendiary and explosive properties [22,13]. These independent findings show the great value of the peer review process and point to the need for a more advanced and open form of peer review.
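For context, the reasoning that connects iron-rich micro-spheres to thermite can be made explicit with the standard thermite reaction (a well-known reaction supplied here for the reader; it is not spelled out in the cited reports). The reaction produces molten iron, which can cool into exactly such spheres:

```latex
\mathrm{Fe_2O_3} + 2\,\mathrm{Al} \;\longrightarrow\; \mathrm{Al_2O_3} + 2\,\mathrm{Fe}\ (\text{molten}) + \text{heat}
```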
Such an open process occurred during one of the public input sessions held by NIST in 2008, with startling results. NIST had invited public comments on its preliminary findings on why WTC7 collapsed. In responding to a comment by David Chandler, Shyam Sunder, the lead NIST investigator, claimed that in NIST’s structural model the visible portion of WTC7 fell a distance equivalent to 17 floors in 5.4 s, which is 1.5 s, or about 40%, longer than the 3.9 s that free fall would require [23]. NIST had stated previously that this was “consistent with physical principles.” In his comment, David Chandler, a high school physics teacher, pointed out that a variety of methods applied to the motion of the top NE corner of the building showed that there was in fact free fall. Chandler’s measurements indicated free fall for the first 2.5 s, equivalent to a distance of 8 floors, or about 30 m [24]. However, NIST did not acknowledge this fact. As Shyam Sunder had previously stated: “[A] free fall time would be an object that has no structural components below it. […] [T]here was structural resistance […] in this particular case.” Later, NIST simply incorporated a value of 2.25 s of free fall, based on its own measurement, into its final report without comment, and quietly removed the statement about its analysis being “consistent with physical principles.” By failing to address the implications of free fall, NIST’s final report, in this context, has all the earmarks of attempted scientific fraud [25].
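As a rough consistency check on the figures above (a back-of-the-envelope calculation assuming uniform gravitational acceleration g ≈ 9.81 m/s², no air resistance, and a floor height of roughly 4 m, the last being an inference rather than a figure given in the text), the elementary free-fall relation reproduces the quoted times, distances, and percentage:

```latex
d = \tfrac{1}{2} g t^{2}:\qquad
t = 3.9\ \mathrm{s} \;\Rightarrow\; d \approx 74.6\ \mathrm{m}\ (\approx 17\ \text{floors});\qquad
t = 2.5\ \mathrm{s} \;\Rightarrow\; d \approx 30.7\ \mathrm{m}\ (\approx 8\ \text{floors});\qquad
\frac{5.4 - 3.9}{3.9} \approx 0.38 \approx 40\%.
```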
Two days after 9/11, on 13 September 2001, Professor Zdeněk P. Bažant of Northwestern University submitted for peer review a paper with one of his students, Yong Zhou, as co-author [26]. The paper was a theoretical analysis of the WTC Towers’ collapses. It argued that, “if prolonged heating caused the majority of columns of a single floor to lose their load carrying capacity, the whole tower was doomed.” The paper was submitted to the ASCE (American Society of Civil Engineers) Journal of Engineering Mechanics and, after peer review and some modifications, was published in 2002. Later, NIST cited this paper as support for its own conclusions [27]. However, Bažant and Zhou’s paper never attempted to explain many of the physical observations, such as the lateral, high-velocity ejections of materials for hundreds of meters, and the fact that there was no pile driver to crush each tower, since all materials were blown outside the buildings’ footprints [28]. The acceptance of Bažant and Zhou’s paper by ASCE and its use by NIST are therefore highly questionable. In this important instance, the peer review process allowed publication of a theoretical paper purporting to explain an event with serious and ongoing consequences for society, while ignoring the major physical observations that disproved the paper’s theory. Moreover, Bažant’s critics have had difficulty getting ASCE to publish their significant criticisms; see, for example, the experience of James Gourley [29].
The highly charged political environment surrounding 9/11 has greatly impeded the acceptance and publication of research papers that question or contradict the official account of that event. A glaring example of bias on the part of the ASCE editors is provided by the experience of Tony Szamboti and Richard Johns, who submitted a critique of a subsequent paper by Jia-Liang Le and Zdeněk Bažant entitled “Why the Observed Motion History of the World Trade Center Towers is Smooth” [30]. The latter paper appears to be a response to an earlier paper by Graeme MacQueen and Tony Szamboti, critical of Bažant, which predicted a “jolt” if the top 12 stories of WTC1 had indeed fallen on the lower, undamaged portion of the building [31]. The Szamboti and Johns paper was rejected by the ASCE editors as being “out of scope.” As Szamboti and Johns have since noted, “It is not possible for a Discussion paper, one that simply corrects errors in a paper that is already published, to be out of scope for a journal” [32]. This is seen by independent researchers as clear proof that the editors were unwilling to allow Le and Bažant’s paper to be corrected.
Despite the difficulties encountered by independent scholars in their study of the events of 9/11, a number of important papers have survived the submission and peer review process in mainstream journals, though sometimes with attendant controversy. A particularly important peer-reviewed paper by Harrit, Farrer, et al. analyzed red-gray chips found in the WTC debris powder and showed them to contain nano-thermite, an advanced form of thermite that has both incendiary and explosive properties and is manufactured in military facilities [33]. This finding by independent scientists, made after the NIST investigators had neglected to examine the WTC dust, has thus far been neither acknowledged nor contested by NIST. The paper was published in the Bentham Open Chemical Physics Journal, whose editor-in-chief, Marie-Paule Pileni, subsequently resigned, claiming she had not been informed of the paper’s publication [34]. This incident further illustrates the high degree of tension and politicization surrounding this very important field of research. It is no wonder, under these conditions, that the peer review process appears to be broken, as illustrated by the examples cited in this paper and elsewhere [35].
In a very recent paper published in Europhysics News (EPN), Steven Jones, Robert Korol, Anthony Szamboti, and Ted Walter conclude that the physical evidence points to controlled demolition as the real cause of the three total building destructions in New York City [21]. The editors of EPN stated that they “considered that the correct scientific way to settle this debate was to publish the manuscript and possibly trigger an open discussion leading to an undisputable truth based on solid arguments.” After receiving comments both for and against the paper’s arguments, including a letter from an NIST spokesman and a letter from a former NIST employee urging NIST to “blow the whistle on itself now” before awareness of the “disconnect between the NIST WTC reports and logical reasoning” grows exponentially, the editors declared in a letter that they themselves “do not endorse or support these (the paper’s) views.” The editors’ premature conclusion, reached while the debate was still in progress, clearly illustrates the pressures on editors and institutions that can lead to the suppression of research [36]. However, in this case the editors have allowed the paper to stand, and it now has over half a million views or downloads, a fact that EPS (European Physical Society) President Christophe Rossel declared to be a “good thing.” The editors of EPN are to be commended for allowing the light to shine on this issue.
The universities, once bastions of independent thinking and research, appear to be increasingly controlled by corporate interests, on whom they depend for funding in a race for survival or competitive growth. This development not only influences the choice of research topics but also affects the peer review process. It is naïve to suppose that papers unfriendly to powerful corporate interests will always be treated fairly. For example, the venerable University of Cambridge, U.K., now includes the BP (British Petroleum) Institute [37]. According to the Institute’s website, “The University of Cambridge BP Institute was established in 2000 by a generous endowment from BP, which has funded faculty positions, support staff, and the Institute Building, in perpetuity. The Institute research focuses on fundamental problems … spanning six University Departments.” Prominent individuals who have pointed to the pursuit of oil as the primary reason for the Iraq War include former Federal Reserve Chairman Alan Greenspan, former Senator and Secretary of Defense Chuck Hagel, and General John Abizaid, former head of U.S. Central Command and Military Operations in Iraq [38]. Since it is openly admitted that the Middle East wars spawned by 9/11 were driven by oil interests, can a university funded partly by oil interests now deal with the events of 9/11 both scientifically and with intellectual integrity?
Like most other universities, Cambridge has not yet dealt scientifically with the events of 9/11 at all, with one exception: a theoretical paper by K.A. Seffen that, like Bažant’s, ignores the physical evidence [39,40]. Instead of scientists analyzing 9/11 using the available observations and physical evidence, the university sponsors a Leverhulme-funded project, Conspiracy and Democracy, that examines 9/11 as one of many “conspiracy theories” [41]. “Conspiracy theory” is a pejorative term coined and promoted by the CIA (Central Intelligence Agency) since the 1960s to denigrate the views of anyone who questions the official accounts of Deep State events such as the assassination of John F. Kennedy [42].
One notable exception to the universities’ failure to deal with 9/11 is the work of Professor Leroy Hulsey at the University of Alaska [43]. His work models the destruction of Building 7 (WTC7). According to Hulsey, his research is a “completely open and transparent investigation” that invites input from other technical experts and the public. Preliminary results of this research cast serious doubt on the NIST reports on WTC7 [44]. When it is completed, Hulsey plans to submit his work for peer review by engineering journals. Hulsey’s open approach stands in striking contrast to that of NIST, which clearly and unaccountably rejected the peer review process by stating that it would not release details of its WTC7 collapse-initiation model because doing so “might jeopardize public safety” [45].
The great silence on 9/11 from the universities indicates that they are presently unable to examine this subject openly. More broadly, the presence of large amounts of corporate money in academic and other institutions is affecting research, the peer review process, and their outcomes in many disciplines, such as medical and drug research [46].
3. Current Defects in Peer Review
The peer review process, a mainstay of scholarly research for decades, is now seen by many researchers as deficient, even unworkable. In his paper “Peer review: a flawed process at the heart of science and journals,” Richard Smith, former editor of The BMJ (previously the British Medical Journal), lists a host of deficiencies in the process, stating that “… [peer review] is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused.” In conclusion, he writes: “… peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review” [47,48].
Based on this author’s experiences over the past decade, the formal peer review process, like many human processes, is not so much itself at fault; rather, it is the widespread lowering of ethical standards that plays a major role in the process’s perceived failures. Informal review of papers by colleagues is not only helpful but often essential in ensuring the quality of a paper. Informal review works well, but the subsequent formal process of submission, peer review, and publication by a respected journal can be difficult and unrewarding. Peer review serves as a quality check on the research, catching errors in the author’s methods, logic, analysis, and conclusions, with the goal of determining whether the paper is worthy of publication. In the ideal process, the editor and reviewers behave altruistically for the benefit of the author, the scientific community, and the general public. On the author’s side, the assumption is that the research is genuine and original. However, if any of the participants in the process fail to honor the highest ethical standards, the process itself will most likely fail in some way. Regrettably, there are many pressures in today’s world that ensure such failures will happen.
These pressures include financial considerations, such as substantial submission fees and article processing charges (APCs); so-called “experts” who actually lack both the knowledge and the skills to referee a paper; reviewers who delay papers unduly; and editors who value the opinions of reviewers known to them but unknown to the author, even when those opinions are given in “bad faith.” For one paper that refuted the theory of another researcher, an editor disallowed the use of personal pronouns to refer to the researcher whose work was being rebutted. When challenged with the fact that other rebuttal papers published by the journal freely used personal pronouns, the editor gave no response. Because of this and other strictures placed on the authors, the paper had to be published elsewhere. There are many ways of forcing the revision or rejection of a paper.
It has long been known that good papers are sometimes rejected while inferior papers are published. In times past, this was more an indication of a lack of skill, diligence, and competence on the part of editors and reviewers. More seriously, in this period of competition for grants and funding, of competing theories with monetary and political consequences, and of external pressures from institutions, corporations, and governments with vested interests in the results of research, ethical standards themselves have often fallen by the wayside. To improve this situation, the authors J.A. Garcia, R. Rodriguez-Sanchez, and I. Fdez-Valdivia stress the importance of a journal editor adopting a “peer review strategy,” which they define as “the smallest set of editorial decisions to optimally guide the other decisions with the help of handling and quality-assurance editors” [49]. While such a strategy holds the promise of a greatly improved process, ultimately no amount of tinkering with an existing peer review process will overcome a failure in ethics.
Unfortunately, what is considered ethical can vary widely among groups and individuals. Organizations may well have codes of ethics and yet, under pressure, fail to live up to their own standards. Many, if not most, of society’s disputes involve differing and conflicting views of what is ethical, what is honest, and what has integrity. As 9/11 research has shown, in perhaps magnified form, the formal peer review process can be used as a weapon to bury opposing views and to stifle independent research whose natural conclusions oppose established or official narratives and vested interests. The tendency to look down upon or disparage a paper that has not gone through or survived the formal peer review process is widespread but often unwarranted. At the same time, those wielding editorial or official authority can easily abuse the system in multiple ways, such as providing inadequate peer review or none at all.
To help ameliorate this situation, it is good practice to insist that existing codes of ethics for peer review be followed by all participants in the review process; see, for example, COPE, the Committee on Publication Ethics [50,51]. In addition, as suggested below, a more open process that includes review boards and public scrutiny is called for. In Lawrence Souder’s review of the literature on ethics in peer review, these words and concepts are among those pointing to the high goals of the peer review process: moral, fair, honest, (having) integrity and respect, (without) conflict of interest, unbiased, competent, accurate, courteous, and transparent [9]. In the section on formal training in peer reviewing, Souder mentions the work of N.R. Gough: “… she recommends (in the absence of formal training) one ethical principle to reviewers in the spirit of the golden rule: reviewers should consider how they would feel when reading reviews they have written for others” [52]. Souder concludes that nothing much has changed in the “peer-review landscape” since J.M. Campanario’s 1998 review of the literature, except that “[o]ne significant change … seems to be the emerging online technologies that have created new possibilities, as well as new difficulties, for peer review” [53,54].
4. Some Ethical Guidelines for Peer Review
Here are some proposed guidelines for ethical conduct, based on the author’s recent experience in submitting papers to formal peer review as well as on ideas in the current literature. While many of these recommendations are well understood by those familiar with the peer review literature, participants in the process fail to adhere to them often enough to make them worth repeating.
4.1. Editorial Review Boards
Even the best editors can occasionally make bad calls that exhibit bias, lack of knowledge, or poor judgment. Respectable journals have editorial review boards to which authors may appeal. If a journal does not have such a board, this may indicate a lack of ethical standards, and an author might do better to seek publication elsewhere.
4.2. Peer Review Strategy
As proposed by Garcia et al., the adoption of a peer review strategy offers the possibility of an improved process. A chief editor working with associate, quality-assurance, and handling editors under an agreed-upon strategy could well provide checks and balances leading to a higher ethical standard.
4.3. Selection of Editors and Reviewers
Editors will ideally select reviewers who are competent and knowledgeable in the subject matter of a paper, but who are neither competitors nor friends of the author. The selection of a competitor, especially where the journal has a policy that all reviewers must approve a paper, is simply unfair. Individuals who have made public statements indicating disagreement with an author’s conclusions are best excluded as reviewers, and editors who have expressed such opinions ought to recuse themselves. If an editor has a conflict of interest or lacks knowledge of a research field, the journal can consider inviting a guest editor to oversee the review process.
Editors who place special strictures on an author (see the example given earlier) that are not applied to other authors are exhibiting bias, and are best disqualified from handling the paper in question.
4.4. Rejection
Editors ought to retain the right to publish a paper even if all reviewers advise against it, and should especially avoid a policy that requires all reviewers to favor publication before it can occur. Ground-breaking research is almost always opposed when first presented. Reviewers and editors who reject a paper must clearly show where the author’s data, methods, or analysis are erroneous. They should also confine their review to the author’s work itself and avoid, where possible, introducing other data and arguments of their own choosing, thereby entering into a debate with the author. Such a debate will occur naturally as competing views and results are allowed the light of day. Failure to follow these guidelines leads to censorship. Coercive citation, whereby a journal asks an author to add citations that benefit the journal’s impact factor (IF), is widely seen as unethical when failure to comply can lead to rejection.
4.5. Timely Review
Timely review plays an important role in a fair process. Formal peer review can take months, but if it takes over a year, this can indicate mishandling of the process.
Many other pertinent suggestions for ethical conduct in peer review could be made, but the above constitute some of the most important ones based on the author’s recent experience.
5. Improving Peer Review
Garcia et al. have suggested two further ways, in addition to the adoption of a peer review strategy, of achieving better results in peer review. One method is to select reviewers who are more biased against, or even hostile to, a paper, on the assumption that such reviewers are more likely to search for flaws in it. The negative effects of the reviewers’ biases would then be offset by an associate editor who evaluates the alignment quality of the review reports [55]. In this writer’s opinion, this is a dubious procedure at best, one that is both unpredictable and likely to backfire. Authors expect that reviewers will be competent, diligent, and unbiased, and most authors would likely object strongly to this method were it made known to them. The second way suggested by Garcia et al. offers a far more positive approach and a greater likelihood of success. In this method, the reviewer is rewarded through a reviewer factor that measures the reviewer’s importance in his or her field. Garcia et al. go so far as to suggest the founding of an institute for scholarly reviewing, with a database that lists reviewers and their quality status [56].
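To make the idea concrete, here is a minimal sketch of what such a database entry and reviewer factor might look like. This is purely illustrative: Garcia et al. do not specify a formula, and the field names, weights, and scoring rule below are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class ReviewerRecord:
    """Hypothetical entry in an institute-of-scholarly-reviewing database."""
    name: str
    field: str
    reviews_completed: int   # total reviews delivered
    on_time_fraction: float  # share of reviews returned by the deadline (0-1)
    editor_rating: float     # mean editor rating of review quality (0-5)

def reviewer_factor(r: ReviewerRecord) -> float:
    """Toy weighted score in [0, 1]; the weights are assumptions,
    not a formula proposed by Garcia et al."""
    experience = min(r.reviews_completed / 50.0, 1.0)  # saturates at 50 reviews
    return round(0.4 * experience
                 + 0.2 * r.on_time_fraction
                 + 0.4 * (r.editor_rating / 5.0), 3)

# Example: 30 completed reviews, 90% on time, rated 4.2/5 by editors
print(reviewer_factor(ReviewerRecord("A. Reviewer", "physics", 30, 0.9, 4.2)))
# -> 0.756
```

Whatever the exact formula, the design point is that the score is computed from recorded reviewing behavior, so that conscientious reviewing is visibly rewarded.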
The peer review process is essentially a quality check for research before it is presented to the public at large. Just as food items may be checked for quality before being sold, to prevent negative health consequences, so information, in turn, is checked to help ensure that mistaken conclusions or deliberate disinformation do not take hold in the public mind. Quality control involves inspection of the processes involved to verify that they are legitimate and trustworthy. If the quality control process is closed to public inspection, as the current formal peer review process is, problems can arise and proliferate. While there are good arguments for keeping the deliberations of editors and reviewers private, such as the possible accumulation of “enemies” and the increased likelihood of reviewers declining to review, secrecy that results in the suppression of good research and the promulgation of poor research is not in the public interest [57,58]. The quality of a research paper is often discerned only after publication and widespread examination. What is most needed is a more open process that serves the public good. As Richard Smith writes: “The final step was, in my mind, to open up the whole process and conduct it in real time on the web in front of the eyes of anybody interested. Peer review would then be transformed from a black box into an open scientific discourse” [47].
Several years ago, this author instituted just such an open peer review process in response to a particular problem in 9/11 research: excessive and often acrimonious contention over what caused the damage and deaths at the Pentagon [59]. Those promulgating competing Pentagon hypotheses were very often researchers who fully agreed with the scientific findings of controlled demolition of buildings at the WTC. The resulting frictions among Pentagon researchers meant that some papers were unfairly rejected for publication. In a relatively small group of researchers whose work was already viewed with disapproval by many members of the public, such rejections were particularly damaging to the credibility of 9/11 research.
After a number of rejections of Pentagon research papers submitted by those supporting large-plane impact, the author created a website on which to place rejected papers—as well as those that had been accepted and published in journals—and subject them to “open and ongoing discussion through comments by other researchers and readers.” This website, Scientific Method 9/11, is managed by the author as Moderator, together with a panel of moderators who evaluate comments on papers and publish the author’s response on a discussion page unique to each listed paper [60]. Unless there are very good reasons for anonymity, the identities of those commenting are made public.
The value of this website lies in the fact that the authors of both rejected and published papers are encouraging and challenging competing researchers and the public to find flaws in their arguments and conclusions. Because comments are moderated before publication, uncivil or derogatory remarks can be screened out and kept separate from the debate. Guest moderators may be invited to assess papers and comments. If there are enough corrective comments to warrant it, an author can create a new version of a paper and post it alongside the previous version, without having to go through another peer review cycle or issue a retraction. Rather than being seen as a finished product, set in concrete with a stamp of approval conferred through a secret process, each paper is viewed as a work in progress, subject to open change and revision, as sketched below.
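A minimal data-model sketch of this workflow follows. The class and field names are hypothetical (the text does not describe the site’s actual implementation); the sketch only illustrates the two ideas above: moderated comments and paper versions that accumulate rather than replace one another.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Comment:
    author: str             # public identity unless anonymity is justified
    text: str
    approved: bool = False  # set by a moderator before the comment is shown

@dataclass
class PaperVersion:
    number: int
    url: str

@dataclass
class ListedPaper:
    """A paper on the site: every version stays visible; no retraction step."""
    title: str
    versions: List[PaperVersion] = field(default_factory=list)
    discussion: List[Comment] = field(default_factory=list)

    def post_revision(self, url: str) -> None:
        # A corrected version is added alongside, not in place of, earlier ones.
        self.versions.append(PaperVersion(len(self.versions) + 1, url))

    def moderate(self, comment: Comment, civil: bool) -> None:
        # Moderators screen out uncivil or derogatory remarks before posting.
        comment.approved = civil
        if civil:
            self.discussion.append(comment)
```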
This open review process has played a positive role in resolving the Pentagon debate. The fact that few of the proponents of theories competing with large-plane impact at the Pentagon have chosen to take up the website’s challenge is in itself significant. Exceptions are Jerry Russell, an early skeptic of large-plane impact, and Robin Hordon, a former Federal Aviation Administration (FAA) air traffic controller, who now agrees that a large plane with markings and implied dimensions matching an American Airlines Boeing 757 did impact the Pentagon on 9/11 [61,62]. A query by researcher David Cole on the eyewitness testimonies of two Pentagon police officers and a heliport air traffic controller led to an interesting discussion and a result bearing on the reliability of eyewitness recollections many years after an event [63]. These and other similar instances indicate that an issue that has bedeviled 9/11 research for many years is now being resolved.