Article
Peer-Review Record

Synthetic Displays and Their Potential for Driver Assistance Systems

Information 2024, 15(4), 177; https://doi.org/10.3390/info15040177
by Elisabeth Maria Wögerbauer *, Christoph Bernhard and Heiko Hecht
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 22 February 2024 / Revised: 14 March 2024 / Accepted: 18 March 2024 / Published: 23 March 2024
(This article belongs to the Special Issue Feature Papers in Information in 2023)

Round 1

Reviewer 1 Report (Previous Reviewer 1)

Comments and Suggestions for Authors

The authors have submitted a revised text of the article, in which they have taken into account the comments from the first review.

There are some errors in the text.

Line 187: "Figure 2Error! Reference source not found. depicts a complex synthetic display,"

Line 478: "rmANOVA is presented in Error! Reference source not found.."

Line 517: "not differ significantly (see Error! Reference source not found.; F(2, 52) = 2.78, p = .071, η²p"

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors

The authors discussed view-altering synthetic displays in general and camera-monitor systems (CMS) designed to replace rear-view mirrors as a particular instance of a novel synthetic display in the automotive domain. A standard CMS presents a camera feed on a monitor and undergoes alterations, augmentations, or condensations before being displayed. The implications of these technologies are discussed with findings from an experiment examining the impact of information reduction on a time-to-contact (TTC) estimation task. In experiments, observers judged the TTC of approaching cars based on the synthetic display of a futuristic CMS. Promisingly, TTC estimations were unaffected by information reduction.

Overall, the submission is somewhat innovative but tediously written. The following are some comments on possible improvements.

1. The "Error! Reference source not found." errors need to be corrected.

2. There are two figures numbered Figure 8.

3. In the experiments, the hardware setup needs to be described, and the experimental procedure should also be illustrated with a flowchart.

4. There are too many symbols in Table 2. Please provide their meanings and their connections (equations) to the input data and the output results, and especially how they lead to the conclusions or contributions.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report (New Reviewer)

Comments and Suggestions for Authors

The reviewer would like to thank the editor and was pleased to review this manuscript. This study investigates synthetic displays and their potential for driver assistance systems. The topic is interesting and fits with the scope of the journal. Before the final recommendation, some concerns are listed for further consideration by the authors:

(1) Figure 3 could be enriched with illustrative samples, which would make it easier for readers to follow.

(2) Figure 5 and Figure 6 seem similar and could be merged.

(3) Synthetic environments have been widely used in engineering to assist computer-vision tasks, including object recognition and scene modeling. The following papers could be added to the introduction to enrich the idea of using synthetic displays for solving engineering applications.

Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds. Sensors, 2022, 22(2), 532.

Vision-based multi-level synthetical evaluation of seismic damage for RC structural components: a multi-task learning approach. Earthquake Engineering and Engineering Vibration, 2023.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors

The comments have been addressed, and I have no other questions.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

In the article, the authors present the concept of using a synthetic display and describe the experiment on which their conclusions are based. Although I cannot imagine enjoying driving and the views outside the car window at the same time, I will try to make some suggestions that can be included in the article.


Note 1.

When using visible-light cameras, infrared cameras, lidar sensors, and thermal imaging, we face the problem of imaging resolution. Each of these sensors can capture an image; however, the smallest visible spatial area in each of the images will differ. Thermal imaging cameras have lower resolution than visible-light cameras. Lidar, depending on its mode of operation, may not register a pipe supporting road signs with a diameter of up to 20-30 mm in front of the car.

 

Will it be possible to combine images with different resolutions in real time, and at what cost? How much computing power will we need just for the task of combining images? Which of the images will be selected as the most important?

 

Note 2. If we use a synthetic display, what dimensions and resolution should it have to display the details we are interested in?

 

Note 3. If we remove unnecessary elements from the image in real-time, who will take responsibility for deciding what is redundant and what is not?

 

How should an image analysis algorithm be developed and what computing power do we need to perform such a task?

 

Note 4. A characteristic feature of driving a vehicle is the variability of ambient conditions, lighting conditions, and weather conditions. Should the image on a synthetic display always be similar regardless of the variability outside the car?

 

Will this reduce the driver's vigilance?

Will this limit the driver's ability to detect threats?

 

Note 5. One of the dysfunctions that has appeared in people working with monitors, tablets, and mobile phones is a significant narrowing of the viewing angle. This harms many aspects of a person's life, including maintaining balance. Won't the introduction of this type of display exacerbate this dysfunction?

 

Note 6. The authors presented the experiment, the results, and their analysis. Please consider running the experiment as described in the article, but under different lighting conditions, with acoustic interference, and with obstacles (in front of the car) simultaneously appearing on the main screen that the driver should avoid. This should better describe the driver's behavior when two or more pieces of information appear on both monitors.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

 

The manuscript submitted to Information (information-2793543), titled "Synthetic displays and their potential for driver assistance systems", is about the use of synthetic displays in the automotive cockpit designed to supplement the out-of-window view, such as the analog speedometer, artificial horizon, and projected overlays (speedometer, maps); more specifically, it is about the use of camera-monitor systems (CMS) designed to replace rear-view mirrors. This research is done in the context of a time-to-contact (TTC) estimation task. The result is that estimations are unaffected by the information reduction, which highlights the significance of the visual reference frame.

 

The structure of the paper (1. Introduction, 2. Experiment, 3. Discussion) is mainly a categorization of in-car displays, with the historical development from airplanes to cars of three subcategories of displays (1. natural: through windows and mirrors; 2. video-based: classical and laparoscopic surgery; 3. synthetic: camera-monitor systems (CMS) and synthetic vision).

 

ABOUT SECTION 1 - INTRODUCTION

 

Because the manuscript is about in-vehicle displays, Section 1.1 ("Synthetic displays") should be made of subsections corresponding to the evolution of in-vehicle displays, which are more and more computerized, up to SVS, already in airplanes and next in autonomous cars, and, as such, introducing Section 1.2 (A taxonomy of displays).

 

The Wikimedia Figure 1 is not useful, being only illustrative with no content. Include instead a useful reference about NASA's SVS.

 

Section 1.2 (A taxonomy of displays) is an important contribution of the manuscript. However, the taxonomic tree should be improved. Because Figure 3 is about "A taxonomy of artificial and natural displays", it should be about in-vehicle displays and about human vision.

-       It should not be about the synthetic vision system (SVS) in airplanes or autonomous cars; a distinction should be made with Enhanced Flight Vision Systems (EFVS), which use real-time sensor input to present an enhanced visual image of the outside view

-       It should not include surgical procedures on the images captured by a miniaturized camera inside the laparoscopic instrument.

 

Thus, the taxonomy (ontology) can be made of:

            1 -  Displays (according to Technology)

                        1.1 – No-screen

                                    1.1.1 – Window (human vision based on natural light)

                                    1.1.2 – Mirror (human vision based on reflected natural light)

                        1.2 – Screen

                                    1.2.1 – Video-based (human vision based on camera captured light)

                                    1.2.2 – Computer-based (human vision based on computed light)

 

Note that, in the line of Flohr et al. (2023) [9], SVS could be introduced in the discussion section to show and explain to drivers the autonomous driving decisions.


ABOUT SECTION 2 - EXPERIMENT

 

The experimental design was about "evaluating the effects of the factors vehicle type (lines 383-406), reference visibility (416-418), and clutter condition (419-426); an rmANOVA was conducted with these factors". Thus, it is a quite different topic than the literature review.

 

Note that there are no hypotheses but a question: "Can a synthetic display constitute a case of paradoxical enhancement, where reducing information enhances the display and improves performance?"

 

The procedure is described as follows: "Subjects had to estimate when the approaching vehicle would have reached their position (prediction motion paradigm). They indicated this moment with a keypress. After providing an estimate, a new vehicle was placed in the scene. Upon another keypress, the trial began and the vehicle started moving at a constant speed. No audio was presented during the experiment". Thus, the only task is to watch a car approach for 1.5 seconds at varying constant speeds and to press a key at the estimated time-to-contact.
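As a rough illustration (not the authors' code, and with purely hypothetical numbers), the prediction-motion paradigm quoted above reduces to a simple relation: for a constant-speed approach, the ground-truth time-to-contact is distance divided by speed, and the participant's keypress time is compared against it.

```python
# Minimal sketch of the prediction-motion (TTC estimation) paradigm.
# All numeric values are illustrative, not taken from the study.

def true_ttc(distance_m: float, speed_mps: float) -> float:
    """Ground-truth time-to-contact for a constant-speed approach."""
    return distance_m / speed_mps

def estimation_error(distance_m: float, speed_mps: float,
                     keypress_time_s: float) -> float:
    """Signed error: positive means the participant responded too late."""
    return keypress_time_s - true_ttc(distance_m, speed_mps)

# A vehicle 40 m away closing at 20 m/s reaches the observer in 2.0 s.
print(true_ttc(40.0, 20.0))                          # 2.0
print(round(estimation_error(40.0, 20.0, 1.86), 2))  # -0.14 (early keypress)
```

An early keypress (negative error) corresponds to an underestimated TTC, the typical finding in prediction-motion tasks.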

 

1)    This was done in a no-driving experiment (see Figure 4). A driving simulator could be used,

2)    Figure 7 (effect of vehicle types) and Figure 8 (effect of varying the outside context by clutter condition: full cues, reduced clutter, and isolated target) show no difference among experimental conditions; mean differences were within standard-error variations. However, lines 389-392 and Table 2 state significant differences for vehicle type (.001), reference visibility (.03), and clutter condition (.07): « This pattern is consistent across all three clutter conditions (see Figure 8). Descriptively, there are slight differences among the three clutter conditions, with the TTC estimates, on average, being largest for the schematic representation (M = 1.95 s, SD = 0.75 s), slightly shorter for the full information condition (M = 1.93 s, SD = 0.71 s), and shortest for the isolated target condition (M = 1.86 s, SD = 0.79 s). However, these three variations do not differ significantly (see Error! Reference source not found.; F(2, 52) = 2.78, p = .071, η²p = .10). Thus, the removal of information, as presented in the two reduced variants, did not have a substantial impact on TTC estimation ».
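The quoted effect size can at least be checked for internal consistency: for an rmANOVA effect, partial eta squared can be recovered from the F statistic and its degrees of freedom as F·df1 / (F·df1 + df2). A quick sketch using the values quoted above (illustrative only):

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Partial eta squared recovered from an F statistic and its dfs."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Values quoted for the clutter condition: F(2, 52) = 2.78
print(partial_eta_squared(2.78, 2, 52))  # ≈ 0.10, matching the reported η²p = .10
```

So the F value and the reported η²p = .10 are mutually consistent, even though the effect itself is non-significant (p = .071).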

 

Thus, the experimental results are not coherent, and the conclusion that reducing information about the driving situation has no effect could be dangerous.

 

This manuscript therefore contains numerous errors and inaccuracies.

 

RECOMMENDATIONS

-       Please verify and improve the English, mostly on pages 1-3

-       Avoid journalistic comments; instead, develop arguments about solving ergonomics problems (explain why "Both authors observed that task performance in different vehicle control tasks improved with indirect vision systems, especially when using a higher field of view" (lines 97-98))

-       In the text (Section 1.2, A taxonomy of displays), category 3 is introduced and labelled (line 142); also introduce categories 1 and 2

-       Recommendation is to modify the taxonomy of inside-car displays as mentioned before (removing at least “Laparoscopic surgery”).

-       Section 2: “Experiment” instead of “the experiment”.

-       Have 2.1 Method and 2.2 Results subsections

-       Line 291 and everywhere: "subjects" is demeaning. Use "participants". A participant is a person who voluntarily participates in a study.

-       “To test the hypotheses”, not “To analyze”

-       Avoid naming a car Manufacturer (Tesla)

-       Line 423: "This effect was also confirmed in the rmANOVA (see Error! Reference source not found.)"

-       Line 423: "However, these three variations do not differ significantly (see Error! Reference source not found.; F(2, 52) = 2.78, p = .071, η²p = .10)."

-       The captions of Figures 7 and 8 and of Table 2 are incomplete

Comments on the Quality of English Language

-       Please verify and improve the English, mostly on pages 1-3

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This well-written manuscript reports the results of an experiment that aimed to examine the impact of information reduction on a time-to-contact (TTC) estimation task. The observers' judgments of the TTC of approaching cars based on the synthetic display of a futuristic CMS demonstrated that TTC estimations are not affected by information reduction. In the introduction, the authors have presented and discussed view-altering synthetic displays in general and camera-monitor systems (CMS) designed to replace rear-view mirrors as a special instance of a novel synthetic display in the automotive domain. The implications of these technologies are also discussed, along with findings from an experiment examining the impact of information reduction on a TTC estimation task. The study also emphasizes the significance of the visual reference frame.

The methodology of the study was developed and described correctly. The results were presented in a logical, clear and detailed manner. The authors also pointed out prospects for future work in an objective manner.

In the discussion, the limitations of the study are not presented. I suggest authors add such a subsection in the discussion section. The limitations must be identified and discussed. Likewise, add a discussion on threats to validity and reliability. 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
