Article
Peer-Review Record

NewsSumm: The World’s Largest Human-Annotated Multi-Document News Summarization Dataset for Indian English

Computers 2025, 14(12), 508; https://doi.org/10.3390/computers14120508
by Manish Motghare 1,*, Megha Agarwal 2,* and Avinash Agrawal 1,3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 1 October 2025 / Revised: 6 November 2025 / Accepted: 18 November 2025 / Published: 23 November 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper presents a large-scale dataset for multi-document news summarization, covering Indian English articles from 2000 to 2025. It includes more than 300,000 fact-checked summaries created by a large pool of annotators/volunteers. While the dataset has potential value, the methodology lacks transparency in several areas, including the annotation process, training procedures, and factual verification methods. This work could be further enhanced with the following suggestions and comments.

1. In the Related Work section, there seems to be an important gap in terms of setting the proper background regarding key summarization approaches, particularly the distinction between abstractive and extractive summarization. A clear explanation of these two methods is essential for readers to fully understand the nuances of your study, especially given the context of multi-document summarization. The current manuscript evaluates only abstractive systems and lacks a discussion justifying that choice. The authors should: (i) describe the comparison between extractive and abstractive methods and summarize their tradeoffs; recent related reviews include: Giarelis, N., Mastrokostas, C., & Karacapilidis, N. (2023). Abstractive vs. Extractive Summarization: An Experimental Review. Applied Sciences, 13(13), 7620; Shakil, H., Farooq, A., & Kalita, J. (2024). Abstractive text summarization: State of the art, challenges, and improvements. Neurocomputing, 603, 128255. (ii) explain why an abstractive approach is preferred for their task (e.g., required paraphrasing, compression or fluency needs). (iii) report one or more simple extractive baselines (e.g., TextRank, LexRank). Extractive baselines are cheap, often preserve factual content better, and are expected by readers as a minimum comparison; failing to include them weakens claims that abstractive methods are superior for the task. 

2. The citation format is inconsistent when citing multiple references (e.g., in line 26 you mark the references [14] [20] in separate square brackets, while in line 30 you mark them within the same [34, 57]). Furthermore, several citation markers are placed next to a word without the proper spacing (e.g., line 28 CNN/DailyMail[59]). Lastly, despite using numbered citation styling, the authors use an alphabetically ordered reference list. This makes it impossible to follow the referenced works and consequently the validity of their claims. If the authors are unfamiliar with the proper citation styling, I suggest that they consult and adhere to the MDPI Reference List and Citations Style Guide.

3. The MultiNews dataset referenced in line 29 is not a single-document summarization dataset, as stated by the authors. The authors use only three dataset examples to claim that all previous works remain largely single-document, Western-centric, or rely on noisy, automatically generated summaries. I suggest that they analyze this generalization further and make a clear distinction between these points, since the introduction is only half a page long.

4. The statistical analysis of the proposed dataset in Section 4 (counts of articles, sentences, words, tokens, category distribution, number of articles per year, etc.) is informative and adds a degree of transparency. However, it remains largely descriptive. Such statistics are standard in dataset introductions and, on their own, do not constitute a standalone research contribution as presented in the Introduction section. The authors could strengthen this section by adding deeper analytical insights such as comparative statistics with existing benchmark datasets. To claim the dataset analysis as a contribution, the authors need to go beyond surface-level counts.

5. The authors acknowledge the expertise of the annotators at several points in the manuscript. However, the manuscript alternates several times between the terms volunteers and annotators without clarifying whether these refer to the same group. Moreover, the large number of annotators raises questions regarding their alleged expertise, and the manuscript provides no specific details about their training process. To assess the annotation quality and reproducibility, the paper should provide more details about the training procedure (e.g., duration, content, examples used, supervision format), the evaluation of annotator readiness (e.g., qualification tests, pilot rounds, accuracy thresholds), and the quality control measures applied during data collection (e.g., the standards that were set for a valid summary). I also suggest including quantitative or descriptive statistics on the annotators'/volunteers' backgrounds (e.g., language proficiency, education, or domain expertise).

6. The proposed dataset includes news articles spanning a time period of 25 years. The authors claim that the annotated summaries underwent a fact-checking process. However, a temporal validity issue regarding the factual consistency, and therefore the dataset’s credibility, needs to be addressed. Specifically, the manuscript fails to explain how factual verification was handled across such a long period, since facts and entities may evolve or become outdated. Relying solely on annotators’ intuition or general knowledge cannot ensure factual accuracy or reproducibility of the dataset. In order to strengthen the credibility of the dataset, I suggest that the authors clarify whether any sources or databases were consulted during verification, what the fact-verification protocol was, and how factual disagreements were handled.

Author Response

We thank the reviewer for their careful reading and detailed suggestions, which have helped us improve the manuscript’s rigor and clarity.

Comment 1
In the Related Work section, there seems to be an important gap in terms of setting the proper background regarding key summarization approaches, particularly the distinction between abstractive and extractive summarization. A clear explanation of these two methods is essential for readers to fully understand the nuances of your study, especially given the context of multi-document summarization. The current manuscript evaluates only abstractive systems and lacks a discussion justifying that choice. The authors should: (i) describe the comparison between extractive and abstractive methods and summarize their tradeoffs; recent related reviews include: Giarelis, N., Mastrokostas, C., & Karacapilidis, N. (2023). Abstractive vs. Extractive Summarization: An Experimental Review. Applied Sciences, 13(13), 7620; Shakil, H., Farooq, A., & Kalita, J. (2024). Abstractive text summarization: State of the art, challenges, and improvements. Neurocomputing, 603, 128255. (ii) explain why an abstractive approach is preferred for their task (e.g., required paraphrasing, compression or fluency needs). (iii) report one or more simple extractive baselines (e.g., TextRank, LexRank). Extractive baselines are cheap, often preserve factual content better, and are expected by readers as a minimum comparison; failing to include them weakens claims that abstractive methods are superior for the task. 

Response 1
We sincerely thank the reviewer for this constructive and thoughtful feedback. We have fully addressed all three concerns by adding substantial new content and clarity to the manuscript:

(i) Abstractive vs. Extractive Background — Section 2.2 Added:
We have introduced a new Section 2.2, “Abstractive vs. Extractive Summarization Paradigms” (lines 120–138), which now provides:

  • Clear definitions distinguishing extractive approaches (sentence selection) from abstractive approaches (rephrasing/compression)

  • A comparison of trade-offs: extractive methods excel at factuality and risk reduction but may yield fragmentary summaries, while abstractive methods improve coherence and conciseness but require enhanced quality control

  • Explicit discussion and citation of recent surveys, including Giarelis et al. (2023) and Shakil et al. (2024), as advised by the reviewer

(ii) Justification for Abstractive Approach:
Within Section 2.2, we now explicitly articulate the rationale for choosing abstractive annotation in NewsSumm:

  • News articles within a cluster frequently include redundant or overlapping information, making paraphrasing and synthesis essential

  • Multi-source input requires narrative coherence and contextualization, which are best achieved by abstractive strategies

  • Readers and downstream applications require fluent, concise, and non-redundant summarization

  • NewsSumm’s protocol mandates compression, paraphrasing, and 50–250 word summaries to enable these goals

(iii) Extractive Baselines Added — New Section 6.4 and Table 10:
To ensure complete transparency and rigor, Section 6.4 (“Extractive Baseline Comparison,” lines 394–406) and Table 10 now report benchmark results for two classic extractive algorithms (LexRank, TextRank).

  • These baselines, implemented as recommended, consistently underperform compared to neural abstractive models (by 15–17 ROUGE-L points), reinforcing the value and necessity of abstractive annotation for this dataset

  • The performance gap also demonstrates that NewsSumm’s summaries go beyond sentence selection—requiring genuine abstraction for high-quality outputs
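
For illustration, a minimal sketch of a TextRank-style extractive baseline is given below; it assumes plain-text input sentences, and the function name and toy sentences are illustrative placeholders rather than the exact implementation used for Table 10.

```python
# Minimal sketch of a TextRank-style extractive baseline (illustrative only;
# not the exact implementation behind Table 10).
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def textrank_summary(sentences, num_sentences=2):
    """Rank sentences by centrality in a TF-IDF cosine-similarity graph."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)           # pairwise sentence similarity
    graph = nx.from_numpy_array(sim)         # weighted similarity graph
    scores = nx.pagerank(graph)              # centrality score per sentence
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen = sorted(ranked[:num_sentences])  # keep original document order
    return " ".join(sentences[i] for i in chosen)


# Toy multi-document cluster: pool sentences from all articles, then select
# the top-ranked ones as the extractive summary.
cluster_sentences = [
    "The Lok Sabha passed the bill on Tuesday.",
    "Opposition members staged a walkout before the vote.",
    "The bill allocates Rs 500 crore for rural road projects.",
    "Analysts said the vote was largely along party lines.",
]
print(textrank_summary(cluster_sentences, num_sentences=2))
```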

Overall, these revisions provide the precise methodological context, justification, and empirical comparisons the reviewer requested, and should significantly improve the manuscript’s clarity and value for the community.

Comment 2
The citation format is inconsistent when citing multiple references (e.g., in line 26 you mark the references [14] [20] in separate square brackets, while in line 30 you mark them within the same [34, 57]). Furthermore, several citation markers are placed next to a word without the proper spacing (e.g., line 28 CNN/DailyMail[59]). Lastly, despite using numbered citation styling, the authors use an alphabetically ordered reference list. This makes it impossible to follow the referenced works and consequently the validity of their claims. If the authors are unfamiliar with the proper citation styling, I suggest that they consult and adhere to the MDPI Reference List and Citations Style Guide.

Response 2
We sincerely thank the reviewer for identifying these important formatting inconsistencies. We have now meticulously revised the manuscript to employ strict MDPI numbered citation style throughout, fully resolving all related concerns.

Comment 3
The MultiNews dataset referenced in line 29 is not a single-document summarization dataset, as stated by the authors. The authors use only three dataset examples to claim that all previous works remain largely single-document, Western-centric, or rely on noisy, automatically generated summaries. I suggest that they analyze this generalization further and make a clear distinction between these points, since the introduction is only half a page long.

Response 3
We appreciate the reviewer’s insightful comments regarding generalization and clarity in our manuscript. The original text mistakenly grouped MultiNews with single-document limitations, which was misleading. To clarify, we have added two comprehensive paragraphs (lines 29–40, 44–53 and 94–104) that distinguish three key limitations in existing summarization benchmarks:

  • Document scope: single-document datasets (such as CNN/DailyMail and XSum) are limited to individual articles and lack multi-source synthesis, while multi-document datasets like MultiNews address scale but use auto-generated rather than human-written summaries.

  • Annotation quality: MultiNews relies on automatically generated content from metadata or heuristics, introducing noise and factual errors, whereas NewsSumm uniquely provides professionally authored, human-annotated summaries.

  • Geographic and cultural bias: most benchmarks, both SDS and MDS, reflect Western English and embed cultural assumptions from North America and Europe.

NewsSumm is introduced as the first large-scale human-annotated MDS resource for Indian English, directly addressing these gaps. This restructuring ensures a clear distinction between document scope, annotation quality, and geographic bias, and makes explicit the unique contribution of NewsSumm in tackling these limitations.

Comment 4
The statistical analysis of the proposed dataset in Section 4 (counts of articles, sentences, words, tokens, category distribution, number of articles per year, etc.) is informative and adds a degree of transparency. However, it remains largely descriptive. Such statistics are standard in dataset introductions and, on their own, do not constitute a standalone research contribution as presented in the Introduction section. The authors could strengthen this section by adding deeper analytical insights such as comparative statistics with existing benchmark datasets. To claim the dataset analysis as a contribution, the authors need to go beyond surface-level counts.

Response 4
We appreciate the reviewer’s comment regarding the depth of our statistical analysis. In response, we have added Section 4.6, “Analytical Insights of NewsSumm” (lines 286–299), which moves well beyond descriptive statistics to provide richer comparative and empirical analysis:

  • Comparative analysis: NewsSumm uniquely combines a large scale (over 317,000 articles), broad topical coverage (20+ categories), a lengthy temporal span (2000–2025), and 100% human-generated summaries, setting it apart from any existing MDS benchmark (see Section 5.1, Table 5).

  • Compression ratio analysis: NewsSumm’s compression ratio (3.58 words, 3.55 tokens) places it between extractive datasets (such as CNN/DailyMail) and more abstractive resources (such as XSum), quantifying the dataset’s modeling challenge and informativeness (a minimal computation sketch follows below).

  • Linguistic and cultural analysis (Table 6): quantifies unique aspects of Indian English, including vocabulary divergence (e.g., "crore" vs. "million," "ration card" vs. "welfare"), code-switching rates (8–12% Hindi/Urdu mix), and references to political institutions (e.g., "Lok Sabha"), none of which appear in prior Western datasets.

  • Empirical validation (Section 6.3): a ~10 ROUGE-L point gap between Western-trained and NewsSumm-fine-tuned PEGASUS models underscores the necessity and impact of culturally specific datasets.

Overall, these new analyses establish NewsSumm as not merely another dataset, but a methodologically grounded and empirically validated resource that fills crucial gaps in culturally representative, large-scale abstractive summarization.
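
As a minimal illustration of how such a compression ratio can be computed (average source length divided by summary length), consider the sketch below; the field names and toy records are placeholders, not the released NewsSumm schema.

```python
# Minimal sketch of a corpus-level word compression ratio, assuming each record
# has "article_text" and "human_summary" fields (placeholder names).
from statistics import mean


def word_compression_ratio(records):
    """Average ratio of source words to summary words across a corpus."""
    ratios = []
    for rec in records:
        src_len = len(rec["article_text"].split())
        sum_len = len(rec["human_summary"].split())
        if sum_len > 0:
            ratios.append(src_len / sum_len)
    return mean(ratios)


# Toy records: a 720-word article with a 200-word summary, and so on.
sample = [
    {"article_text": "word " * 720, "human_summary": "word " * 200},
    {"article_text": "word " * 540, "human_summary": "word " * 150},
]
print(f"word-level compression ratio: {word_compression_ratio(sample):.2f}")
```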

Comment 5
The authors acknowledge the expertise of the annotators at several points in the manuscript. However, the manuscript alternates several times between the terms volunteers and annotators without clarifying whether these refer to the same group. Moreover, the large number of annotators raises questions regarding their alleged expertise, and the manuscript provides no specific details about their training process. To assess the annotation quality and reproducibility, the paper should provide more details about the training procedure (e.g., duration, content, examples used, supervision format), the evaluation of annotator readiness (e.g., qualification tests, pilot rounds, accuracy thresholds), and the quality control measures applied during data collection (e.g., the standards that were set for a valid summary). I also suggest including quantitative or descriptive statistics on the annotators'/volunteers' backgrounds (e.g., language proficiency, education, or domain expertise).

Response 5
We sincerely thank the reviewer for emphasizing the critical importance of annotator transparency. In response, we have significantly expanded Section 3.2 to provide comprehensive information on annotator expertise, training, and demographics:

  • Terminology: we now explicitly state (lines 188–199) that “annotators” and “volunteers” refer to the same cohort, over 14,000 trained individuals coordinated by the Suvidha Foundation, thus eliminating ambiguity.

  • Training procedure: all annotators completed a structured 7–11 hour program consisting of orientation, in-depth annotation manuals, 10–15 hands-on practice tasks, and direct supervision with feedback. Advancement required at least 90% accuracy on gold-standard summaries, and ongoing weekly calibration sessions minimized annotation drift.

  • Quality standards: we define seven explicit standards for a valid summary (lines 201–210), including length constraints, non-trivial abstraction, coverage, factual agreement, neutrality, and high inter-annotator agreement (an illustrative automated check appears below).

  • Annotator demographics (Table 1): 61% of annotators hold university degrees, 34% have prior journalism/writing experience, 91% possess high English proficiency, 80% are fluent in Hindi, and their geographic distribution spans 21 Indian states, establishing them as an educated, linguistically skilled, domain-aware group.

  • Quality control and temporal validity (lines 201–210): all summaries are strictly grounded in article content, ambiguities are cross-checked using archival sources, senior editors review edge cases, and annotators are required to avoid hindsight bias. We conduct annual blind audits (5% per year) to ensure both accuracy and the absence of retrospective distortion.

Together, these measures demonstrate strong annotator credibility and guarantee dataset reproducibility and transparency.
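
As an illustrative example of how two of these standards (the 50–250 word length window and non-trivial abstraction) could be checked automatically, a small sketch follows; the novel-bigram threshold is an assumed value for illustration only, not a figure from the manuscript.

```python
# Minimal sketch of automated checks for two summary-quality standards:
# a 50-250 word length window and a minimum share of novel bigrams
# (a common proxy for abstraction). The 0.15 threshold is illustrative.
def bigrams(tokens):
    return set(zip(tokens, tokens[1:]))


def passes_basic_checks(article, summary, min_novel_bigram_ratio=0.15):
    sum_tokens = summary.split()
    if not 50 <= len(sum_tokens) <= 250:
        return False                      # outside the mandated length window
    summary_bigrams = bigrams(sum_tokens)
    if not summary_bigrams:
        return False
    novel = summary_bigrams - bigrams(article.split())
    novelty = len(novel) / len(summary_bigrams)
    return novelty >= min_novel_bigram_ratio  # require some genuine rephrasing
```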

Comment 6
The proposed dataset includes news articles spanning a time period of 25 years. The authors claim that the annotated summaries underwent a fact-checking process. However, a temporal validity issue regarding the factual consistency, and therefore the dataset’s credibility, needs to be addressed. Specifically, the manuscript fails to explain how factual verification was handled across such a long period, since facts and entities may evolve or become outdated. Relying solely on annotators’ intuition or general knowledge cannot ensure factual accuracy or reproducibility of the dataset. In order to strengthen the credibility of the dataset, I suggest that the authors clarify whether any sources or databases were consulted during verification, what the fact-verification protocol was, and how factual disagreements were handled.

Response 6
We greatly appreciate the reviewer’s attention to the challenge of ensuring factual integrity across a 25-year dataset. In response, we have implemented and documented (lines 201–210) a rigorous, date-stamped fact-verification protocol to safeguard temporal validity. Annotators never rely solely on memory or intuition; instead, every fact is cross-checked using contemporaneous archival sources, such as the Internet Archive’s Wayback Machine, India’s PIB, PRS Legislative Research, official government portals, and news archives corresponding to the original publication date. All summary content is strictly grounded in the corresponding source article, with time-sensitive claims flagged for senior editorial review. Any ambiguous or disputed statements are escalated and resolved using verified, timestamped documentation from these archival sources; when verification is not possible, problematic summaries are revised or excluded.

Annotators are explicitly instructed to avoid any retrospective bias: facts must be accurate as of the article’s date and never supplemented with later developments. We further enforce this through a 5% annual QA audit to verify compliance. This multifaceted approach ensures complete traceability, rules out “hindsight” errors, and enables future researchers to fully reproduce the fact-checking workflow. Our protocol, combining date-matched archival evidence with senior editorial oversight and documented decisions, provides robust factual consistency across the entire 25-year NewsSumm corpus.
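
To make the date-matched archival lookup concrete, the sketch below shows how a snapshot close to an article’s publication date could be retrieved via the Internet Archive’s public availability endpoint; it illustrates the general idea rather than the exact tooling used during annotation.

```python
# Minimal sketch: find an archived snapshot closest to an article's publication
# date using the Internet Archive's availability endpoint. Illustrative only;
# not the exact tooling used in the NewsSumm workflow.
import requests


def closest_snapshot(url, pub_date_yyyymmdd):
    """Return the Wayback Machine snapshot URL nearest the given date, if any."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": pub_date_yyyymmdd},
        timeout=10,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None


# Example: a snapshot of a news site as it appeared around 15 March 2004.
print(closest_snapshot("https://www.thehindu.com/", "20040315"))
```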

Reviewer 2 Report

Comments and Suggestions for Authors

In this article, the authors present a new large text summarization dataset that contains articles in Indian English. Generally, I find the article well written; however, I have a few comments.

The authors explain that this new dataset contains text in Indian English, and they also give examples of the differences between Indian English and Western English. The performance of the models is then evaluated using this dataset as training/fine-tuning data and test data. It would be interesting to see how systems trained only on Western English perform on this test set. This would give further emphasis to the importance of language/culture-specific datasets.

The ending of the sentence from lines 79–81 does not seem completely clear to me: "..., with others like CRD and QMSum" ... what? Did the authors mean to say that CRD and QMSum are somehow different from the datasets mentioned in the first part of the sentence?

Figure 1 does not present what is described in the text. It is just a graphical listing of the datasets. The figure could be improved by placing the datasets on a timeline, with colors indicating annotation strategies and the size of the circles indicating the size of the datasets.

In line 248, did the authors mean figure 3?

Figure 3 may be better placed above the heading "3.4 Alignment with Global Standards".

Also, there are some small inconsistencies between the description and what is in Figure 3. One is "source" vs. "newspaper"; the other (more important) is that the figure says annotation ID, which is not in the description, while the description mentions URL, which is not in the figure.

I would suggest more consistency in Table 2. In the Totals part it is sentences, then words, then tokens, but in the Averages part it is first words, then sentences, then tokens.

In figure 4, all bars after a length of 1250 are not visible anymore. Could a log scale for the number of articles give a more informative chart?

Line 122 says that the abstracts are from 50 to 200 words, but Figure 5 suggests 250.

Very important!!! The citing style is not correct. Referencing is done with numbers, but the references are listed without numbers.

Author Response

We greatly appreciate the reviewer’s time and detailed feedback, which have helped us substantially improve the manuscript’s clarity, consistency, and overall quality. Thank you for your valuable input and assistance.

Comment 1
The authors explain that this new dataset contains text in Indian English, and they also give examples of the differences between Indian English and Western English. The performance of the models is then evaluated using this dataset as training/fine-tuning data and test data. It would be interesting to see how systems trained only on Western English perform on this test set. This would give further emphasis to the importance of language/culture-specific datasets.

Response 1

Thank you for your valuable suggestion. In response, we have added Section 6.3 ("Cross-Lingual Performance Gap") to the manuscript, evaluating PEGASUS, BART, and T5 models pre-trained exclusively on Western datasets (e.g., CNN/DailyMail, XSum) without fine-tuning on NewsSumm. The results demonstrate a substantive ~10 ROUGE-L point performance gap, quantitatively validating the importance of culturally-specific datasets:

  • PEGASUS (Western-only): ROUGE-L = 29.4%

  • PEGASUS (NewsSumm fine-tuned): ROUGE-L = 39.6%

  • Performance gap: ~10 ROUGE-L points

These findings are now presented in Table 9 and discussed in Section 6.3 (Lines 383–393). Additionally, we have enriched the methodological depth of the work by adding Section 3.4 and Table 10 (Lines 223–232). For further insight into dataset motivation and scope, please also see Lines 63–66.
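
For readers who wish to reproduce this kind of ROUGE-L comparison, a minimal sketch using the open-source rouge_score package is given below; the reference and system outputs are toy placeholders, not NewsSumm data or actual model outputs.

```python
# Minimal sketch of a ROUGE-L comparison with the rouge_score package.
# The reference and system outputs below are toy placeholders.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

reference = "The Lok Sabha approved Rs 500 crore for rural roads on Tuesday."
western_only = "Parliament approved new funding for roads this week."
fine_tuned = "The Lok Sabha approved Rs 500 crore for rural road projects on Tuesday."

for name, prediction in [("Western-only", western_only), ("Fine-tuned", fine_tuned)]:
    score = scorer.score(reference, prediction)["rougeL"].fmeasure
    print(f"{name}: ROUGE-L F1 = {score:.3f}")
```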

Comment 2
The ending of the sentence from lines 79–81 does not seem completely clear to me: "..., with others like CRD and QMSum" ... what? Did the authors mean to say that CRD and QMSum are somehow different from the datasets mentioned in the first part of the sentence?

Response 2
We thank the reviewer for highlighting the need for clarity. In response, we have revised lines 94–104 to explicitly distinguish CRD and QMSum from other MDS datasets. The revised text now reads:

"Two collections occupy distinct subdomains: CRD is centered on synthesizing opinions from conversational user and product reviews, while QMSum addresses query-focused meeting summarization. Unlike conventional news or scientific MDS datasets, both CRD and QMSum are either conversational or query-driven in nature and employ annotation protocols different from the standard multi-document summarization paradigm."

This revision makes the distinctions and dataset scopes clear to the reader.

Comment 3
Figure 1 does not present what is described in the text. It is just a graphical listing of the datasets. The figure could be improved by placing the datasets on a timeline, with colors indicating annotation strategies and the size of the circles indicating the size of the datasets.

Response 3

We have completely redesigned Figure 1 as a timeline matrix to directly address all reviewer concerns. The updated visualization presents datasets along a chronological x-axis (2002–2026), using bubble area (with a logarithmic scale for clarity) to represent dataset size. Color coding distinguishes annotation strategies: green for extractive, cyan for human-abstractive, gold for auto/weakly-supervised, and purple for mixed methods. Single-document summarization (SDS) datasets are clearly separated (upper rows) from multi-document summarization (MDS) datasets (lower rows). Notably, NewsSumm is highlighted with a bold black outline, underscoring its status as the largest human-annotated MDS dataset for Indian English. This redesign transforms Figure 1 from a simple listing into a comparative and analytical visualization, revealing both the evolution of dataset scale and annotation quality trends, while powerfully illustrating NewsSumm’s unique position. The figure now delivers immediate visual insights into the summarization dataset landscape and directly supports the manuscript’s central claims.
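
A minimal matplotlib sketch of this style of timeline bubble chart is shown below; the dataset names, years, sizes, and color assignments are illustrative approximations rather than the exact values plotted in the revised Figure 1.

```python
# Minimal sketch of a timeline bubble chart for summarization datasets.
# Years, sizes, and annotation-strategy colors are illustrative approximations.
import numpy as np
import matplotlib.pyplot as plt

datasets = [
    # (name, year, approx. size in documents, annotation strategy, row: 1=SDS, 0=MDS)
    ("DUC",           2002,      500, "human-abstractive", 0),
    ("CNN/DailyMail", 2015,  300_000, "auto/weak",         1),
    ("XSum",          2018,  220_000, "auto/weak",         1),
    ("MultiNews",     2019,   56_000, "auto/weak",         0),
    ("NewsSumm",      2025,  317_000, "human-abstractive", 0),
]
colors = {"extractive": "green", "human-abstractive": "tab:cyan", "auto/weak": "gold"}

fig, ax = plt.subplots(figsize=(8, 3))
for name, year, size, strategy, row in datasets:
    area = 30 * np.log10(size)                      # log-scaled bubble area
    ax.scatter(year, row, s=area, color=colors[strategy],
               edgecolors="black" if name == "NewsSumm" else "none",
               linewidths=2 if name == "NewsSumm" else 0)
    ax.annotate(name, (year, row), textcoords="offset points",
                xytext=(0, 12), ha="center", fontsize=8)
ax.set_yticks([0, 1])
ax.set_yticklabels(["MDS", "SDS"])
ax.set_xlim(2000, 2027)
ax.set_xlabel("Year of release")
plt.tight_layout()
plt.show()
```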

Comment 4
In line 248 (it is line 148), did the authors mean Figure 3?

Response 4
We thank the reviewer for catching this; the reference in line 148 should indeed be to Figure 3a-b, and this has now been corrected in the manuscript.

Comment 5
Figure 3 may be better placed above the heading "3.4 Alignment with Global Standards". Also, there are some small inconsistencies between the description and what is in Figure 3. One is "source" vs. "newspaper"; the other (more important) is that the figure says annotation ID, which is not in the description, while the description mentions URL, which is not in the figure.

Response 5

We have thoroughly addressed all three concerns as follows:

  1. Figure 3 has been repositioned to immediately follow Section 3.3 (Data Cleaning) and now precedes Section 3.4 (Alignment with Global Standards), as recommended (after line 215). Consistent terminology has been enforced—“source (newspaper)” is now used uniformly throughout Sections 3.3–3.4 and in all Figure 3 captions, resolving prior inconsistencies.

  2. Field Alignment: Figures 3a and 3b now clearly display all eight core data fields:

    • source (newspaper)

    • publication date

    • headline

    • article text

    • human summary

    • category/tags

    • URL (for source traceability; previously missing from figure)

    • annotator ID (for quality auditing; previously in figure but absent from text)

This ensures perfect correspondence between the figures and the text, addressing the reviewer’s concerns about mismatches and enhancing overall manuscript clarity.
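
As an illustration of how a single record containing these eight fields might look, a hypothetical example is given below; the field names and values are placeholders and may differ from the released NewsSumm schema.

```python
# Hypothetical NewsSumm-style record illustrating the eight core data fields.
# Field names and values are placeholders, not the released schema.
record = {
    "source": "The Hindu",                        # source (newspaper)
    "publication_date": "2004-03-15",
    "headline": "Lok Sabha clears rural roads bill",
    "article_text": "The Lok Sabha on Tuesday passed ...",
    "human_summary": "Parliament approved Rs 500 crore for rural roads ...",
    "category_tags": ["politics", "infrastructure"],
    "url": "https://www.thehindu.com/news/...",   # source traceability
    "annotator_id": "ANN-04217",                  # quality auditing
}
```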

Comment 6
I would suggest more consistency in Table 2. In the Totals part it is sentences, then words, then tokens, but in the Averages part it is first words, then sentences, then tokens.

Response 6
We appreciate the reviewer's feedback regarding table organization. In response, we have comprehensively restructured Table 3 (formerly Table 2) to ensure a consistent and logical ordering across both sections. The Totals section now lists statistics in the order: Sentences → Words → Tokens, and the Averages section mirrors this ordering. This uniform structure significantly enhances readability and reduces cognitive effort for readers comparing dataset statistics.

Comment  7
In figure 4, all bars after a length of 1250 are not visible anymore. Could a log scale for the number of articles give a more informative chart?

Response 7
We appreciate the reviewer’s suggestion regarding Figure 4’s visibility. To address this, we have converted the y-axis from a linear to a logarithmic scale, which now makes the distribution of longer articles (≥1,250 words) clearly visible in the histogram. The revised figure caption (line 251) explicitly states: “Distribution of article lengths in words with a logarithmic y-axis. The log scale reveals the long-tail region (≥1,250 words) that is not visible under a linear scale.” This change provides a more informative visualization—letting readers see the true spectrum of article lengths in NewsSumm, from concise reports to in-depth investigative pieces—while preserving the statistical properties of the data.
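
A minimal matplotlib sketch of this change is shown below; the sampled article lengths are synthetic placeholders standing in for the real distribution.

```python
# Minimal sketch: article-length histogram with a logarithmic y-axis so the
# long tail (>= 1,250 words) stays visible. Lengths here are synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
lengths = rng.lognormal(mean=6.2, sigma=0.5, size=100_000)  # synthetic word counts

fig, ax = plt.subplots()
ax.hist(lengths, bins=60)
ax.set_yscale("log")            # the key change: log-scaled article counts
ax.set_xlabel("Article length (words)")
ax.set_ylabel("Number of articles (log scale)")
plt.show()
```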

Comment 8
Line 122 says that the abstracts are from 50 to 200 words, but Figure 5 suggests 250.

Response 8
We thank the reviewer for catching this inconsistency. The summary length range is now consistently reported as 50–250 words throughout the manuscript, accurately reflecting the full span from concise briefs to detailed summaries, and ensuring alignment between the text and Figure 5.

Comment 9
Very important!!! The citing style is not correct. Referencing is done with numbers, but the references are listed without numbers.

Response 9
This issue has been fully addressed: all references are now correctly numbered and consistently formatted, in line with the MDPI citation style—thank you for helping us improve the manuscript’s clarity.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors have fully addressed my comments on the previous version, so I suggest acceptance of this paper.

Reviewer 2 Report

Comments and Suggestions for Authors

My concerns from the first review have been properly addressed.

I would maybe suggest only one cosmetic change. In Figure 1, there is some unused vertical space, while the circles are overlapping. Maybe the authors could try to increase the vertical spacing for circles in the same year to avoid overlapping, and then decide which option looks better.

 
