1. Introduction
Three-dimensional (3D) printing technology allows the computer-aided production of personalized 3D objects from a selected material. In medicine, the incorporation of living cells into the process to create different tissues takes the technology a step further [1]. In the past few years, 3D printing applications have become widespread in dentistry because of their ease of use and the accuracy they provide. Many procedures, such as the fabrication of orthodontic appliances, clear aligners, fixed prostheses, and surgical and occlusal splints, can be completed in a shorter time with 3D printing [2]. This shortens chairside time and allows more effective and comfortable treatment for patients.
The Internet provides a platform for sharing knowledge from a range of sources, on computers and smartphones, with fast and easy access, and without any cost. Thus, the internet has become a popular information source worldwide [
3,
4]. About half of the global population has access to the internet, with this figure reaching 90% or higher in many industrialized nations [
5]. Additionally, the internet usage rate is 86% in the United States [
6]. The internet is increasingly being used in many areas such as communication, shopping, education, and health [
7,
8].
In the past few years, the number of patients seeking health-related information online, including topics related to dentistry and orthodontics, has been on the rise [
9]. This increasing demand for health information with internet infrastructure has led to the creation of thousands of websites [
10]. Nevertheless, the overwhelming amount of information available online creates virtual information pollution, making it challenging for individuals to find accurate and trustworthy sources. The precision and credibility of information on these websites should therefore be evaluated [
11]. This proliferation of information necessitates a critical evaluation of online sources, particularly for niche yet rapidly evolving fields like 3D printing in dentistry, where misinformation could have clinical implications. Consequently, assessing the quality and reliability of web-based information pertaining to the applications and technologies of 3D printing in dentistry becomes paramount. Three-dimensional printing is gradually being integrated into digital dental workflows, with recent reviews describing expanding clinical applications across prosthodontics, implantology, orthodontics, and maxillofacial surgery [
12]. However, despite its widespread adoption and the broad range of dental devices it can fabricate, the clinical efficacy and long-term effectiveness of 3D-printed devices in many medical fields, including dentistry, remain under rigorous investigation. Ongoing research is therefore needed to optimize 3D printing strategies and materials for dental applications, addressing current limitations such as a restricted material selection and demanding post-processing requirements. Regarding costs, entry-level hardware prices have fallen, yet the total cost of ownership can still be substantial for smaller practices once ongoing expenses are considered, including validated resins, finishing units, maintenance, software, training, and quality assurance. Survey data identify 'high financial investment' as a leading reason for non-adoption, although systematic reviews show that, for some prosthodontic workflows, digital or hybrid protocols can reduce chairside and laboratory time and be cost-effective at scale [
12].
Given the increasing integration of digital technologies in modern dental practice and the rapid advances in additive manufacturing, the aim of this study was to assess the reliability, quality, and readability of online information sources related to '3D printing in dentistry'. We hypothesized that blog pages would achieve higher DISCERN scores than commercial pages, that readability would not differ meaningfully by host type, and that JAMA authorship and attribution would be less frequently satisfied on commercial pages.
2. Materials and Methods
This cross-sectional study evaluated the quality, transparency, and readability of publicly available websites that provide information about three-dimensional printing in dentistry. The study protocol, including eligibility criteria, classification rules, and scoring procedures, was defined before data collection and was applied in the same way to all pages.
A Google search was conducted using the exact term ‘3D printing in dentistry’. We restricted screening to Google to mirror typical user behavior and maximize reproducibility, noting that most patients begin health information seeking on general-purpose search engines. Sponsored links were excluded and duplicate URLs were collapsed prior to eligibility screening. The search was performed in an incognito browser window after clearing cache, cookies, and history to reduce personalization. The search language was English, and the location setting was United States. The first one hundred unique results were recorded for screening, as illustrated in the flow diagram.
Webpages were eligible if they were freely accessible to the public, primarily discussed applications, workflows, or implications of three-dimensional printing within dentistry, and contained sufficient running text to allow readability analysis. Pages were excluded if they were duplicates, peer-reviewed journal articles, book chapters, social media posts, video-only links, pure advertisements or landing pages without substantive text, or technology pages that were not specific to dentistry. After screening, seventy-five webpages met the criteria and were included in the analysis (
Figure 1).
Each included page was classified by dominant purpose. “Blog” pages provided editorial or informational content intended to explain or compare options without a primary sales focus. “Commercial” pages belonged to companies, clinics, manufacturers, or vendors and promoted products or services. When a page mixed purposes, we assigned the category that reflected the majority of the main text and the page’s calls to action. In unclear cases, we consulted the “About” or “Company” section and classified on that basis.
For every eligible webpage the following information was extracted: the uniform resource locator, the host type (blog or commercial), and the publication or update date when available. Quality of information was appraised with the DISCERN instrument. DISCERN is a brief, structured instrument designed to appraise the quality of written patient information on treatment choices. It focuses on features that influence decision making, including reliability of the text, balanced presentation of benefits and harms, clarity about uncertainties, and explicit acknowledgment of alternative options. The instrument comprises 16 items. Items 1–8 assess reliability and items 9–15 assess the quality of information on treatment choices. Each item is scored from 1 to 5, where 1 indicates serious shortcomings and 5 indicates minimal shortcomings. Items 1–15 are summed to yield a total score ranging from 15 to 75, and item 16 provides an overall quality rating from 1 to 5. Scoring followed the official handbook guidance. While rating each webpage, we examined whether the text described benefits and harms in a balanced way, identified alternative treatments where relevant, and stated important areas of uncertainty. When a criterion was only partially met, intermediate scores were assigned. For reader-facing interpretation, total scores are also presented in descriptive bands from very poor to excellent. These bands are reported for context only; all statistical analyses were performed on raw DISCERN scores. Rater training and standardization were undertaken before data collection.
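For readers unfamiliar with the instrument, the scoring arithmetic described above can be sketched as follows. The band cutoffs are a widely used convention rather than part of the official instrument, and the function names are illustrative only.

```python
def discern_total(item_scores):
    """Sum DISCERN items 1-15 (each rated 1-5) into a total of 15-75.

    `item_scores` is a list of fifteen integer ratings; item 16
    (overall quality) is reported separately and is not summed.
    """
    if len(item_scores) != 15:
        raise ValueError("DISCERN Parts 1 and 2 comprise 15 scored items")
    if any(not 1 <= s <= 5 for s in item_scores):
        raise ValueError("each item is rated 1 (serious) to 5 (minimal shortcomings)")
    return sum(item_scores)


def discern_band(total):
    """Map a total score to a descriptive band.

    These cutoffs are a common convention used for context only,
    as noted in the text; statistics are run on raw scores.
    """
    if total >= 63:
        return "excellent"
    if total >= 51:
        return "good"
    if total >= 39:
        return "fair"
    if total >= 27:
        return "poor"
    return "very poor"
```

For example, a page rated 3 on every item totals 45 and falls in the 'fair' band, whereas a total of 54 falls in the 'good' band.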
Reliability and transparency were further evaluated with the accountability benchmarks proposed by the Journal of the American Medical Association (JAMA). Operational definitions were specified a priori. Authorship was coded present only when the page named at least one responsible author together with professional credentials or role on the page itself. Corporate or departmental authorship counted only when a responsible person, team, or committee was explicitly identified. A site-wide footer listing the organization alone did not satisfy authorship. Attribution was coded present only when the page provided explicit references or direct links to identifiable external sources such as primary studies, clinical guidelines, textbooks, or authoritative institutional pages. Generic phrases like “studies show” without a citation, or links that point only to internal marketing pages, were not considered attribution. Disclosure was coded present when the page clearly stated site ownership and any advertising or commercial policy in language understandable to a typical reader. Cookie notices and generic privacy banners were not counted as disclosure unless they explicitly described ownership or commercial relationships relevant to the content. Currency was coded present when the page displayed an explicit publication date or a visible “last updated” date on the page under review. An auto-generated copyright year in the footer did not qualify. If both publication and update dates were present, we recorded the most recent date. All four benchmarks were recorded as present or absent for each page using these rules. Ambiguous or incomplete cases were coded absent to avoid overestimation. Elements were assessed on the page itself without navigating to separate site sections, and the location of each positive finding was noted for audit trail and consistency. 
The same trained examiner applied the benchmarks to all pages following a brief calibration phase that used exemplar pages to anchor judgments.
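As an illustration of how the four benchmarks reduce to present/absent codes, the decision rules above can be sketched as a simple checklist. The `Page` fields are hypothetical names for the features the examiner recorded; they are not part of any published instrument.

```python
from dataclasses import dataclass


@dataclass
class Page:
    named_author: bool          # responsible person/team identified on the page itself
    external_citations: bool    # explicit references or links to identifiable sources
    ownership_statement: bool   # ownership/commercial policy stated in plain language
    visible_date: bool          # explicit publication or "last updated" date on the page


def jama_benchmarks(page: Page) -> dict:
    """Code the four JAMA benchmarks as present (True) or absent (False).

    Ambiguous or incomplete cases should already be resolved to False
    upstream, mirroring the 'code absent to avoid overestimation' rule.
    """
    return {
        "authorship": page.named_author,
        "attribution": page.external_citations,
        "disclosure": page.ownership_statement,
        "currency": page.visible_date,
    }
```

A page with a named author, disclosure, and a visible date but no cited sources would thus satisfy three of the four benchmarks.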
We computed the Flesch Reading Ease Score (FRES) using the standard specification, in which average sentence length and the average number of syllables per word determine the score. Values range from 0 to 100, and higher values indicate easier reading. As a practical guide, scores around 60–70 are typical for middle-school texts, whereas scores near 30–50 are considered difficult for lay readers. We report group means and confidence intervals to contextualize differences by host type.
We computed the Flesch–Kincaid Grade Level (FKGL) on the same cleaned text. This index translates average sentence length and average syllables per word into an approximate United States school grade level. For example, a value near 8 suggests suitability for grade-8 readers, while values around 12–13 imply high-school to college-level demand. We interpret group differences relative to commonly recommended targets for patient-facing materials.
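Both indices derive from the same two text statistics. A minimal sketch using the standard Flesch coefficients is shown below; syllable counting, the error-prone step in practice, is assumed to have been done already, so the functions take raw counts.

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: roughly 0-100, higher = easier to read."""
    asl = words / sentences   # average sentence length
    asw = syllables / words   # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw


def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: approximate US school grade."""
    asl = words / sentences
    asw = syllables / words
    return 0.39 * asl + 11.8 * asw - 15.59
```

For instance, a 100-word passage in 5 sentences with 150 syllables yields FRES ≈ 59.6 and FKGL ≈ 9.9, noticeably easier than the averages observed in this study.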
For very long webpages, we analyzed a standardized segment of approximately 400–600 consecutive words drawn from the main body to ensure comparability across pages. All readability calculations followed the same pre-processing steps and were performed by the same examiner following a pre-specified protocol.
All assessments were carried out by a single examiner who studied the DISCERN handbook and piloted the scoring on ten webpages that were not part of the final sample to standardize judgments such as balance, discussion of benefits and risks, and reporting of uncertainties and alternatives. Because only one examiner performed the ratings, inter-rater reliability could not be calculated and is acknowledged as a limitation. The study analyzed information that is openly available on the Internet and did not involve human participants.
Descriptive statistics were calculated for all variables. Comparisons between blog pages and commercial pages were performed using either the independent-samples t test or the Mann–Whitney U test for continuous outcomes, after assessment of normality with the Shapiro–Wilk test, and using the chi-square test or the Fisher exact test for categorical outcomes such as the presence of the Journal of the American Medical Association benchmarks. The significance level was set at 0.05. Statistical analyses were conducted with SPSS Statistics (version 21; IBM Corp., Armonk, NY, USA).
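The test-selection logic described above can be sketched as follows. This is an illustrative re-implementation in Python with SciPy, whereas the actual analysis was run in SPSS, and the helper names are our own.

```python
from scipy import stats


def compare_continuous(a, b, alpha=0.05):
    """Choose independent-samples t test vs. Mann-Whitney U based on
    Shapiro-Wilk normality in both groups, as in the analysis plan."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue


def compare_categorical(table):
    """Chi-square when expected counts permit, otherwise Fisher's
    exact test (2x2 tables only)."""
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if (expected < 5).any():
        return "Fisher exact", stats.fisher_exact(table)[1]
    return "chi-square", p
```

For example, a 2×2 table of present/absent counts such as [[27, 6], [11, 31]] passes the expected-count check and is analyzed with the chi-square test.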
3. Results
Among the seventy-five eligible webpages, thirty-three (44.0%) were classified as blogs and forty-two (56.0%) as commercial pages (
Figure 2). Mean quality scores favored blogs across all DISCERN domains (
Table 1). For DISCERN Part 1 (reliability), blogs scored 30.24 ± 4.78 versus 23.50 ± 7.40 for commercial pages (
p < 0.001), and for Part 2 (quality of treatment information) the corresponding values were 23.85 ± 5.02 versus 19.93 ± 6.49 (
p = 0.006) (
Figure 3). Consequently, the total DISCERN score was higher for blogs (54.09 ± 8.79) than for commercial pages (43.43 ± 13.06;
p < 0.001). Overall quality (DISCERN item 16) showed a similar pattern (3.55 ± 0.94 vs. 2.83 ± 1.06;
p = 0.005) (
Figure 4).
Readability did not differ meaningfully between groups (
Table 1). The Flesch Reading Ease Score averaged 41.13 ± 11.32 for blogs and 39.18 ± 11.73 for commercial pages (
p = 0.996), while the Flesch–Kincaid Grade Level averaged 12.60 ± 2.35 and 13.04 ± 2.71, respectively (
p = 0.753). Consistent with these findings, effect sizes were negligible, indicating that both groups required similarly high reading levels.
Evaluation of the JAMA benchmarks highlighted a marked difference in authorship (
Table 2). Authorship details were present in 27 of 33 blog pages (81.8%) but only 11 of 42 commercial pages (26.2%;
p < 0.001). In contrast, attribution or sources remained low across both categories (blogs 8/33, 24.2%; commercial 4/42, 9.5%;
p = 0.084). Disclosure was almost universally provided (blogs 31/33, 93.9%; commercial 40/42, 95.2%;
p = 1.000), and currency was high in both groups (blogs 30/33, 90.9%; commercial 39/42, 92.9%;
p = 1.000).
Figure 5 visualizes the distribution of benchmark compliance, while detailed counts appear in
Table 2.
Overall, blogs provided consistently higher information quality than commercial pages without any corresponding advantage in readability. The combination of high disclosure and currency with limited authorship and source attribution suggests that most pages are timely and transparent about ownership, yet frequently under-document their evidence base.
4. Discussion
The development of 3D printers, alongside broader technological advances, has opened new perspectives in digital dentistry. By reducing dependence on external laboratories, 3D printers deliver faster and more practical results than traditional methods, and their use is consequently becoming widespread among dentists. Their range of applications in dentistry, together with fast, low-error production and personalized design, makes them highly important to the field [
13,
14,
15].
People increasingly turn to the internet to retrieve health information, mainly because health awareness has grown and online searching is easy and almost cost-free. As a result, the dialog between doctor and patient is changing [16]. However, it should not be forgotten that the accuracy and quality of information obtained from the internet may be low [17]. Content published online is not subject to any systematic oversight before it reaches readers. For this reason, the DISCERN and JAMA criteria were used in this study to assess the reliability, readability, and accuracy of the information available on the websites.
When readers do not understand medical terms, they are more likely to misjudge benefits and risks, to recall less information, and to disengage from recommended actions. The added cognitive load also reduces perceived credibility and increases the chance of turning to less reliable sources. Using plain language, short definitions, and simple visuals improves comprehension and supports informed decisions [
10]. Beyond specialized terms and long sentences, several qualitative features can depress readability. Dense paragraphs with few headings, small fonts with low contrast, cluttered layouts with heavy navigation or advertising, complex or unlabeled tables and figures, inconsistent terminology, and the absence of clear summaries all make content harder to process. Unexplained acronyms and brand names, abrupt switches between technical detail and marketing language, and weak information architecture can further burden readers and obscure the main message. To address these issues, we recommend a plain-language redesign centered on one main message, short paragraphs organized with descriptive headings, everyday wording with brief in-text definitions of key terms, and simple visuals with informative captions. Summary boxes that state indications, benefits, risks, and alternatives, along with a brief 'what to do next' section, help readers act on the information. Consistent terminology across the page, adequate font size and contrast, and streamlined page elements also support better comprehension and navigation [
17,
18].
This study shows that publicly available webpages on 3D printing in dentistry are, on average, of only moderate quality and are written at a level that is hard for lay readers. Blog-type pages score higher than commercial pages on DISCERN because they more often present a balanced view, discuss benefits and risks, and note alternatives. In short, editorial content tends to be better than marketing copy. This pattern is consistent with prior reviews of online dental and medical information, which also find wide variation and generally below-ideal quality [
19,
20,
21].
Applying the JAMA accountability benchmarks revealed a clear transparency gap. Most pages provided a disclosure and showed a recent update date; however, few identified a responsible author, and even fewer cited their sources, a pattern consistent with reports from other specialties [
3,
22]. Two straightforward fixes would address this gap: list the author(s) with their credentials or role, and link every treatment claim to primary studies or professional guidance. These practices also meet DISCERN expectations for presenting benefits, risks, and uncertainties with appropriate references [
23]. Many websites do not cite sources for practical and organizational reasons, including marketing oriented brevity, lack of editorial policies that require references, avoidance of external links, reliance on secondary summaries, and templates without a references field; these factors align with the low authorship and attribution rates reported across specialties [
18].
Readability of websites can be objectively measured using the FRES [
24]. In a study conducted by Uzunçıbuk et al. evaluating the readability of online orthodontic information, the average Flesch Reading Ease Score (FRES) was reported as 58.60 [
25]. Meade et al., the average FRES was found to be 59.81, which is in the very difficult readability category [
26]. In our study, average FRES was about 40, and FKGL was around 13. In other words, both blogs and commercial pages expect readers to have high-school to college-level skills. This matches large studies showing that online patient materials are often written above recommended levels [
27]. Major organizations advise writing for a sixth- to eighth-grade reading level. The Centers for Disease Control and Prevention's Clear Communication Index suggests practical steps: state one main message, use everyday words, add visuals to support key points, and clearly present benefits and risks [
18]. Readability for both host types was low by Flesch standards (mean FRES ≈ 41 for blogs and ≈ 39 for commercial pages), and FKGL estimates were approximately 12th–13th grade, well above the sixth- to eighth-grade targets recommended for public-facing health materials; applying plain-language checklists (e.g., the CDC Clear Communication Index) can help reduce this gap [
18].
Low readability can weaken the clinical connection by increasing time spent clarifying basic concepts, complicating shared decision making, and reducing adherence to instructions (e.g., appliance wear, denture hygiene, and post-operative care). Recent work shows that tailoring and simplifying patient materials improves usability and adherence in dental settings, while randomized trials indicate that plain-language editing tools can make written health information easier to understand without sacrificing accuracy. Applying structured checklists (e.g., the CDC Clear Communication Index) when creating or curating clinic-linked webpages may therefore reduce counseling burden and support more effective patient actions [
18,
28].
Although our findings are consistent with broader reports of variable but often suboptimal quality and low readability, this pattern is unlikely to be uniform worldwide. Studies in non-English contexts (e.g., Arabic-language dental topics) report differing quality–readability profiles, and cross-language comparisons show that average reading-grade estimates can diverge across languages when using language-specific formulas. In addition, cross-country indicators point to variation in digital health use and digital health literacy, which likely shapes both the information patients encounter and how well they understand it. These observations support extending future work to multiple languages, platforms, and regions [
29].
Blogs generally perform better than commercial websites because they use an explanatory tone and present options side by side, while reporting benefits, risks, and alternatives more consistently, a pattern reflected in DISCERN scores and documented in several dentistry studies [
23,
30]. In contrast, commercial pages are built for promotion, so they emphasize advantages and often omit limitations or competing options, which has been linked to weaker accountability and missing source attribution in audits across specialties [
22,
31]. Despite these differences in quality, both formats remain hard to read because they rely on technical terms about printing technology, materials and workflow, and they frequently use long sentences, which together raise the reading level to high school or higher [
3,
27]. Authors should write in plain language, present one clear main message, prefer everyday words, use simple figures to support key points, and state benefits and risks explicitly to improve access and understanding [
18]. Overall, information quality and readability are related but distinct goals, and each requires a targeted strategy in content design and editorial practice. The approximately 11-point higher DISCERN total observed for blogs (54) compared with commercial pages (43) indicates a substantive improvement in how treatment information is framed for readers. On a 15-item, 1–5 scale this corresponds to roughly 0.7 points per item, indicating more consistent attention to benefits and harms, alternatives, and uncertainty, the core constructs described in the DISCERN handbook. In commonly used descriptive bands, this magnitude moves content from 'fair' toward 'good' quality, a shift likely to improve clarity and balance for readers [
18].
These findings have clear clinical and educational implications. Clinicians can reduce time spent correcting misunderstandings by directing patients to vetted plain language explainers on dental three-dimensional printing. Professional societies and academic departments should publish open pages that meet the JAMA benchmarks and are designed to meet readability targets, using clear narrative, labeled visuals, and short explainer videos. Industry should name responsible authors, cite sources, and add plain language summaries and simple benefit–risk tables to support informed choice. These steps align with guidance on accountability and patient communication [
20,
26].
We chose DISCERN and the JAMA benchmarks because they match our research question on treatment related webpages and provide results that are easy to compare across sites. DISCERN rates how well a page presents benefits and harms, reports alternatives, and acknowledges uncertainty, so it captures the quality of treatment information directly. The JAMA benchmarks add basic transparency checks for authorship, sources, disclosure, and dates, so readers can see who is responsible for the content and whether it is traceable and up to date. Together, these tools provide complementary quantitative outputs that align with our design and pair well with FRES and FKGL readability scores. Other tools serve different aims. PEMAT focuses on understandability and actionability. IPDAS is intended for formal decision aids that compare options and probabilities. For studies centered on communication quality and transparency rather than decision aids, DISCERN plus JAMA offers the most direct fit [
32,
33].
The findings partially supported our hypotheses. Blog pages achieved higher DISCERN scores than commercial pages across Part 1, Part 2, and the total score, and readability metrics did not differ meaningfully by host type. The expectation concerning the JAMA benchmarks was only partly borne out: authorship was indeed identified far less often on commercial pages, whereas attribution was similarly low in both groups, and disclosure and currency were high regardless of host type.
Methodological strengths include a protocol defined in advance, the combined use of DISCERN to assess treatment information and the JAMA benchmarks for transparency, and analysis with two measures of reading ease. Despite these strengths, several limitations warrant caution. First, the study relied on one search engine and one language, so the findings may not generalize across regions or audiences. Second, one examiner rated all pages, and agreement between independent raters was not measured. Third, measures of reading ease estimate text difficulty rather than understanding, and page layout, type style, and the use of images or video can affect comprehension in ways not captured by these measures.