Editorial

5 Years of BioMedInformatics: The Impact of Artificial Intelligence

by
Alexandre G. de Brevern
1,2
1
Université Paris Cité and Université de la Réunion, INSERM, EFS, BIGR U1134, DSIMB Bioinformatics Team, F-75015 Paris, France
2
Université Paris Cité and Université de la Réunion, INSERM, EFS, BIGR U1134, DSIMB Bioinformatics Team, F-97715 Saint Denis Message, France
BioMedInformatics 2026, 6(2), 10; https://doi.org/10.3390/biomedinformatics6020010
Submission received: 21 January 2026 / Accepted: 13 February 2026 / Published: 25 February 2026

Abstract

BioMedInformatics is an international, peer-reviewed, open access journal that covers all areas of biomedical informatics, computational biology, and medicine. Established in 2021, the journal is now five years old and reflects the evolution of the field through its consistent thematic focus on Artificial Intelligence (AI)-driven diagnosis and prediction, with a particular emphasis on translational clinical decision support and biomedical signal and imaging analysis. Despite the predominance of AI-related topics, classical bioinformatics remains a major focus, in particular the discovery of biomarkers and the development of data resources. This editorial summarises this evolution, which mirrors that of the field as a whole.

1. BioMedInformatics

The very first article in BioMedInformatics was entitled “A New Journal for the New Decade to Publish Biomedical Informatics Research.” Dr. Jörn Lötsch highlighted the explosion of bioinformatics approaches in the biomedical field in recent years and therefore the importance of this new journal [1]. He emphasised that biomedical research increasingly relies on digital and computational approaches that have evolved from supportive tools into a core scientific discipline. This shift enables the transition from predominantly hypothesis-driven research to data-driven biomedical discovery, driven by advances in computational intelligence, machine learning, and advanced statistical methods [2].

2. Artificial Intelligence

The last few years have witnessed a significant scientific revolution with the dramatic entry of what is commonly referred to as “Artificial Intelligence” (AI) into all scientific disciplines. This term is often misunderstood because it encompasses several very different approaches. Indeed, without going back as far as Pascal’s machines, the first definitions of AI can be found in 1943 with McCulloch and Pitts defining the formal neuron [3]. We might also consider Rosenblatt’s research in 1958 concerning a perceptron composed of electronic components [4], as well as the machine learning (ML) approaches of Bryson and Ho and their backpropagation advancement in 1969 [5], which would only truly be completed in the 1980s with artificial neural networks (ANNs) [6], or the seminal developments of Vapnik, which introduced support vector machines (SVMs) [7]. In reality, however, most scientists associate AI with recent deep learning (DL) approaches, which led to the development of AlphaFold [8,9] by DeepMind, earning its creators the Nobel Prize in Chemistry in 2024 [10,11]. The journal BioMedInformatics has not escaped this wave sweeping through our discipline. Given this journal’s fifth anniversary, now seems like a good opportunity to analyse the impact of this revolution on this young journal, as well as to examine the themes that are being highlighted.

3. Methodology

A list of all papers and citations published by BioMedInformatics was downloaded from https://www.mdpi.com/journal/biomedinformatics (accessed on 1 January 2025). Affiliations were taken from the individual manuscripts. Some parsing and analyses were assisted by ChatGPT 5.2 and Gemini 3, but the results were primarily produced and re-checked manually. R version 4.5.2 [12] was used for the analyses and figure generation with the “ggplot2” [13] and “Rcartogram” [14] packages.

4. Overview

A total of 334 articles were published in BioMedInformatics between 2021 and 2025. Three were removed from this analysis: one retracted article, its retraction statement, and a list of acknowledgments. The number of articles published has increased over the years, stabilising at around 70 per year (13 in 2021, 49 in 2022, 70 in 2023, 127 in 2024, and 72 in 2025). Only early 2024 exceeded the current quarterly average.
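As a quick sanity check, the per-year counts above can be tallied in a few lines; a minimal Python sketch using only the figures reported in this editorial (the actual analysis was carried out in R):

```python
# Yearly publication counts as reported in the editorial (2021-2025).
yearly_counts = {2021: 13, 2022: 49, 2023: 70, 2024: 127, 2025: 72}

total = sum(yearly_counts.values())  # 334 published minus 3 removed
shares = {year: round(100 * n / total, 1) for year, n in yearly_counts.items()}

print(total)          # 331
print(shares[2024])   # 38.4 -> 2024 alone holds ~38% of the retained papers
```

The per-year shares confirm that 2024 was an outlier in volume while the surrounding years sit near the ~70-paper plateau.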
BioMedInformatics offers a large number of possible publication types. However, two of them comprise the majority: articles (78.9% of all publications: 261 papers) and reviews (14.8%: 49 papers). Far behind are editorials (1.8%: 6 papers) and brief reports (1.2%: 4 papers). The other eight publication types (including technical notes, communications, and perspectives) are more anecdotal, each accounting for only one or two publications. The number of reviews was limited in the first year (but so was the total number of papers, i.e., 1 in 17); reviews represented 20% of publications from 2022 to 2024 and decreased sharply in 2025 (7.5%: 5 reviews). This is reflected in the percentage of classic articles, which fluctuates between 68.6% and 77.6% before reaching 93.1% in 2025 due to the decrease in the number of reviews (see Table 1A and Figure S1).

5. Impact of AI on BioMedInformatics

The abstracts and keywords of these papers were provided to ChatGPT 5.2 and Gemini 3 to distinguish between AI and non-AI papers. Due to discrepancies between the different methods and the sensitivity of some prompts, the classification was manually corrected; the final classification comprises four groups of AI papers and four groups of non-AI papers. Each article was then assigned to a cluster based on its main scientific contribution (i.e., not simply on the abstract and keywords).
Among the AI articles, group A1 comprises studies in which machine learning or deep learning is primarily applied to medical images or physiological signals. Typical data include radiology or pathology images, as well as time series such as ECGs or EEGs. Tasks include event detection, classification, segmentation and recognition. Group A2 includes AI work focused on clinical prediction and decision support. The main goal here is to predict outcomes, stratify risks, or support clinical decisions based on individual patient data. Group A3 comprises methodologically oriented AI articles where the main contribution lies in the AI methodology itself—for example, comparative model evaluation, comparisons, explainability, robustness, or evaluation practices, rather than in a specific medical use case. Group A4 corresponds to AI applied to biological and omics data, including biomarker discovery, feature selection, and biological pattern detection using learned models.
For articles that are not directly related to AI, Group N1 covers classical bioinformatics and computational biology. This includes sequence–structure–function analyses, evolutionary studies, and mechanistic modelling using deterministic or physics-based approaches. Group N2 includes databases, datasets and FAIR/infrastructure contributions whose primary value lies in creating, curating and standardising resources and facilitating data reuse. Group N3 comprises statistical and mathematical studies that rely on classical inference or analytical modelling without training machine learning models. This includes non-AI-related signal processing and hypothesis-driven quantitative work. Finally, group N4 encompasses software tools, pipelines, workflows and applied biomedical or clinical analyses that do not use AI models as a primary method. The emphasis here is on practical implementation and reproducibility, or on producing non-AI-related applied results.
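For readers who wish to reuse this scheme, the eight clusters can be encoded as a simple lookup table; a Python sketch with labels paraphrased from the definitions above (the cluster codes A1–A4/N1–N4 are the editorial's, the dictionary itself is illustrative):

```python
# Illustrative encoding of the eight-cluster taxonomy; labels are
# paraphrased from the editorial's definitions.
CLUSTERS = {
    # AI papers
    "A1": "ML/DL applied to medical images or physiological signals",
    "A2": "Clinical prediction and decision support",
    "A3": "AI methodology, evaluation, and explainability",
    "A4": "AI applied to biological and omics data",
    # Non-AI papers
    "N1": "Classical bioinformatics and computational biology",
    "N2": "Databases, datasets, and FAIR/infrastructure resources",
    "N3": "Statistical and mathematical studies (classical inference)",
    "N4": "Non-AI software tools, pipelines, and applied analyses",
}

def is_ai(code: str) -> bool:
    """A paper counts as AI-related if its cluster code starts with 'A'."""
    return code.startswith("A")
```

Such a table makes the AI/non-AI split a mechanical property of the code rather than a separate annotation, which helps keep year-on-year comparisons consistent.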
Overall, the A1–A4 and N1–N4 structure provides a straightforward representation of the journal’s content. The AI groups cover imaging/signals, clinical prediction, methods/explainability, and biology/biomarkers, while the non-AI groups cover the journal’s foundations in bioinformatics, resources/FAIR principles, rigorous quantitative methods, and software/workflows. This allows for year-on-year comparisons and the sharing of consistent definitions. However, the impact of human intervention on an article’s primary category should not be overlooked. The various AI approaches achieved over 90% consensus, but the final decision was mine. I apologise in advance if this biases the analysis slightly.
Unintentionally, this classification is strongly reflected in the analysis of the domain carried out by Professor Carson K. Leung in his editorial entitled “Biomedical Informatics: State of the Art, Challenges, and Opportunities” [15]. He clearly describes all AI clusters (A1 to A4) and non-AI clusters N1 and N3, with only N2 (FAIR databases/resources) not being a primary focus (with no primary contribution regarding the building/maintenance of a resource). As for N4 (software tools/pipelines as primary contribution), the tools were mentioned as examples (e.g., an implementation on GitLab).
Similarly, Professor M. Michael Gromiha and his colleagues, in their editorial “From Code to Cure: The Impact of Artificial Intelligence in Biomedical Applications,” [16] focus on groups A2/A4 (biomedical prediction and omics), with a strong A3 component (emphasis on explainable AI) and clear examples of category A1. Their section dedicated to “Challenges” (bias, overfitting/underfitting, interpretability) is noteworthy, and a comprehensive section on explainable AI, focusing on SHAP/LIME and the transparency issues associated with the use of “black box” models, is essential.
Relatedly, in the editorial “Research on the Application and Interpretability of Predictive Statistical Data Analysis Methods in Medicine,” [17] Professor Pentti Nieminen focuses on three of the clusters: (i) A3 (AI Methodology/Explainability/Evaluation Practices), with interpretability, the black box problem, XAI methods, the importance of features, visual explanations, parallel/mixed expert approaches, and model evaluation/communication; (ii) A2 (Clinical Prediction/Decision Support), which is addressed as the target application context (diagnosis/prognosis, outcome prediction, treatment options, clinical implementation, and acceptance); and (iii) N3 (Statistics/Classical Inference and Presentation, Meta-Analysis/Effect Sizes). He emphasises that clinical research often aims to understand rather than simply predict.
Table 1B presents the distribution of these different clusters (see also Figure S2). The thematic balance is remarkably stable between 2021 and 2025: articles on AI are slightly in the majority (54.38% vs. 45.62%), with their annual share remaining within a narrow range (approximately 50% to 62%). The only notable shift is observed in 2023, when AI and non-AI articles were equivalent (50/50). However, this is more of a transitional equilibrium than a lasting trend, as AI regains its slight majority in 2024–2025.
Within the journal, the main area of AI publication is domain A1 (images/signals). It accounts for the largest number of articles (80; 24.17%), and despite the journal’s growth, it has remained generally stable, hovering around 22–26% between 2022 and 2025. In other words, despite the increasing volume of publications, BioMedInformatics continues to regularly publish work on machine learning in imaging and signals rather than focusing on another key theme. Domain A2 (clinical prediction/decision support) is the second major pillar of AI with 53 articles (16.01%), and it has shown similar year-on-year stability (around 15%). Together, domains A1 and A2 account for around 40% of all articles over this five-year period. This indicates that the journal’s AI identity is based primarily on applied, clinically grounded machine learning (e.g., diagnostic/prognostic modelling and image/signal analysis) rather than purely methodological AI.
At the same time, Table 1B shows that the AI theme is expanding beyond the core A1/A2. A3 (the methodology, evaluation, and explainability of AI) was absent in 2021 but became a recurring component from 2022 onwards, reaching around 7–8% between 2023 and 2025. Similarly, A4 (omics/biological AI) emerges after 2021, stabilising at around 6–8% between 2023 and 2025. This trend reflects diversification: BioMedInformatics is no longer solely publishing work on applied clinical AI but is increasingly welcoming research on the evaluation and interpretation of models (A3), as well as use cases in molecular biology and omics (A4). These two areas often signal the maturation of a specialised AI journal, as previously noted in the editorials [1,15,16,17,18].
In non-AI-related fields, the most important message is one of balance rather than decline, which is perhaps unexpected. The three main pillars—N1 (classical bioinformatics/computational biology), N2 (datasets/resources/FAIR infrastructure), and N3 (statistics/mathematical inference)—are all substantial, and their weight remains similar throughout the period (approximately 12–14% each). Thus, BioMedInformatics is not limited to AI. On the contrary, it maintains a mixed ecosystem in which AI coexists with mechanistic/structure–function bioinformatics, resource development, and data management, and traditional quantitative/statistical studies. The smallest category, N4 (non-AI applied tools, pipelines and analytics), remains consistently present but is the least represented of the non-AI groups (at around 7% in total), suggesting that the journal’s non-AI contributions are more often articles focusing on science, results, resources, and statistics than they are articles primarily focused on software engineering and tooling.
Notably, 2024 was a year of expansion, with a record volume of 127 articles. More importantly, however, it represented large-scale development rather than a thematic reorientation. Almost all research groups saw an increase in their absolute number in 2024, while their relative share remained comparable. Overall, BioMedInformatics has a stable, application-oriented AI core (A1/A2) and is progressively strengthening its methodological/XAI (A3) and omics (A4) AI poles. The journal maintains a solid non-AI base (N1–N3), which ensures pluralism rather than exclusivity.

6. Author Locations

The authors of these articles are affiliated with institutions in 60 countries, each of which makes a distinct contribution (see Figure 1A). Most publications are linked to a single country (72.4%), followed by two countries (20.3%) and three countries (5.8%). Only six articles involve four or more countries (1.8%).
Thirteen countries published more than 10 articles. The United States is the leading contributor, with 94 articles, nearly three times more than the second-largest contributor, Greece (34). Two other countries have more than 30 articles: the United Kingdom (32) and Canada (31); and two have more than 20: Germany (26) and Italy (23). Japan (18) is the leading Asian country, followed by Portugal (17) and France (14). Australia (13) is the leading Oceanian country, followed by Bangladesh (11), India (11), Spain (11), and South Korea (10). Morocco and the United Arab Emirates, with 8 articles each, are the main contributors from Africa and the Middle East. China published 6 articles, 4 countries are associated with 5 articles (Brazil (the only country from South America), Ireland, Malaysia, and Nigeria), 5 with 4 (Denmark, Egypt, Iraq, Saudi Arabia, and South Africa), 3 with 3 (Austria, Mexico and Pakistan), 11 with 2 (Belgium, Cyprus, Finland, Georgia, Jordan, Netherlands, Norway, Poland, Serbia, Sri Lanka, and Uganda), and 20 with only 1 (Albania, Armenia, Bosnia, Botswana, Cambodia, Democratic Republic of Congo, Czech Republic, Ghana, Indonesia, Israel, Lebanon, Nepal, New Zealand, Philippines, Romania, Switzerland, Thailand, Tunisia, Turkey, and Vietnam).

7. Citations

The next question, of course, concerns the influence of AI-related topics on citations. AlphaFold’s article [8] was so successful—with 45,188 citations according to Google Scholar and 27,959 according to Web of Science as of 20 January 2026—that it is a reasonable hypothesis that the subsequent explosion in the number of AI articles explains their very high citation counts. BLAST’s original article, published in 1990 [19], still has 129,919 citations according to Google Scholar and 79,986 according to Web of Science as of 20 January 2026. One difficulty with this type of analysis is choosing a database for citation evaluation, as different calculation methods can lead to substantial differences in results (the two examples above differ by more than 50%). There are also substantial differences in the performance of search systems, which limits their usefulness in systematic searches [20]. I therefore kept the values from Scopus, which are updated daily on the journal’s website, Scopus being a good compromise in terms of journal coverage and search quality [21]. The analysis focused on the period 2021–2024 (see Table 2 and Figure 2); the year 2025 is still too recent.
Figure 1B,C show the distribution of AI and non-AI publications. All of the major contributing countries (those with 10 or more publications) contributed to both AI and non-AI publications. The USA provided a balanced set of publications (49% AI and 51% non-AI). The country with the highest percentage of AI manuscripts was Portugal (71%), followed by South Korea (70%), Bangladesh (69%), Spain (64%) and the UK (63%). Canada had the highest proportion of non-AI manuscripts (58%), followed by India (55%). The other countries had very balanced sets. BioMedInformatics has an average publication rate of 14.8% for reviews. Japan has a rate of 28% (5 reviews), India of 27% (3 reviews), and Canada and Italy of 26% (8 and 6 reviews, respectively), followed by Greece (24%, 8 reviews). The USA published 10 reviews, representing only 11% of its total publications. France published only one review, and Germany and Spain none (out of 14, 26, and 11 publications, respectively).
Thus, 259 publications are considered. As seen in Figure 2A, the citation distribution follows an extreme value law, with peaks at over 100 citations (see Table 2A). On average, publications have 8.87 citations, with little difference between AI and non-AI publications (9.35 vs. 8.31 citations). The median citation numbers (Ncit) are 4 for all publications, 5 for AI publications, and 4 for non-AI publications, with the third quartile numbers being 9 for non-AI publications and 10.2 for AI publications.
For the period (see Figure 2B and Table 2B), 194 articles were published with a mean Ncit of 6.76 (median value of 4), while reviews were associated with a significantly higher Ncit (19.3, i.e., 3 times more) and a median Ncit value of 10 (2.5 times higher). The other categories are too diverse to be properly compared.
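The gap between the mean and median Ncit values reported above is characteristic of heavy-tailed citation distributions; a small Python illustration on synthetic counts (deliberately not the journal's actual data) shows how a few outliers inflate the mean while leaving the median untouched:

```python
import statistics

# Synthetic, illustrative citation counts: many lightly cited papers plus
# a few heavily cited outliers (NOT the journal's real data).
citations = [0, 1, 2, 2, 3, 4, 4, 5, 7, 9, 30, 110]

mean_ncit = statistics.mean(citations)      # dragged upwards by the tail
median_ncit = statistics.median(citations)  # robust to the tail

print(mean_ncit, median_ncit)  # 14.75 4.0
```

This is why the editorial reports both statistics: the median better describes the "typical" paper, while the mean is dominated by the handful of highly cited outliers.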
The slight difference observed between AI and non-AI publications is reflected in the articles and reviews within each category (see Table 2C). On average, AI articles are cited 7.6 times, compared to 5.6 times for non-AI articles. The difference is even more pronounced in reviews: the average Ncit is 22.3 for AI reviews versus 17.2 for non-AI reviews. While the overall difference is not extremely high, the difference for reviews is clear and significant (median Ncit value of 14 for AI reviews versus 8 for non-AI reviews). The full dataset is shown in Figure 2C, with a zoomed-in view in Figure 2D. An initial comparison between AI and non-AI reviews may be visually misleading due to the presence of one review with an extremely high citation count (127 citations), which distorts the violin plot. The zoomed-in view in Figure 2D enables a more accurate comparison.
The variations between countries are much greater than those between categories (see Figure S3 and Table S1 for an analysis of the 10 strongest contributors). The UK, Australia and Japan have articles with an Ncit greater than 10 (10.7 for 19 articles, 10.5 for 6 articles, and 10.2 for 13 articles, respectively), while Greece, France, and Portugal have an Ncit value of less than 5 (3.8 for 16 articles, 4.6 for 8 articles, and 4.7 for 11 articles, respectively). The imbalance is even more pronounced for reviews: only Canada and Australia have an Ncit value above 20 (29.9 and 20.5 for seven and two reviews, respectively). Meanwhile, Japan, Portugal, Italy, the United States, and Greece all have Ncit values below 15 (5.0 for 4 reviews, 6.3 for 3 reviews, 13.2 for 6 reviews, 14.9 for 8 reviews, and 14.9 for 7 reviews, respectively).

8. Some Striking Examples

From this large mass of quality articles, it is difficult to select specific examples. However, taking into account the impact of certain articles, their classification (which I have tried to make as precise as possible), and their countries of origin, some examples of interest follow.
The first AI example (A1) is the study by Muhammad Turab and Sonain Jamil entitled “A Comprehensive Survey of Digital Twins in Healthcare in the Era of the Metaverse” (Norway and South Korea) [22]. This review provides an overview of digital twins in healthcare, emphasising their integration into the metaverse. Rather than focusing on a specific clinical task, it reviews architectures, platforms, datasets, and enabling technologies. The study highlights the pivotal role of AI in data fusion, simulation, prediction and decision support for applications such as personalised medicine, telemedicine and virtual training. The study focuses on systemic technologies such as the Internet of Things, big data analytics, cloud/edge computing, augmented/virtual reality, and blockchain and addresses key datasets and cross-cutting challenges such as interoperability, data privacy, ethics, and validation. This comprehensive, application-oriented approach enables the implementation of large-scale health systems and corresponds to AI applied to medical images, signals, and complex health environments [22].
The second AI example (A2) is the study by Ahmed and coworkers entitled “Enhancing Brain Tumor Classification with Transfer Learning across Multiple Classes: An In-Depth Analysis” (Bangladesh, USA and UK) [23]. The article in question proposes a data-driven framework which applies machine learning models to structured clinical and biomedical data in order to predict patient outcomes and support clinical decision-making processes. Rather than focusing on analysing raw images or signals, the primary focus is on integrating patient-specific characteristics (such as clinical variables, biomarkers, or derived indicators) to perform risk stratification, prognostic estimation, or outcome prediction. The authors evaluate several predictive models, emphasising performance indicators relevant to clinical utility, such as accuracy, sensitivity, specificity and robustness, across cohorts. Particular attention is paid to the validation, interpretability and potential integration of the models into clinical workflows, with a focus on how the predictions could support clinicians rather than replace them. Overall, the main contributions of this article are its focus on clinical prediction, decision support, patient-level modelling, risk assessment and translational applicability [23].
The third AI example (A3) is the review by Ramalhete and coworkers entitled “Revolutionizing Kidney Transplantation: Connecting Machine Learning and Artificial Intelligence with Next-Generation Healthcare—From Algorithms to Allografts” (Portugal) [24]. This study examines the application of Artificial Intelligence and machine learning throughout the kidney transplantation process, with a focus on predictive modelling, risk stratification and decision support rather than a single clinical dataset or task. A wide range of modelling approaches are critically compared, including random forests, gradient boosting, neural networks, survival models, and deep learning, which are used for donor–recipient matching, graft rejection prediction, organ non-use decision-making, and long-term survival analysis. The performance, interpretability, robustness and clinical integration of the models are evaluated, with emphasis placed on metrics such as AUC-ROC, calibration, feature importance and external validation across different cohorts. The article also addresses methodological challenges such as data heterogeneity, class imbalance, bias, and explainability, as well as the need for transparent and ethically responsible AI frameworks in transplant medicine. The article encompasses predictive modelling, comparative model evaluation, risk stratification, interpretability, validation, decision support systems and AI governance in healthcare [24].
The fourth AI example (A4) is Lötsch et al.’s investigation entitled “Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients” (Germany) [25]. It focuses on XAI as a framework, discussing topics such as model transparency, interpretability and trustworthiness, as well as the differences between symbolic and sub-symbolic models and the evaluation of explanation strategies. It does not propose or validate a new clinical prediction model or biological discovery. Using illustrative examples (e.g., CART vs. SVM, random forests and LIME), the paper compares AI methods and explainability properties, emphasising evaluation practices, conceptual definitions and methodological trade-offs. With a cross-domain scope within biomedicine that addresses physicians, patients, and regulators, the paper’s methodology-centric orientation is further underlined, as opposed to focusing on a single medical use case [25]. Moreover, in a second paper, Lötsch and Ultsch expand on these ideas by proposing methodological advances that improve model interpretability and transparency, thereby reinforcing the foundations required for trustworthy AI-based decision support in biomedical applications [26].
The first non-AI example (N1) is “Application of Standardized Regression Coefficient in Meta-Analysis” by Pentti Nieminen (Finland) [27]. It provides a methodological review of the use of standardised regression coefficients as effect size measures to synthesise results from multivariate studies in meta-analyses, addressing inconsistencies in the presentation of statistical results in biomedical research. Professor Nieminen provides formal statistical formulations, conversion rules, and application examples to harmonise regression coefficients, correlations, and mean differences within a comparable framework. The approach is illustrated using longitudinal studies linking childhood BMI to carotid intima-media thickness in adulthood. Drawing on classical biostatistics, epidemiology, and research synthesis, this work prioritises rigour, interpretability, and reproducibility over algorithmic learning or predictive modelling [27]. Interestingly, in a subsequent editorial, Pentti Nieminen explicitly applies this perspective to predictive and AI-based models, arguing that interpretability, transparency, and “effect size”-type explanations are vital for clinical adoption and evidence-based medicine [17].
The second non-AI example (N2) is the article by Mayuri et al., “Identification of potent inhibitors of the FTO protein (a protein associated with fat mass and obesity) by hybrid procedures based on deep learning” (India) [28]. The article presents a large-scale (and classical) in silico screening framework centred on the systematic exploration of public chemical databases (ZINC) and experimentally solved protein structures (PDB) to identify potential FTO protein inhibitors. The study relies heavily on reusing, integrating, and comparatively evaluating existing biomedical resources, combining chemical libraries, structure repositories, and standardised simulation protocols (molecular docking, molecular dynamics, and MM-PBSA) within a reproducible computing pipeline. Its main contribution is best characterised by the keywords database-driven screening, IT infrastructure, public molecular resources, reproducible workflows, and resource-driven drug discovery. Interestingly, even though the title references deep learning, this corresponds only to the occasional use of one such tool among many classical methods [28].
The third non-AI example (N3) is the article by Katakis and coworkers entitled “Generation of Musculoskeletal Ultrasound Images with Diffusion Models” (Greece) [29]. This article presents a computational workflow for generating realistic musculoskeletal ultrasound images using diffusion models. The focus is on data augmentation, image quality assessment, and establishing reproducible evaluation protocols rather than clinical deployment or biological discovery. A significant part of the study involves the comparative evaluation and validation of the generated data using recognised quantitative metrics (PSNR, SSIM, LPIPS, and FID), histogram analyses, and feature-space visualisations, which underline the methodological rigour of the proposed pipeline [29].
The last non-AI example (N4) is the article by Bibbò and colleagues, entitled “AR Platform for Indoor Navigation: New Potential Approach Extensible to Older People with Cognitive Impairment” (Italy and India) [30]. This paper presents the design, development, and preliminary evaluation of a smartphone-based, augmented reality indoor navigation system. The system is intended to support older adults with mild cognitive impairment and Alzheimer’s disease by improving their ability to navigate independently in indoor environments. The primary contributions are technological and infrastructural, emphasising system architecture, usability, and the integration of existing platforms (Unity, Vuforia, and Matterport) and assistive workflows, rather than algorithmic innovation or data-driven learning. The applied digital health system encompasses assistive technology, augmented reality, indoor navigation, cognitive impairment, usability evaluation, Internet of Things (IoT)-enabled healthcare environments, and caregiver support [30].

9. Editorial Synthesis

BioMedInformatics (ISSN: 2673-7426) bridges the gap between medical science and new developments in bioinformatics in all its forms. This young journal had an excellent start under the editorship of Professor Jörn Lötsch (Goethe-Universität Frankfurt am Main, Germany) [1]. I succeeded him in March 2023 [18]. BioMedInformatics has been indexed in SCOPUS since mid-October 2023. It has a Scopus CiteScore of 3.4 (2025) and is Q1 in Health Professions (miscellaneous), Q2 in Medicine (miscellaneous), and Q2 in Computer Science (miscellaneous). The journal is particularly rigorous in evaluating submitted manuscripts, with a rejection rate of 81% in 2025.
The survey also underlined the specific problem of FAIR when medical data are used. FAIR stands for “Findable,” “Accessible,” “Interoperable,” and “Reusable” for both humans and machines [31]. “Findable” means having persistent identifiers (e.g., a DOI) and rich metadata and being indexed and queryable. “Accessible” means that the data and metadata can be retrieved via standard protocols, even if access is controlled. “Interoperable” means that the data and metadata use shared formats, vocabularies, and standards, enabling different systems to work together. “Reusable” means having clear licences, established provenance, and sufficient detail and quality to allow for proper use. FAIR does not mean “open”: data can comply with FAIR principles even if access is restricted (e.g., clinical data), provided that the metadata and access conditions are clear and standardised [32].
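As a purely hypothetical illustration of the four facets described above, a minimal metadata record might look as follows; every field name and value here is invented for illustration and does not follow any formal metadata schema:

```python
# Hypothetical minimal metadata record illustrating the four FAIR facets.
# All field names and values are illustrative only, not a formal standard.
record = {
    # Findable: persistent identifier and rich metadata
    "doi": "10.0000/example.0001",          # placeholder DOI
    "title": "Example clinical dataset",
    "keywords": ["ECG", "arrhythmia"],
    # Accessible: standard retrieval protocol, even if access is controlled
    "access_protocol": "https",
    "access_rights": "restricted",          # FAIR does not require 'open'
    # Interoperable: shared formats and vocabularies
    "format": "text/csv",
    "vocabulary": "SNOMED CT",
    # Reusable: licence and provenance
    "license": "CC-BY-4.0",
    "provenance": "collected 2023, anonymised per ethics approval",
}

# All four facets are covered even though access to the data is restricted.
facets = {"doi", "access_protocol", "format", "license"}
print(facets <= record.keys())  # True
```

The key point, mirrored in the `access_rights` field, is that restricted clinical data can still be FAIR as long as the metadata and access conditions are explicit and standardised.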
In BioMedInformatics, all studies are conducted in accordance with Ethics Committee guidelines (authors must provide official documentation). For most articles, the data are available in a public repository accessible via the provided URL. In other cases, the data remain available upon request to the corresponding author, or a cleaned dataset can be provided to interested parties. Data covered by a confidentiality agreement will not be released, even though this is not fully compliant with the FAIR principles. Similarly, the research software field is characterised by the widespread use of free and open-source software, as well as close ties to the broader open science movement. The well-established FAIR principles for research data must be adapted for research software under the acronym FAIR4RS. These principles are divided into five categories: Category 1—the development of the software according to standards and best practices; Category 2—the inclusion of metadata; Category 3—the provision of a licence; Category 4—the sharing of software in a repository; and Category 5—registration in a registry [32]. A significant number of articles do not provide the software component (Category 4), and authors and institutions should consider this from the outset of their submission [33]. This is as important for classical bioinformatics approaches as it is for AI approaches.
This short analysis has shown that AI papers cluster predominantly around imaging/signals and clinical prediction, with a steady increase in methodological and biomarker-oriented AI publications. However, and perhaps surprisingly for some researchers, non-AI papers show a strong foundational core in bioinformatics and data resources, complemented by sustained software, workflow, and clinical contributions. The parallel four-cluster structure highlights the balanced and complementary evolution of AI-driven innovation and non-AI foundational science. Typically, reviews have a deeper impact (in terms of citations), especially on AI topics, but on average, AI and non-AI papers are cited quite similarly. Thus, AI is becoming more essential year on year, but not at the expense of other approaches.
As is the case for most scientific journals, the journal’s readership is concentrated in a few countries, despite its wide geographical distribution. International collaboration is modest, with the majority of articles originating from a single country. There are marked national differences: some countries focus heavily on AI while others do so less; some countries publish more review articles; and average citations vary considerably from one country to another, reflecting differing local research policies.
I would like to take this opportunity to thank all the researchers who have placed their trust in BioMedInformatics since its inception. I would also like to thank all the editors, reviewers, and staff members at BioMedInformatics, without whom this success would not have been possible. This is only the beginning!

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/biomedinformatics6020010/s1: Figure S1. Distribution of the number of AI and non-AI publications by year; Figure S2. Distribution of the number of publications classified as article, review and other according to year; Figure S3. Violin plot distribution of the articles or reviews with their number of citations (for the USA, Greece, UK and Canada); Table S1. Citations per country.

Funding

This work was supported by the France 2030 program through the Idex Université Paris Cité (ANR-18-IDEX-0001_GREx).

Data Availability Statement

The raw data are provided in the Supplementary Materials (de_Brevern_2026_BioMedInformatics_sup.xls).

Acknowledgments

The author wishes to express sincere gratitude to Jörn Lötsch at the University of Frankfurt am Main, the former Editor-in-Chief, for his efforts during the founding and early stages of the journal. BioMedInformatics has made excellent progress in recent years, remaining on the track he set. Success would also be impossible without the day-to-day efforts of the editorial team at https://www.mdpi.com/journal/biomedinformatics/editors (accessed on 31 December 2025). Their dedication and dynamism have been instrumental in making BioMedInformatics what it is today. Thank you for everything.

Conflicts of Interest

The author is the Editor-in-Chief of BioMedInformatics. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Lötsch, J. BioMedInformatics: A new journal for the new decade to publish biomedical informatics research. BioMedInformatics 2021, 1, 1–5. [Google Scholar] [CrossRef]
  2. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
  3. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  4. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef] [PubMed]
  5. Bryson, A.E.; Ho, Y.-C. Applied Optimal Control: Optimization, Estimation, and Control; Blaisdell Pub. Co.: Waltham, MA, USA, 1969. [Google Scholar]
  6. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  7. Vapnik, V.; Golowich, S.E.; Smola, A. Support vector method for function approximation, regression estimation and signal processing. In Proceedings of the 10th International Conference on Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 1996; pp. 281–287. [Google Scholar]
  8. Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef]
  9. Tourlet, S.; Radjasandirane, R.; Diharce, J.; de Brevern, A.G. AlphaFold2 update and perspectives. BioMedInformatics 2023, 3, 378–390. [Google Scholar] [CrossRef]
  10. Aguéro-Pizzolo, S.; Bettler, E.; Gouet, P. Nobel Prize in Chemistry 2024: David Baker, Demis Hassabis and John M. Jumper. The revolution of artificial intelligence in structural biology. Med. Sci. 2025, 41, 367–373. [Google Scholar]
  11. de Brevern, A.G. Should we expect a second wave of AlphaFold misuse after the Nobel Prize? BioMedInformatics 2024, 4, 2306–2308. [Google Scholar] [CrossRef]
  12. The R Core Team. R: A Language and Environment for Statistical Computing; The R Core Team: Vienna, Austria, 2025. [Google Scholar]
  13. Wickham, H. Ggplot2: Elegant Graphics for Data Analysis; Springer: Cham, Switzerland, 2009; Volume VIII, p. 213. [Google Scholar]
  14. Lang, D.T. Rcartogram: Interface to Mark Newman’s Cartogram Software. 2020. Available online: https://github.com/omegahat/Rcartogram (accessed on 31 December 2025).
  15. Leung, C.K. Biomedical informatics: State of the art, challenges, and opportunities. BioMedInformatics 2024, 4, 89–97. [Google Scholar] [CrossRef]
  16. Gromiha, M.M.; Preethi, P.; Pandey, M. From code to cure: The impact of artificial intelligence in biomedical applications. BioMedInformatics 2024, 4, 542–548. [Google Scholar] [CrossRef]
  17. Nieminen, P. Research on the application and interpretability of predictive statistical data analysis methods in medicine. BioMedInformatics 2024, 4, 321–325. [Google Scholar] [CrossRef]
  18. de Brevern, A.G. BioMedInformatics, the link between biomedical informatics, biology and computational medicine. BioMedInformatics 2024, 4, 1–7. [Google Scholar] [CrossRef]
  19. Altschul, S.F.; Gish, W.; Miller, W.; Myers, E.W.; Lipman, D.J. Basic local alignment search tool. J. Mol. Biol. 1990, 215, 403–410. [Google Scholar] [CrossRef]
  20. Gusenbauer, M.; Haddaway, N.R. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res. Synth. Methods 2020, 11, 181–217. [Google Scholar] [CrossRef] [PubMed]
  21. Mongeon, P.; Paul-Hus, A. The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics 2016, 106, 213–228. [Google Scholar] [CrossRef]
  22. Turab, M.; Jamil, S. A comprehensive survey of digital twins in healthcare in the era of metaverse. BioMedInformatics 2023, 3, 563–584. [Google Scholar] [CrossRef]
  23. Ahmmed, S.; Podder, P.; Mondal, M.R.H.; Rahman, S.M.A.; Kannan, S.; Hasan, M.J.; Rohan, A.; Prosvirin, A.E. Enhancing brain tumor classification with transfer learning across multiple classes: An in-depth analysis. BioMedInformatics 2023, 3, 1124–1144. [Google Scholar] [CrossRef]
  24. Ramalhete, L.; Almeida, P.; Ferreira, R.; Abade, O.; Teixeira, C.; Araújo, R. Revolutionizing kidney transplantation: Connecting machine learning and artificial intelligence with next-generation healthcare—From algorithms to allografts. BioMedInformatics 2024, 4, 673–689. [Google Scholar] [CrossRef]
  25. Lötsch, J.; Kringel, D.; Ultsch, A. Explainable artificial intelligence (XAI) in biomedicine: Making AI decisions trustworthy for physicians and patients. BioMedInformatics 2022, 2, 1–17. [Google Scholar] [CrossRef]
  26. Lötsch, J.; Ultsch, A. Enhancing explainable machine learning by reconsidering initially unselected items in feature selection for classification. BioMedInformatics 2022, 2, 701–714. [Google Scholar] [CrossRef]
  27. Nieminen, P. Application of standardized regression coefficient in meta-analysis. BioMedInformatics 2022, 2, 434–458. [Google Scholar] [CrossRef]
  28. Mayuri, K.; Varalakshmi, D.; Tharaheswari, M.; Somala, C.S.; Priya, S.S.; Bharathkumar, N.; Senthil, R.; Kushwah, R.B.S.; Vickram, S.; Anand, T.; et al. Identifying potent fat mass and obesity-associated protein inhibitors using deep learning-based hybrid procedures. BioMedInformatics 2024, 4, 347–359. [Google Scholar]
  29. Katakis, S.; Barotsis, N.; Kakotaritis, A.; Tsiganos, P.; Economou, G.; Panagiotopoulos, E.; Panayiotakis, G. Generation of musculoskeletal ultrasound images with diffusion models. BioMedInformatics 2023, 3, 405–421. [Google Scholar] [CrossRef]
  30. Bibbò, L.; Bramanti, A.; Sharma, J.; Cotroneo, F. Ar platform for indoor navigation: New potential approach extensible to older people with cognitive impairment. BioMedInformatics 2024, 4, 1589–1619. [Google Scholar] [CrossRef]
  31. Wilkinson, M.D.; Dumontier, M.; Aalbersberg, I.J.; Appleton, G.; Axton, M.; Baak, A.; Blomberg, N.; Boiten, J.W.; da Silva Santos, L.B.; Bourne, P.E.; et al. The FAIR guiding principles for scientific data management and stewardship. Sci. Data 2016, 3, 160018. [Google Scholar] [CrossRef]
  32. Patel, B.; Soundarajan, S.; Ménager, H.; Hu, Z. Making biomedical research software FAIR: Actionable step-by-step guidelines with a user-support tool. Sci. Data 2023, 10, 557. [Google Scholar] [CrossRef] [PubMed]
  33. Jensen, E.A.; Katz, D.S. Awareness of FAIR and FAIR4RS among international research software funders. Sci. Data 2025, 12, 627. [Google Scholar] [CrossRef]
Figure 1. Distribution of the number of publications in BioMedInformatics, (A) by country, (B) for AI publications, and (C) for non-AI publications. The figures were created using the R 4.5.2 [12] software package (http://CRAN.R-project.org/, accessed on 31 December 2025) and the libraries “ggplot2” [13] (https://cran.r-project.org/package=ggplot2, accessed on 1 December 2025) and “Rcartogram” [14] (https://github.com/omegahat/Rcartogram, accessed on 1 December 2025).
Figure 2. Distribution of citations in BioMedInformatics for (A) AI and non-AI publications and (B) articles, reviews, and other publications. A zoom is applied between 0 and 50 citations for clarity. (C) Violin plot for AI and non-AI publications and for articles and reviews. (D) Zoom between 0 and 50 citations for clarity. The figures were created using the R [12] software package and the library “ggplot2” [13].
Table 1. Analysis of articles published in BioMedInformatics: (A) Distribution of standard articles, reviews, and other types of articles. (B) Distribution of AI and non-AI articles (four clusters each). A1: Medical Imaging & Signal-Based Diagnosis (imaging (radiology, pathology, dermatology) and physiological signals (e.g., ECG, EEG, wearables)); A2: Clinical Prediction & Decision Support (prognosis, risk stratification, outcome prediction, AI-driven clinical decision support systems); A3: Methodological AI & Explainability (model development, benchmarking, explainable AI, method-centric rather than disease-centric studies); A4: AI for Biology & Biomarker Discovery (omics-driven AI, biomarker identification, biological interpretation using ML/DL); N1: Classical Bioinformatics & Computational Biology (sequence, structure, evolutionary and mechanistic studies); N2: Databases, Resources & FAIR Infrastructure (data resources, databases, standards, FAIR compliance); N3: Statistical, Mathematical & Methodological Studies (classical statistics, modelling, non-AI signal processing); and N4: Software Tools, Workflows & Clinical Studies (software, pipelines, reproducibility, non-AI biomedical and clinical analyses).
                2021          2022          2023          2024          2025          Sum
             occ.   (%)    occ.   (%)    occ.   (%)    occ.   (%)    occ.   (%)    occ.   (%)
A.
Articles      11  84.62     38  77.55     48  68.57     97  76.38     67  93.06    261  78.85
Reviews        1   7.69      8  16.33     12  17.14     23  18.11      5   6.94     49  14.80
Others         1   7.69      3   6.12     10  14.29      7   5.51      0   0.00     21   6.34
B.
A1             6  46.15     13  26.53     16  22.86     28  22.05     17  23.61     80  24.17
A2             2  15.38      8  16.33     10  14.29     22  17.32     11  15.28     53  16.01
A3             0   0.00      1   2.04      5   7.14     10   7.87      6   8.33     22   6.65
A4             0   0.00      6  12.24      4   5.71      9   7.09      6   8.33     25   7.55
N1             2  15.38      7  14.29     10  14.29     17  13.39      9  12.50     45  13.60
N2             1   7.69      6  12.24     10  14.29     18  14.17      8  11.11     43  12.99
N3             1   7.69      5  10.20     10  14.29     15  11.81      9  12.50     40  12.08
N4             1   7.69      3   6.12      5   7.14      8   6.30      6   8.33     23   6.95
AI             8  61.54     28  57.14     35  50.00     69  54.33     40  55.56    180  54.38
Non-AI         5  38.46     21  42.86     35  50.00     58  45.67     32  44.44    151  45.62
Sum           13            49            70           127            72           331
Table 2. Analysis of citations from BioMedInformatics publications, including the mean value, the quartiles Q1 and Q3, and therefore the median: (A) for AI and non-AI; (B) for the categories: articles, reviews, and others, (C) including the subdivision into AI and non-AI.
Type                 Occurrence    Mean    Q1    Median    Q3
A.
All                     259        8.87    2        4       9
AI                      140        9.35    2        5      10.2
non-AI                  119        8.31    2        4       9
B.
Articles                194        6.76    2        4       8
Reviews                  44       19.3     4       10      16.8
Others                   21        6.57    1        3      10
C.
AI       Articles       109        7.64    2        4       8
AI       Reviews         18       22.3     4.75    14      23.5
non-AI   Articles        85        5.62    2        3       7
non-AI   Reviews         26       17.2     4        8      13
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
