Search Results (104)

Search Parameters:
Authors = Takahiro Ogawa

22 pages, 1912 KB  
Article
Privacy-Aware Continual Self-Supervised Learning on Multi-Window Chest Computed Tomography for Domain-Shift Robustness
by Ren Tasai, Guang Li, Ren Togo, Takahiro Ogawa, Kenji Hirata, Minghui Tang, Takaaki Yoshimura, Hiroyuki Sugimori, Noriko Nishioka, Yukie Shimizu, Kohsuke Kudo and Miki Haseyama
Bioengineering 2026, 13(1), 32; https://doi.org/10.3390/bioengineering13010032 - 27 Dec 2025
Viewed by 260
Abstract
We propose a novel continual self-supervised learning (CSSL) framework for simultaneously learning diverse features from multi-window-obtained chest computed tomography (CT) images and ensuring data privacy. Achieving a robust and highly generalizable model in medical image diagnosis is challenging, mainly because of issues such as the scarcity of large-scale, accurately annotated datasets and domain shifts inherent to dynamic healthcare environments. Specifically, in chest CT, these domain shifts often arise from differences in window settings, which are optimized for distinct clinical purposes. Previous CSSL frameworks often mitigated domain shift by reusing past data, a typically impractical approach owing to privacy constraints. Our approach addresses these challenges by effectively capturing the relationship between previously learned knowledge and new information across different training stages through continual pretraining on unlabeled images. Specifically, by incorporating a latent replay-based mechanism into CSSL, our method mitigates catastrophic forgetting due to domain shifts during continual pretraining while ensuring data privacy. Additionally, we introduce a feature distillation technique that integrates Wasserstein distance-based knowledge distillation and batch-knowledge ensemble, enhancing the ability of the model to learn meaningful, domain-shift-robust representations. Finally, we validate our approach using chest CT images obtained across two different window settings, demonstrating superior performance compared with other approaches.
(This article belongs to the Special Issue Modern Medical Imaging in Disease Diagnosis Applications)
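The Wasserstein distance-based feature distillation mentioned in the abstract can be sketched roughly as follows. This is an illustrative approximation only (per-dimension 1-D distances between batch features, with hypothetical function names), not the paper's implementation.

```python
import numpy as np

def wasserstein_1d(a, b):
    # 1-D Wasserstein distance between two equal-size samples:
    # the mean absolute difference of their sorted values.
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

def feature_distillation_loss(student, teacher):
    # Average the per-dimension 1-D Wasserstein distances between the
    # student's and teacher's batch feature distributions (batch x dim).
    dims = student.shape[1]
    return sum(wasserstein_1d(student[:, d], teacher[:, d])
               for d in range(dims)) / dims
```

A distillation term of this shape penalizes distributional drift of features across training stages without requiring stored raw images, which matches the privacy motivation above.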

26 pages, 13544 KB  
Article
GeoJapan Fusion Framework: A Large Multimodal Model for Regional Remote Sensing Recognition
by Yaozong Gan, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa and Miki Haseyama
Remote Sens. 2025, 17(17), 3044; https://doi.org/10.3390/rs17173044 - 1 Sep 2025
Viewed by 1656
Abstract
Recent advances in large multimodal models (LMMs) have opened new opportunities for multitask recognition from remote sensing images. However, existing approaches still face challenges in effectively recognizing the complex geospatial characteristics of regions such as Japan, whose location along the seismic belt leads to highly diverse urban environments and cityscapes that differ from those in other regions. To overcome these challenges, we propose the GeoJapan Fusion Framework (GFF), a multimodal architecture that integrates a large language model (LLM) and a vision–language model (VLM) and strengthens multimodal alignment through an in-context learning mechanism to support multitask recognition for Japanese remote sensing images. The GFF also incorporates a cross-modal feature fusion mechanism with low-rank adaptation (LoRA) to enhance representation alignment and enable efficient model adaptation. To facilitate the construction of the GFF, we construct the GeoJapan dataset, which comprises a substantial collection of high-quality Japanese remote sensing images designed to facilitate multitask recognition using LMMs. We conducted extensive experiments comparing our method with state-of-the-art LMMs. The experimental results show that the GFF outperforms previous approaches across multiple tasks, demonstrating its promise for multimodal multitask remote sensing recognition.
(This article belongs to the Special Issue Remote Sensing Image Classification: Theory and Application)
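The low-rank adaptation (LoRA) component mentioned above can be illustrated with a minimal sketch: a frozen weight matrix is augmented with a trainable low-rank update B @ A scaled by alpha / r. The class name, initialization, and shapes here are illustrative assumptions, not the GFF code.

```python
import numpy as np

class LoRALinear:
    """Sketch of LoRA: W_eff = W + (alpha / r) * B @ A,
    with A of shape (r, d_in) and B of shape (d_out, r)."""

    def __init__(self, weight, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.weight = weight  # frozen base weight, shape (d_out, d_in)
        self.A = rng.normal(0.0, 0.01, (r, weight.shape[1]))  # trainable
        self.B = np.zeros((weight.shape[0], r))                # trainable, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        # Base projection plus the low-rank update; because B starts at
        # zero, the adapted layer initially matches the frozen layer.
        return x @ (self.weight + self.scale * self.B @ self.A).T
```

Only A and B (2 * r * d parameters per layer rather than d * d) would be updated during fine-tuning, which is the efficiency argument for LoRA-style adaptation.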

12 pages, 937 KB  
Article
Anti-Bacterial, Anti-Viral, and Anti-Inflammatory Properties of Kumazasa Extract: A Potential Strategy to Regulate Smoldering and Inflammation
by Hideki Iwasaki, Shirol Gulam, Tomoji Maeda, Mineo Watanabe, Tokuko Takajo, Soh Katsuyama, Hiroaki Sano, Takanari Tominaga, Akio Ogawa, Ken-ichi Sako, Toru Takahashi, Takahiro Kawase, Takamitsu Tsukahara and Yoshikazu Matsuda
Medicina 2025, 61(9), 1511; https://doi.org/10.3390/medicina61091511 - 22 Aug 2025
Viewed by 933
Abstract
Background and Objectives: Kumazasa extract (KZExt) is a food product obtained by steam extraction of Kumazasa (Sasa senanensis and Sasa kurilensis) leaves under high temperature and pressure. It contains abundant polyphenols, including trans-p-coumaric acid and ferulic acid, as well as xylooligosaccharides. In this study, we investigated the antibacterial, anti-viral, and anti-inflammatory effects of KZExt in vitro and in vivo. Materials and Methods: The anti-oxidant, antibacterial, and anti-viral effects of KZExt were assessed in vitro. Anti-oxidant activity was evaluated based on the scavenging of •OH, •O2, and 1O2. Antibacterial effects were assessed by determining the minimum inhibitory concentration (MIC) using a microdilution method. Anti-influenza activity was measured via plaque formation in MDCK cells. Anti-inflammatory effects were assessed by measuring interleukin (IL)-1β inhibition in lipopolysaccharide (LPS)-stimulated RAW264.7 cells. In vivo, KZExt was administered once (30 min before formalin injection) in a formalin-induced inflammation model to evaluate pain-related behavior. In the LPS-induced inflammation model, KZExt was administered for five days before LPS injection. Behavioral changes and cytokine levels were assessed 24 h later via the open field test and cytokine quantification. Results: In vitro, KZExt showed antibacterial, anti-influenza, and anti-oxidant effects, and suppressed LPS-induced IL-1β production. In vivo, it significantly reduced the second phase of formalin-induced pain behavior. In the LPS model, although behavioral changes were unaffected, KZExt suppressed IL-6 and interferon-γ production. Conclusions: The antibacterial, anti-viral, and anti-inflammatory effects of KZExt were confirmed in vitro and in vivo. Notably, the anti-inflammatory effect suggests potential immunomodulatory activity. These findings indicate that KZExt may help suppress smoldering inflammation and inflammation associated with various diseases through its combined antibacterial, anti-viral, and immunomodulatory actions.
(This article belongs to the Section Pharmacology)

18 pages, 3542 KB  
Article
Analysis of Model Merging Methods for Continual Updating of Foundation Models in Distributed Data Settings
by Kenta Kubota, Ren Togo, Keisuke Maeda, Takahiro Ogawa and Miki Haseyama
Appl. Sci. 2025, 15(9), 5196; https://doi.org/10.3390/app15095196 - 7 May 2025
Viewed by 2685
Abstract
Foundation models have achieved remarkable success across various domains, but still face critical challenges such as limited data availability, high computational requirements, and rapid knowledge obsolescence. To address these issues, we propose a novel framework that integrates model merging with federated learning to enable continual foundation model updates without centralizing sensitive data. In this framework, each client fine-tunes a local model, and the server merges these models using multiple merging strategies. We experimentally evaluate the effectiveness of these methods using the CLIP model for image classification tasks across diverse datasets. The results demonstrate that advanced merging methods can surpass simple averaging in terms of accuracy, although they introduce challenges such as catastrophic forgetting and sensitivity to hyperparameters. This study defines a realistic and practical problem setting for decentralized foundation model updates, and provides a comparative analysis of merging techniques, offering valuable insights for scalable and privacy-preserving model evolution in dynamic environments.
(This article belongs to the Section Computing and Artificial Intelligence)
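The simple-averaging baseline that the advanced merging strategies are compared against can be sketched as follows; the dictionary-of-parameters representation and function name are illustrative assumptions.

```python
def merge_models(client_states, weights=None):
    # Parameter-space merging of client models: simple averaging by
    # default, or a weighted average when `weights` is given. Each
    # client state is a dict mapping parameter names to values.
    n = len(client_states)
    if weights is None:
        weights = [1.0 / n] * n
    return {name: sum(w * s[name] for w, s in zip(weights, client_states))
            for name in client_states[0]}
```

The server would call this on the fine-tuned client parameters each round; only model weights, never raw data, leave the clients, which is the privacy-preserving aspect described above.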

19 pages, 17459 KB  
Article
Enhancing Adversarial Defense via Brain Activity Integration Without Adversarial Examples
by Tasuku Nakajima, Keisuke Maeda, Ren Togo, Takahiro Ogawa and Miki Haseyama
Sensors 2025, 25(9), 2736; https://doi.org/10.3390/s25092736 - 25 Apr 2025
Viewed by 1067
Abstract
Adversarial attacks on large-scale vision–language foundation models, such as the contrastive language–image pretraining (CLIP) model, can significantly degrade performance across various tasks by generating adversarial examples that are indistinguishable from the original images to human perception. Although adversarial training methods, which train models with adversarial examples, have been proposed to defend against such attacks, they typically require prior knowledge of the attack. These methods also lead to a trade-off between robustness to adversarial examples and accuracy for clean images. To address these challenges, we propose an adversarial defense method based on human brain activity data by hypothesizing that such adversarial examples are not misrecognized by humans. The proposed method employs an encoder that integrates the features of brain activity and augmented images from the original images. Then, by maximizing the similarity between features predicted by the encoder and the original visual features, we obtain features with the visual invariance of the human brain and the diversity of data augmentation. Consequently, we construct a model that is robust against adversarial attacks and maintains accuracy for clean images. Unlike existing methods, the proposed method is not trained on any specific adversarial attack information; thus, it is robust against unknown attacks. Extensive experiments demonstrate that the proposed method significantly enhances robustness to adversarial attacks on the CLIP model without degrading accuracy for clean images. The primary contribution of this study is that the performance trade-off can be overcome using brain activity data.
(This article belongs to the Section Biomedical Sensors)
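The similarity-maximization objective described above can be illustrated with a minimal sketch, assuming cosine similarity as the similarity measure; the paper's exact objective and function names may differ.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def alignment_loss(predicted, original):
    # Maximizing feature similarity is equivalent to minimizing
    # 1 - cosine similarity between the encoder's predicted features
    # and the original visual features.
    return 1.0 - cosine_similarity(predicted, original)
```

Minimizing such a loss pulls the encoder's brain-activity-informed features toward the original visual features, which is the mechanism by which clean-image accuracy would be preserved.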

21 pages, 28197 KB  
Article
Expert Comment Generation Considering Sports Skill Level Using a Large Multimodal Model with Video and Spatial-Temporal Motion Features
by Tatsuki Seino, Naoki Saito, Takahiro Ogawa, Satoshi Asamizu and Miki Haseyama
Sensors 2025, 25(2), 447; https://doi.org/10.3390/s25020447 - 14 Jan 2025
Cited by 2 | Viewed by 2300
Abstract
In sports training, personalized skill assessment and feedback are crucial for athletes to master complex movements and improve performance. However, existing research on skill transfer predominantly focuses on skill evaluation through video analysis, addressing only a single facet of the multifaceted process required for skill acquisition. Furthermore, in the limited studies that generate expert comments, the learner’s skill level is predetermined, and the spatial-temporal information of human movement is often overlooked. To address this issue, we propose a novel approach to generate skill-level-aware expert comments by leveraging a Large Multimodal Model (LMM) and spatial-temporal motion features. Our method employs a Spatial-Temporal Attention Graph Convolutional Network (STA-GCN) to extract motion features that encapsulate the spatial-temporal dynamics of human movement. The STA-GCN classifies skill levels based on these motion features. The classified skill levels, along with the extracted motion features (intermediate features from the STA-GCN) and the original sports video, are then fed into the LMM. This integration enables the generation of detailed, context-specific expert comments that offer actionable insights for performance improvement. Our contributions are twofold: (1) We incorporate skill level classification results as inputs to the LMM, ensuring that feedback is appropriately tailored to the learner’s skill level; and (2) We integrate motion features that capture spatial-temporal information into the LMM, enhancing its ability to generate feedback based on the learner’s specific actions. Experimental results demonstrate that the proposed method effectively generates expert comments, overcoming the limitations of existing methods and offering valuable guidance for athletes across various skill levels.
(This article belongs to the Section Intelligent Sensors)

17 pages, 7993 KB  
Article
Multimodal Shot Prediction Based on Spatial-Temporal Interaction between Players in Soccer Videos
by Ryota Goka, Yuya Moroto, Keisuke Maeda, Takahiro Ogawa and Miki Haseyama
Appl. Sci. 2024, 14(11), 4847; https://doi.org/10.3390/app14114847 - 3 Jun 2024
Cited by 5 | Viewed by 2992
Abstract
Sports data analysis has significantly advanced and become an indispensable technology for planning strategy and enhancing competitiveness. In soccer, shot prediction has been realized on the basis of historical match situations, and its results contribute to the evaluation of plays and team tactics. However, traditional event prediction methods require tracking data acquired with expensive instrumentation and event stream data annotated by experts, and their benefits have been limited to only a few professional athletes. To tackle this problem, we propose a novel shot prediction method using soccer videos. Our method constructs a graph considering player relationships with audio and visual features as graph nodes. Specifically, by introducing players’ importance into the graph edges based on their field positions and team information, our method enables the utilization of knowledge that reflects the detailed match situation. Next, we extract latent features considering spatial–temporal interactions from the graph and predict event occurrences with uncertainty based on a probabilistic deep learning method. In comparisons with several baseline methods and ablation studies using professional soccer match data, our method was confirmed to be effective, demonstrating the highest average precision of 0.948 and surpassing the other methods.
(This article belongs to the Collection Computer Science in Sport)

19 pages, 5522 KB  
Article
Multimodal Transformer Model Using Time-Series Data to Classify Winter Road Surface Conditions
by Yuya Moroto, Keisuke Maeda, Ren Togo, Takahiro Ogawa and Miki Haseyama
Sensors 2024, 24(11), 3440; https://doi.org/10.3390/s24113440 - 27 May 2024
Cited by 8 | Viewed by 6207
Abstract
This paper proposes a multimodal Transformer model that uses time-series data to detect and predict winter road surface conditions. For detecting or predicting road surface conditions, previous approaches focus on the cooperative use of multiple modalities as inputs, e.g., images captured by fixed-point cameras (road surface images) and auxiliary data related to road surface conditions, under simple modality integration. Although such an approach achieves performance improvement compared to methods using only images or auxiliary data, there is a demand for further consideration of how to integrate heterogeneous modalities. The proposed method realizes a more effective modality integration using a cross-attention mechanism and time-series processing. Concretely, when integrating multiple modalities, feature compensation through mutual complementation between modalities is realized through a feature integration technique based on a cross-attention mechanism, and the representational ability of the integrated features is enhanced. In addition, by introducing time-series processing for the input data across several timesteps, it is possible to consider the temporal changes in the road surface conditions. Experiments are conducted for both detection and prediction tasks using data corresponding to the current winter condition and data corresponding to a few hours after the current winter condition, respectively. The experimental results verify the effectiveness of the proposed method for both tasks. In addition to the construction of the classification model for winter road surface conditions, we make a first attempt to visualize the classification results, especially the prediction results, through an image style transfer model as supplemental extended experiments on image generation at the end of the paper.
(This article belongs to the Special Issue Deep Learning for Information Fusion and Pattern Recognition)
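The cross-attention integration step can be sketched in a minimal single-head form. The feature shapes and the unprojected queries/keys/values are simplifying assumptions for illustration, not the proposed model's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # Modality A (queries, e.g. road surface image features) attends to
    # modality B (keys/values, e.g. auxiliary sensor features); each
    # output row is a convex combination of modality B's feature vectors.
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values
```

Running this in both directions (images attending to auxiliary data and vice versa) is one common way such mutual complementation between modalities is realized.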

17 pages, 4996 KB  
Article
Trial Analysis of Brain Activity Information for the Presymptomatic Disease Detection of Rheumatoid Arthritis
by Keisuke Maeda, Takahiro Ogawa, Tasuku Kayama, Takuya Sasaki, Kazuki Tainaka, Masaaki Murakami and Miki Haseyama
Bioengineering 2024, 11(6), 523; https://doi.org/10.3390/bioengineering11060523 - 21 May 2024
Viewed by 1471
Abstract
This study presents a trial analysis that uses brain activity information obtained from mice to detect rheumatoid arthritis (RA) in its presymptomatic stages. Specifically, we confirmed that F759 mice, serving as a mouse model of RA that is dependent on the inflammatory cytokine IL-6, and healthy wild-type mice can be classified on the basis of brain activity information. We clarified which brain regions are useful for the presymptomatic detection of RA. We introduced a matrix completion-based approach to handle missing brain activity information to perform the aforementioned analysis. In addition, we implemented a canonical correlation-based method capable of analyzing the relationship between various types of brain activity information. This method allowed us to accurately classify F759 and wild-type mice, thereby identifying essential features, including crucial brain regions, for the presymptomatic detection of RA. Our experiment obtained brain activity information from 15 F759 and 10 wild-type mice and analyzed the acquired data. By employing four types of classifiers, our experimental results show that the thalamus and periaqueductal gray are effective for the classification task. Furthermore, we confirmed that classification performance was maximized when seven brain regions were used, excluding the electromyogram and nucleus accumbens.
(This article belongs to the Section Biosignal Processing)

19 pages, 44215 KB  
Article
Algal Bed Region Segmentation Based on a ViT Adapter Using Aerial Images for Estimating CO2 Absorption Capacity
by Guang Li, Ren Togo, Keisuke Maeda, Akinori Sako, Isao Yamauchi, Tetsuya Hayakawa, Shigeyuki Nakamae, Takahiro Ogawa and Miki Haseyama
Remote Sens. 2024, 16(10), 1742; https://doi.org/10.3390/rs16101742 - 14 May 2024
Cited by 2 | Viewed by 1987
Abstract
Accurately determining the carbon dioxide absorption capacity of coastal algae requires measurements of algal bed regions. However, conventional manual measurement methods are resource-intensive and time-consuming, which hinders the advancement of the field. To solve these problems, we propose a novel method for automatic algal bed region segmentation using aerial images. In our method, we use an advanced semantic segmentation model, a ViT adapter, and adapt it to aerial images for algal bed region segmentation. Our method demonstrates high accuracy in identifying algal bed regions in an aerial image dataset collected from Hokkaido, Japan. The experimental results for five different ecological regions show that the mean intersection over union (mIoU) and mean F-score of our method in the validation set reach 0.787 and 0.870, the IoU and F-score for the background region are 0.957 and 0.978, and the IoU and F-score for the algal bed region are 0.616 and 0.762, respectively. In particular, the mean recognition area compared with the ground truth area annotated manually is 0.861. Our study contributes to the advancement of blue carbon assessment by introducing a novel semantic segmentation-based method for identifying algal bed regions using aerial images.
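The IoU and F-score figures reported above follow their standard definitions for binary masks, which a short helper makes concrete; this is the textbook computation, not code from the study.

```python
import numpy as np

def iou_and_fscore(pred, gt):
    # Region IoU and F-score (Dice) for a pair of binary masks, the
    # per-class metrics reported for the background and algal bed regions.
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union
    fscore = 2 * inter / (pred.sum() + gt.sum())
    return float(iou), float(fscore)
```

Averaging these per-class values over the background and algal bed classes yields the mIoU and mean F-score quoted in the abstract.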

18 pages, 7280 KB  
Article
Analysis of Continual Learning Techniques for Image Generative Models with Learned Class Information Management
by Taro Togo, Ren Togo, Keisuke Maeda, Takahiro Ogawa and Miki Haseyama
Sensors 2024, 24(10), 3087; https://doi.org/10.3390/s24103087 - 13 May 2024
Cited by 1 | Viewed by 2516
Abstract
The advancements in deep learning have significantly enhanced the capability of image generation models to produce images aligned with human intentions. However, training and adapting these models to new data and tasks remain challenging because of their complexity and the risk of catastrophic forgetting. This study proposes a method for addressing these challenges involving the application of class-replacement techniques within a continual learning framework. This method utilizes selective amnesia (SA) to efficiently replace existing classes with new ones while retaining crucial information. This approach improves the model’s adaptability to evolving data environments while preventing the loss of past information. We conducted a detailed evaluation of class-replacement techniques, examining their impact on the “class incremental learning” performance of models and exploring their applicability in various scenarios. The experimental results demonstrated that our proposed method could enhance the learning efficiency and long-term performance of image generation models. This study broadens the application scope of image generation technology and supports the continual improvement and adaptability of corresponding models.
(This article belongs to the Section Sensing and Imaging)

18 pages, 16066 KB  
Article
A Novel Frame-Selection Metric for Video Inpainting to Enhance Urban Feature Extraction
by Yuhu Feng, Jiahuan Zhang, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa and Miki Haseyama
Sensors 2024, 24(10), 3035; https://doi.org/10.3390/s24103035 - 10 May 2024
Cited by 1 | Viewed by 2523
Abstract
In our digitally driven society, advances in software and hardware to capture video data allow extensive gathering and analysis of large datasets. This has stimulated interest in extracting information from video data, such as buildings and urban streets, to enhance understanding of the environment. Urban buildings and streets, as essential parts of cities, carry valuable information relevant to daily life. Extracting features from these elements and integrating them with technologies such as VR and AR can contribute to more intelligent and personalized urban public services. Despite its potential benefits, collecting videos of urban environments introduces challenges because of the presence of dynamic objects. The varying shape of the target building in each frame necessitates careful selection to ensure the extraction of quality features. To address this problem, we propose a novel evaluation metric that considers the video-inpainting-restoration quality and the relevance of the target object, namely, minimizing areas with cars, maximizing areas with the target building, and minimizing overlapping areas. This metric extends existing video-inpainting-evaluation metrics by considering the relevance of the target object and the interconnectivity between objects. We conducted experiments to validate the proposed metric using real-world datasets from the Japanese cities of Sapporo and Yokohama. The experimental results demonstrate the feasibility of selecting video frames conducive to building feature extraction.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

20 pages, 32920 KB  
Article
Expert–Novice Level Classification Using Graph Convolutional Network Introducing Confidence-Aware Node-Level Attention Mechanism
by Tatsuki Seino, Naoki Saito, Takahiro Ogawa, Satoshi Asamizu and Miki Haseyama
Sensors 2024, 24(10), 3033; https://doi.org/10.3390/s24103033 - 10 May 2024
Cited by 3 | Viewed by 1662
Abstract
In this study, we propose a classification method of expert–novice levels using a graph convolutional network (GCN) with a confidence-aware node-level attention mechanism. In classification using an attention mechanism, highlighted features may not be significant for accurate classification, thereby degrading classification performance. To address this issue, the proposed method introduces a confidence-aware node-level attention mechanism into a spatiotemporal attention GCN (STA-GCN) for the classification of expert–novice levels. Consequently, our method can contrast the attention value of each node on the basis of the confidence measure of the classification, which solves the problem of classification approaches using attention mechanisms and realizes accurate classification. Furthermore, because the expert–novice levels have ordinalities, using a classification model that considers ordinalities improves the classification performance. The proposed method involves a model that minimizes a loss function that considers the ordinalities of classes to be classified. By implementing the above approaches, the expert–novice level classification performance is improved.
(This article belongs to the Section Intelligent Sensors)

19 pages, 2731 KB  
Article
Krüppel-like Factor-4-Mediated Macrophage Polarization and Phenotypic Transitions Drive Intestinal Fibrosis in THP-1 Monocyte Models In Vitro
by Takuya Kanno, Takahito Katano, Takaya Shimura, Mamoru Tanaka, Hirotada Nishie, Shigeki Fukusada, Keiji Ozeki, Isamu Ogawa, Takahiro Iwao, Tamihide Matsunaga and Hiromi Kataoka
Medicina 2024, 60(5), 713; https://doi.org/10.3390/medicina60050713 - 26 Apr 2024
Cited by 2 | Viewed by 2876
Abstract
Background and Objectives: Despite the fact that biologic drugs have transformed inflammatory bowel disease (IBD) treatment, addressing fibrosis-related strictures remains a research gap. This study explored the roles of cytokines, macrophages, and Krüppel-like factors (KLFs), specifically KLF4, in intestinal fibrosis, as well as the interplay of KLF4 with various gut components. Materials and Methods: This study examined macrophage subtypes, their KLF4 expression, and the effects of KLF4 knockdown on macrophage polarization and cytokine expression using THP-1 monocyte models. Co-culture experiments with stromal myofibroblasts and a conditioned medium from macrophage subtype cultures were conducted to study the role of these cells in intestinal fibrosis. Human-induced pluripotent stem cell-derived small intestinal organoids were used to confirm inflammatory and fibrotic responses in the human small intestinal epithelium. Results: Each macrophage subtype exhibited distinct phenotypes and KLF4 expression. Knockdown of KLF4 induced inflammatory cytokine expression in M0, M2a, and M2c cells. M2b exerted anti-fibrotic effects via interleukin (IL)-10. M0 and M2b cells showed a high migratory capacity toward activated stromal myofibroblasts. M0 cells interacting with activated stromal myofibroblasts transformed into inflammatory macrophages, thereby increasing pro-inflammatory cytokine expression. The expression of IL-36α, linked to fibrosis, was upregulated. Conclusions: This study elucidated the role of KLF4 in macrophage polarization and the intricate interactions between macrophages, stromal myofibroblasts, and cytokines in experimental in vitro models of intestinal fibrosis. The obtained results may suggest the mechanism of fibrosis formation in clinical IBD.
(This article belongs to the Section Gastroenterology & Hepatology)

11 pages, 1332 KB  
Article
C-Reactive Protein-to-Albumin Ratio to Predict Tolerability of S-1 as an Adjuvant Chemotherapy in Pancreatic Cancer
by Naotake Funamizu, Akimasa Sakamoto, Takahiro Hikida, Chihiro Ito, Mikiya Shine, Yusuke Nishi, Mio Uraoka, Tomoyuki Nagaoka, Masahiko Honjo, Kei Tamura, Katsunori Sakamoto, Kohei Ogawa and Yasutsugu Takada
Cancers 2024, 16(5), 922; https://doi.org/10.3390/cancers16050922 - 25 Feb 2024
Cited by 7 | Viewed by 1925
Abstract
Adjuvant chemotherapy (AC) with S-1 after radical surgery for resectable pancreatic cancer (PC) has shown a significant survival advantage over surgery alone. Consequently, ensuring that patients receive a consistent, uninterrupted S-1 regimen is of paramount importance. This study aimed to investigate whether the C-reactive protein-to-albumin ratio (CAR) could predict S-1 AC completion in PC patients without dropout due to adverse events (AEs). We retrospectively enrolled 95 patients who underwent radical pancreatectomy and S-1 AC for PC between January 2010 and December 2022. A statistical analysis was conducted to explore the correlation of predictive markers with S-1 completion, defined as continuous oral administration for 6 months. Among the 95 enrolled patients, 66 (69.5%) completed S-1, and 29 (30.5%) failed. Receiver operating characteristic curve analysis revealed 0.05 as the optimal CAR threshold to predict S-1 completion. Univariate and multivariate analyses further validated that a CAR ≥ 0.05 was independently correlated with S-1 completion (p < 0.001 and p = 0.006, respectively). Furthermore, a significant association was established between a higher CAR at initiation of oral administration and acceptable recurrence-free and overall survival (p = 0.003 and p < 0.001, respectively). CAR ≥ 0.05 serves as a predictive marker for difficulty in completing S-1 treatment as AC for PC due to AEs.
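Applying the CAR cutoff reported above is a one-line ratio check; in this sketch the units (CRP in mg/dL, albumin in g/dL, a common clinical convention) and the function names are assumptions, not taken from the study.

```python
def car(crp, albumin):
    # C-reactive protein-to-albumin ratio (CAR); unit convention is
    # an assumption here (CRP in mg/dL, albumin in g/dL).
    return crp / albumin

def flags_completion_difficulty(crp, albumin, cutoff=0.05):
    # 0.05 is the ROC-derived CAR cutoff reported in the abstract;
    # CAR >= 0.05 is described as marking difficulty completing S-1.
    return car(crp, albumin) >= cutoff
```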
