Search Results (24)

Search Parameters:
Keywords = virtual view synthesis

21 pages, 4397 KB  
Article
Splatting the Cat: Efficient Free-Viewpoint 3D Virtual Try-On via View-Decomposed LoRA and Gaussian Splatting
by Chong-Wei Wang, Hung-Kai Huang, Tzu-Yang Lin, Hsiao-Wei Hu and Chi-Hung Chuang
Electronics 2025, 14(19), 3884; https://doi.org/10.3390/electronics14193884 - 30 Sep 2025
Viewed by 1057
Abstract
As Virtual Try-On (VTON) technology matures, 2D VTON methods based on diffusion models can now rapidly generate diverse and high-quality try-on results. However, with rising user demands for realism and immersion, many applications are shifting towards 3D VTON, which offers superior geometric and spatial consistency. Existing 3D VTON approaches commonly face challenges such as barriers to practical deployment, substantial memory requirements, and cross-view inconsistencies. To address these issues, we propose an efficient 3D VTON framework with robust multi-view consistency, whose core design is to decouple the monolithic 3D editing task into a four-stage cascade as follows: (1) We first reconstruct an initial 3D scene using 3D Gaussian Splatting, integrating the SMPL-X model at this stage as a strong geometric prior. By computing a normal-map loss and a geometric consistency loss, we ensure the structural stability of the initial human model across different views. (2) We employ the lightweight CatVTON to generate 2D try-on images that provide visual guidance for the subsequent personalized fine-tuning tasks. (3) To accurately represent garment details from all angles, we partition the 2D dataset into three subsets—front, side, and back—and train a dedicated LoRA module for each subset on a pre-trained diffusion model. This strategy effectively mitigates the blurred details that can occur when a single model attempts to learn global features. (4) An iterative optimization process then uses the generated 2D VTON images and specialized LoRA modules to edit the 3DGS scene, achieving 360-degree free-viewpoint VTON results. All our experiments were conducted on a single consumer-grade GPU with 24 GB of memory, a significant reduction from the 32 GB or more typically required by previous studies under similar data and parameter settings. Our method balances quality and memory requirements, significantly lowering the adoption barrier for 3D VTON technology.
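The view partitioning in stage (3) can be illustrated with a small sketch. The 60/120-degree azimuth boundaries below are illustrative assumptions of mine, not values taken from the paper:

```python
def partition_by_azimuth(azimuth_deg):
    """Assign a camera azimuth (degrees, 0 = facing the subject's front)
    to one of three training subsets: 'front', 'side', or 'back'.
    The 60/120-degree boundaries are illustrative, not from the paper."""
    a = abs((azimuth_deg + 180.0) % 360.0 - 180.0)  # fold into [0, 180]
    if a <= 60.0:
        return "front"
    elif a <= 120.0:
        return "side"
    return "back"

# Bucket a sweep of camera poses into the three subsets, one LoRA each.
subsets = {"front": [], "side": [], "back": []}
for az in range(0, 360, 30):
    subsets[partition_by_azimuth(az)].append(az)
```

Each subset would then fine-tune its own LoRA module, so no single adapter has to model the garment from every angle at once.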
(This article belongs to the Special Issue 2D/3D Industrial Visual Inspection and Intelligent Image Processing)

25 pages, 1596 KB  
Review
A Survey of 3D Reconstruction: The Evolution from Multi-View Geometry to NeRF and 3DGS
by Shuai Liu, Mengmeng Yang, Tingyan Xing and Ran Yang
Sensors 2025, 25(18), 5748; https://doi.org/10.3390/s25185748 - 15 Sep 2025
Viewed by 5531
Abstract
Three-dimensional (3D) reconstruction technology is not only a core technology in computer vision and graphics, but also a key force driving the flourishing development of many cutting-edge applications such as virtual reality (VR), augmented reality (AR), autonomous driving, and digital earth. With the rise of novel view synthesis technologies such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), 3D reconstruction is facing unprecedented development opportunities. This article introduces the basic principles of traditional 3D reconstruction methods, including Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques, and analyzes the limitations of these methods in dealing with complex scenes and dynamic environments. Focusing on implicit 3D scene reconstruction techniques related to NeRF, this paper explores the advantages and challenges of using deep neural networks to learn and generate high-quality 3D scene renderings from limited viewpoints. Based on the principles and characteristics of 3DGS-related technologies that have emerged in recent years, the latest progress and innovations in rendering quality, rendering efficiency, sparse-view input support, and dynamic 3D reconstruction are analyzed. Finally, the main challenges and opportunities faced by current 3D reconstruction and novel view synthesis technology are discussed in depth, along with possible future technological breakthroughs and development directions. This article aims to provide a comprehensive perspective for researchers in 3D reconstruction technology in fields such as digital twins and smart cities, while opening up new ideas and paths for future technological innovation and widespread application.
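The SfM pipeline mentioned above rests on triangulation. A minimal two-view linear (DLT) triangulation can be sketched as follows; the camera matrices and the test point are toy values of my own choosing:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x1, x2 (pixel coords) under 3x4 camera matrices P1, P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity pose, and a 1-unit baseline along x.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
X_true = np.array([0.2, -0.1, 4.0])
X_hat = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noiseless correspondences the null space of `A` recovers the point exactly; real SfM adds robust matching and bundle adjustment on top of this step.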
(This article belongs to the Section Sensing and Imaging)

32 pages, 3256 KB  
Review
AI and Generative Models in 360-Degree Video Creation: Building the Future of Virtual Realities
by Nicolay Anderson Christian, Jason Turuwhenua and Mohammad Norouzifard
Appl. Sci. 2025, 15(17), 9292; https://doi.org/10.3390/app15179292 - 24 Aug 2025
Viewed by 3766
Abstract
The generation of 360° video is gaining prominence in immersive media, virtual reality (VR), gaming projects, and the emerging metaverse. Traditional methods for panoramic content creation often rely on specialized hardware and dense video capture, which limits scalability and accessibility. Recent advances in generative artificial intelligence, particularly diffusion models and neural radiance fields (NeRFs), are examined in this research for their potential to generate immersive panoramic video content from minimal input, such as a sparse set of narrow-field-of-view (NFoV) images. To investigate this, a structured literature review of over 70 recent papers in panoramic image and video generation was conducted. We analyze key contributions from models such as 360DVD, Imagine360, and PanoDiff, focusing on their approaches to motion continuity, spatial realism, and conditional control. Our analysis highlights that achieving seamless motion continuity remains the primary challenge, as most current models struggle with temporal consistency when generating long sequences. Based on these findings, a research direction has been proposed that aims to generate 360° video from as few as 8–10 static NFoV inputs, drawing on techniques from image stitching, scene completion, and view bridging. This review also underscores the potential for creating scalable, data-efficient, and near-real-time panoramic video synthesis, while emphasizing the critical need to address temporal consistency for practical deployment.
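As background on the representation these models target: a 360° frame is usually stored as an equirectangular image, and mapping a view direction to it takes only a few lines. The axis convention (z forward, y up) is an assumption for illustration:

```python
import math

def dir_to_equirect(d, width, height):
    """Map a unit view direction d = (x, y, z) (z forward, y up) to pixel
    coordinates on a width x height equirectangular panorama."""
    lon = math.atan2(d[0], d[2])                 # [-pi, pi], 0 = forward
    lat = math.asin(max(-1.0, min(1.0, d[1])))   # [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# The forward axis lands at the panorama centre.
u, v = dir_to_equirect((0.0, 0.0, 1.0), 2048, 1024)
```

Stitching NFoV inputs into a panorama amounts to warping each input through a mapping like this one.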

22 pages, 538 KB  
Article
Meaning in the Algorithmic Museum: Towards a Dialectical Modelling Nexus of Virtual Curation
by Huining Guan and Pengbo Chen
Heritage 2025, 8(7), 284; https://doi.org/10.3390/heritage8070284 - 17 Jul 2025
Cited by 2 | Viewed by 1557
Abstract
The rise of algorithm-driven virtual museums presents a philosophical challenge for how cultural meaning is constructed and critiqued in digital curation. Prevailing approaches highlight important but partial aspects: the loss of aura and authenticity in digital reproductions, efforts to maintain semiotic continuity with physical exhibits, optimistic narratives of technological democratisation, and critical technopessimist warnings about commodification and bias. Yet none provides a unified theoretical model of meaning-making under algorithmic curation. This paper proposes a dialectical-semiotic framework to synthesise and transcend these positions. The Dialectical Modelling Nexus (DMN) is a new conceptual structure that views meaning in virtual museums as emerging from the dynamic interplay of original and reproduced contexts, human and algorithmic sign systems, personal interpretation, and ideological framing. Through a critique of prior theories and a synthesis of their insights, the DMN offers a comprehensive model to diagnose how algorithms mediate museum content and to guide critical curatorial practice. The framework illuminates the dialectical tensions at the heart of algorithmic cultural mediation and suggests principles for preserving authentic, multi-layered meaning in the digital museum milieu.
(This article belongs to the Special Issue Digital Museology and Emerging Technologies in Cultural Heritage)

31 pages, 6915 KB  
Review
Trends and Techniques in 3D Reconstruction and Rendering: A Survey with Emphasis on Gaussian Splatting
by Wenhe Chen, Zikai Li, Jingru Guo, Caixia Zheng and Siyi Tian
Sensors 2025, 25(12), 3626; https://doi.org/10.3390/s25123626 - 9 Jun 2025
Viewed by 7146
Abstract
Three-Dimensional Gaussian Splatting (3DGS), an important advancement in the field of computer graphics and 3D vision, has emerged to greatly accelerate the rendering process in novel view synthesis. Due to its ability to directly realize real-time estimation of 3D shapes without neural networks, 3DGS has received a lot of attention in the fields of robotics, urban mapping, autonomous navigation, and virtual reality/augmented reality. In view of the growing popularity of 3DGS, we conduct a systematic review of the relevant literature. We begin by surveying existing work on 3D reconstruction and rendering, outlining the historical development and recent advances from both foundational and innovation-driven perspectives. Next, we summarize the most commonly used datasets and evaluation metrics in 3D reconstruction and rendering. Finally, we summarize the current challenges and suggest potential directions for future research. Through this survey, we aim to provide researchers with a comprehensive resource for understanding and using techniques related to 3D reconstruction and rendering, in order to promote technological development and deepening application in the field of 3D vision.
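Among the evaluation metrics such surveys cover, PSNR is the most common rendering-quality measure; a minimal sketch:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio between a reference and a rendered image,
    the most common rendering-quality metric in the NeRF/3DGS literature."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64, 3))                 # stand-in "ground truth" view
noisy = np.clip(clean + rng.normal(0, 0.01, clean.shape), 0, 1)
score = psnr(clean, noisy)                      # roughly 40 dB for sigma=0.01
```

Higher is better; the 1–2 dB gains reported by reconstruction papers are measured on exactly this scale.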
(This article belongs to the Section Sensing and Imaging)

19 pages, 13301 KB  
Article
Per-Pixel Manifold-Based Color Calibration Technique
by Stanisław Gardecki, Krzysztof Wegner, Tomasz Grajek and Krzysztof Klimaszewski
Appl. Sci. 2025, 15(6), 3128; https://doi.org/10.3390/app15063128 - 13 Mar 2025
Viewed by 779
Abstract
In this paper, we present a method for obtaining a manifold color correction transform for multiview images. The method can be applied in various scenarios: correcting the colors of stitched images, adjusting the colors of images obtained in different lighting conditions, and performing virtual view synthesis based on images taken by different cameras or in different conditions. The provided derivation allows us to use the method to correct regular RGB images. The solution is specified as a transform matrix that provides a separate color transformation for each pixel, and is therefore more general than the methods described in the literature, which only provide the transformed images without explicitly providing the transform. By providing the transform for each pixel separately, we can introduce a smoothness constraint based on the transformation similarity of neighboring pixels, a feature that is not present in the available literature.
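A simplified, global version of such a color correction (a single 3x3 transform shared by all pixels, fitted by least squares) can be sketched as follows; the paper's method goes further, solving for a transform per pixel with a smoothness term between neighbors:

```python
import numpy as np

# Fit one 3x3 linear colour transform M minimising ||src @ M - tgt||^2 over
# corresponding source/target pixel pairs. The synthetic M_true below is an
# illustrative assumption, standing in for a real camera colour mismatch.
rng = np.random.default_rng(1)
M_true = np.array([[0.90, 0.05, 0.00],
                   [0.00, 1.10, 0.02],
                   [0.03, 0.00, 0.95]])
src = rng.random((500, 3))           # RGB samples from one camera
tgt = src @ M_true                   # same points as seen by the other camera

M_hat, *_ = np.linalg.lstsq(src, tgt, rcond=None)
corrected = src @ M_hat              # colour-calibrated source pixels
```

Replacing the single `M_hat` with one matrix per pixel, tied together by a neighbor-similarity penalty, gives the per-pixel manifold formulation the abstract describes.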
(This article belongs to the Special Issue Technical Advances in 3D Reconstruction)

15 pages, 2549 KB  
Article
SRNeRF: Super-Resolution Neural Radiance Fields for Autonomous Driving Scenario Reconstruction from Sparse Views
by Jun Wang, Xiaojun Zhu, Ziyu Chen, Peng Li, Chunmao Jiang, Hui Zhang, Chennian Yu and Biao Yu
World Electr. Veh. J. 2025, 16(2), 66; https://doi.org/10.3390/wevj16020066 - 23 Jan 2025
Cited by 2 | Viewed by 1991
Abstract
High-fidelity driving scenario reconstruction can generate a large number of realistic virtual simulation environment samples, which can support effective training and testing for autonomous vehicles. Neural radiance fields (NeRFs) have demonstrated their excellence in high-fidelity scenario reconstruction; however, they still rely on dense-view data and precise camera poses, which are difficult to obtain in autonomous vehicles. To address these issues, we propose a novel approach called SRNeRF, which eliminates pose-based operations and performs scenario reconstruction from sparse views. To extract more scene knowledge from limited views, we incorporate an image super-resolution module based on a fully convolutional neural network and introduce a new texture loss to capture scene details for higher-quality scene reconstruction. On both object-centric and scene-level datasets, SRNeRF performs comparably to previous methods with ground-truth poses and significantly outperforms methods with predicted poses, with a PSNR improvement of about 30%. Finally, we evaluate SRNeRF on our custom autonomous driving dataset; the results show that SRNeRF can still generate stable images and novel views from sparse views, demonstrating its scalability in autonomous driving scenario synthesis.
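The texture-loss idea can be illustrated with a gradient-domain L1 penalty, which rewards matching fine detail rather than just average intensity. This is a generic sketch, not necessarily the exact loss used in SRNeRF:

```python
import numpy as np

def gradient_l1(a, b):
    """L1 distance between horizontal and vertical image gradients; one
    simple way to penalise missing texture between two images."""
    dax, day = np.diff(a, axis=1), np.diff(a, axis=0)
    dbx, dby = np.diff(b, axis=1), np.diff(b, axis=0)
    return np.abs(dax - dbx).mean() + np.abs(day - dby).mean()

img = np.linspace(0, 1, 32 * 32).reshape(32, 32)   # toy ramp image
loss_same = gradient_l1(img, img)                  # identical texture -> 0
loss_flat = gradient_l1(img, np.full_like(img, img.mean()))  # texture lost
```

A plain pixel-wise loss can score a blurry render well; a gradient term like this penalizes the lost detail directly.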
(This article belongs to the Special Issue Recent Advances in Intelligent Vehicle)

13 pages, 737 KB  
Article
Neural Radiance Fields with Hash-Low-Rank Decomposition
by Jiaxin Wang, Weichen Dai, Kangcheng Ma and Wanzeng Kong
Appl. Sci. 2024, 14(23), 11277; https://doi.org/10.3390/app142311277 - 3 Dec 2024
Viewed by 2441
Abstract
In recent advancements in novel view synthesis and neural rendering, the neural radiance field (NeRF) has emerged as a powerful technique for synthesizing high-quality novel views of complex 3D scenes. However, the computational and storage demands of NeRF limit its applicability. In this paper, we present a novel approach to NeRF that combines low-rank decomposition and multi-hash encoding through a novel integration process to enhance efficiency and scalability. Our method reduces model complexity and accelerates training while maintaining high rendering quality. We demonstrate the effectiveness of our approach through extensive experiments on various datasets, showing significant improvements in performance and memory usage compared to traditional NeRF implementations. These results suggest that our approach can make NeRF more practical for real-world applications, such as virtual reality, gaming, and 3D reconstruction.
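The storage saving behind low-rank decomposition of a feature grid can be sketched with a rank-k truncated SVD of a 2D grid; the grid size and rank below are illustrative, not the paper's configuration:

```python
import numpy as np

# Approximate a dense 2D feature grid by a rank-k factorisation: the core
# storage trick behind low-rank radiance-field variants.
rng = np.random.default_rng(2)
k = 4
grid = rng.random((256, k)) @ rng.random((k, 256))  # an exactly rank-4 grid

U, s, Vt = np.linalg.svd(grid, full_matrices=False)
U_k = U[:, :k] * s[:k]               # keep only the top-k components
V_k = Vt[:k]
approx = U_k @ V_k                   # reconstruction from the factors

full_params = grid.size              # 256 * 256 = 65536 values
lowrank_params = U_k.size + V_k.size # 2 * 256 * 4 = 2048 values
```

When the grid really is (close to) low rank, the factors reproduce it almost exactly while storing over 30x fewer parameters; hash encoding compresses the remaining high-frequency residue.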

20 pages, 12071 KB  
Article
Large-Scale Indoor Visual–Geometric Multimodal Dataset and Benchmark for Novel View Synthesis
by Junming Cao, Xiting Zhao and Sören Schwertfeger
Sensors 2024, 24(17), 5798; https://doi.org/10.3390/s24175798 - 6 Sep 2024
Cited by 1 | Viewed by 3719
Abstract
The accurate reconstruction of indoor environments is crucial for applications in augmented reality, virtual reality, and robotics. However, existing indoor datasets are often limited in scale, lack ground truth point clouds, and provide insufficient viewpoints, which impedes the development of robust novel view synthesis (NVS) techniques. To address these limitations, we introduce a new large-scale indoor dataset that features diverse and challenging scenes, including basements and long corridors. This dataset offers panoramic image sequences for comprehensive coverage, high-resolution point clouds, meshes, and textures as ground truth, and a novel benchmark specifically designed to evaluate NVS algorithms in complex indoor environments. Our dataset and benchmark aim to advance indoor scene reconstruction and facilitate the creation of more effective NVS solutions for real-world applications.
(This article belongs to the Section Sensors and Robotics)

18 pages, 498 KB  
Systematic Review
Nature in the Office: A Systematic Review of Nature Elements and Their Effects on Worker Stress Response
by María Luisa Ríos-Rodríguez, Marina Testa Moreno and Pilar Moreno-Jiménez
Healthcare 2023, 11(21), 2838; https://doi.org/10.3390/healthcare11212838 - 27 Oct 2023
Cited by 15 | Viewed by 4659
Abstract
Work-related stress is a significant problem in many work environments and can have negative consequences for both employees and organisations. This review aimed to identify which elements of biophilic design in the workplace affect workers' stress response. To enable this, a literature search was conducted using PsycINFO, Scopus, and Medline. The search was limited to articles published from 2012 to June 2023. This review only integrated quantitative data, incorporating twelve records for qualitative synthesis. The selected studies suggest that strategies such as access to outdoor environments or the creation of outdoor areas are effective in reducing stress in the workplace. If these are not feasible, the examined research advocates the use of virtual means to recreate such relaxation or break spaces. Furthermore, aspects of interest for future research were identified, such as multisensory stimulation, including the sense of smell, the exploration of views with natural elements, the creation of shelters, or the study of biomorphic forms.

18 pages, 14913 KB  
Article
Camera and LiDAR Fusion for Urban Scene Reconstruction and Novel View Synthesis via Voxel-Based Neural Radiance Fields
by Xuanzhu Chen, Zhenbo Song, Jun Zhou, Dong Xie and Jianfeng Lu
Remote Sens. 2023, 15(18), 4628; https://doi.org/10.3390/rs15184628 - 20 Sep 2023
Cited by 9 | Viewed by 7235
Abstract
3D reconstruction of urban scenes is an important research topic in remote sensing. Neural Radiance Fields (NeRFs) offer an efficient solution for both structure recovery and novel view synthesis. The realistic 3D urban models generated by NeRFs have potential future applications in simulation for autonomous driving, as well as in Augmented and Virtual Reality (AR/VR) experiences. Previous NeRF methods, however, struggle with large-scale urban environments: due to the limited model capacity of NeRF, applying it directly to such environments may result in noticeable artifacts in synthesized images and inferior visual fidelity. To address this challenge, we propose a sparse voxel-based NeRF. First, our approach leverages LiDAR odometry to refine frame-by-frame LiDAR point cloud alignment and derives an accurate initial camera pose through joint LiDAR-camera calibration. Second, we partition the space into sparse voxels, perform voxel interpolation based on 3D LiDAR point clouds, and then construct a voxel octree structure to disregard empty voxels during subsequent ray sampling in the NeRF, which increases rendering speed. Finally, the depth information provided by the 3D point cloud on each viewpoint image supervises our NeRF model, which is further optimized using a depth consistency loss function and a plane constraint loss function. In real-world urban scenes, our method significantly reduces the training time to around an hour and enhances reconstruction quality with a PSNR improvement of 1–2 dB, outperforming other state-of-the-art NeRF models.
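The empty-voxel skipping idea can be sketched without a full octree by hashing LiDAR points into a voxel set and filtering ray samples against it; the flat set below is a simple stand-in for the paper's octree, and all sizes are illustrative:

```python
import numpy as np

def occupied_voxels(points, voxel_size):
    """Hash LiDAR points into a set of occupied integer voxel indices."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def sample_ray(origin, direction, occupied, voxel_size, t_max=10.0, step=0.05):
    """March along a ray but keep only sample depths that fall inside
    occupied voxels, mimicking how the octree lets NeRF skip empty space."""
    ts = np.arange(0.0, t_max, step)
    pts = origin + ts[:, None] * direction
    keys = np.floor(pts / voxel_size).astype(int)
    return [t for t, k in zip(ts, keys) if tuple(k) in occupied]

cloud = np.array([[0.0, 0.0, 5.0], [0.1, 0.0, 5.2]])   # toy "LiDAR" points
occ = occupied_voxels(cloud, 0.5)
kept = sample_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), occ, 0.5)
```

Only depths inside the occupied voxel around z = 5 survive, so the NeRF MLP is queried on a small fraction of the ray instead of all of it.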

31 pages, 4529 KB  
Review
Roadmap to a Circular Economy by 2030: A Comparative Review of Circular Business Model Visions in Germany and Japan
by Laura Montag
Sustainability 2023, 15(6), 5374; https://doi.org/10.3390/su15065374 - 17 Mar 2023
Cited by 12 | Viewed by 9458
Abstract
Circular business models operate differently from traditional linear models: by developing products designed for disassembly, reuse, and recycling; by using materials and products for as long as possible; and by replacing physical products with virtual ones, they aim to reduce the environmental impact of their operations and facilitate the creation of a more sustainable future. In this article, the framework for circular business models is discussed from two perspectives: first, a systematic literature review is conducted to explore the academic point of view; second, a comparative policy review is conducted to analyze the past, present, and future visions of Germany and Japan in relation to their circular transition, particularly with regard to each country's vision of circular business models. A first outcome is a synthesis of current circular business model archetypes and the developed circular business model matrix, which adds value to the literature by providing information on circular goals, strategies, the actors involved, and the social and political implications of each circular business model typology. A second outcome is a comparative, in-depth analysis of the current policy frameworks and strategies for circular business models in Germany and Japan. This article outlines the main ways in which both countries are currently making the transition to a circular economy, providing an important knowledge base for further development.

22 pages, 2087 KB  
Review
Cholesterol Redistribution in Pancreatic β-Cells: A Flexible Path to Regulate Insulin Secretion
by Alessandra Galli, Anoop Arunagiri, Nevia Dule, Michela Castagna, Paola Marciani and Carla Perego
Biomolecules 2023, 13(2), 224; https://doi.org/10.3390/biom13020224 - 24 Jan 2023
Cited by 12 | Viewed by 5133
Abstract
Pancreatic β-cells, by secreting insulin, play a key role in the control of glucose homeostasis, and their dysfunction is the basis of diabetes development. The metabolic milieu created by high blood glucose and lipids is known to play a role in this process. In the last decades, cholesterol has attracted significant attention, not only because it critically controls β-cell function but also because it is the target of lipid-lowering therapies proposed for preventing the cardiovascular complications in diabetes. Despite the remarkable progress, understanding the molecular mechanisms responsible for cholesterol-mediated β-cell function remains an open and attractive area of investigation. Studies indicate that β-cells not only regulate the total cholesterol level but also its redistribution within organelles, a process mediated by vesicular and non-vesicular transport. The aim of this review is to summarize the most current view of how cholesterol homeostasis is maintained in pancreatic β-cells and to provide new insights on the mechanisms by which cholesterol is dynamically distributed among organelles to preserve their functionality. While cholesterol may affect virtually any activity of the β-cell, the intent of this review is to focus on early steps of insulin synthesis and secretion, an area still largely unexplored.
(This article belongs to the Special Issue Biosynthesis, Structure and Self-Assembly of Insulin)

16 pages, 3127 KB  
Article
Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception
by Haozhi Shi, Lanmei Wang and Guibao Wang
Sensors 2022, 22(18), 7081; https://doi.org/10.3390/s22187081 - 19 Sep 2022
Cited by 3 | Viewed by 2267
Abstract
The quality of synthesized images directly affects the practical application of virtual view synthesis technology, which typically uses a depth-image-based rendering (DIBR) algorithm to generate a new viewpoint based on texture and depth images. Current view synthesis quality metrics commonly evaluate the quality of DIBR-synthesized images, but the DIBR process is computationally expensive and time-consuming. In addition, existing view synthesis quality metrics cannot achieve robustness due to their shallow hand-crafted features. To avoid the complicated DIBR process and learn more efficient features, this paper presents a blind quality prediction model for view synthesis based on HEterogeneous DIstortion Perception, dubbed HEDIP, which predicts the image quality of view synthesis from texture and depth images. Specifically, the texture and depth images are first fused based on the discrete cosine transform to simulate the distortion of view synthesis images, and then spatial- and gradient-domain features are extracted in a Two-Channel Convolutional Neural Network (TCCNN). Finally, a fully connected layer maps the extracted features to a quality score. Notably, the ground-truth score of the source image cannot effectively represent the labels of each image patch during training due to the presence of local distortions in the view synthesis image, so we design a Heterogeneous Distortion Perception (HDP) module to provide effective training labels for each image patch. Experiments show that, with the help of the HDP module, the proposed model can effectively predict the quality of view synthesis.
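The DCT-domain fusion step can be sketched as follows; the band split and blending weights here are illustrative assumptions of mine, not the paper's exact scheme:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (satisfies C @ C.T = I)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def fuse_dct(texture, depth, cutoff=4):
    """Keep the texture's low-frequency DCT bands but blend depth into the
    high-frequency bands: a rough stand-in for DCT-based fusion that
    simulates view-synthesis distortion."""
    n = texture.shape[0]
    C = dct_matrix(n)
    T = C @ texture @ C.T
    D = C @ depth @ C.T
    k = np.arange(n)
    high = (k[:, None] + k[None, :]) >= cutoff    # high-frequency band mask
    F = np.where(high, 0.5 * (T + D), T)          # blend only high bands
    return C.T @ F @ C                            # inverse 2D DCT

rng = np.random.default_rng(3)
tex = rng.random((16, 16))
dep = rng.random((16, 16))
fused = fuse_dct(tex, dep)
```

The fused image, rather than an expensive DIBR render, then serves as the distorted input from which the two-channel CNN learns quality features.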
(This article belongs to the Section Intelligent Sensors)

28 pages, 2178 KB  
Article
Old Pedagogies for Wise Education: A Janussian Reflection on Universities
by Zane M. Diamond
Philosophies 2021, 6(3), 64; https://doi.org/10.3390/philosophies6030064 - 3 Aug 2021
Cited by 6 | Viewed by 4477
Abstract
This paper presents a synthesis of time-honoured pedagogical approaches to developing wisdom suitable for addressing the urgent problem-solving requirements of the modern university. During the last 30 years, I have employed a range of critical, interpretivist, qualitative research methods to examine archival and archaeological evidence and to conduct cross-cultural, often comparative and international, case studies of wisdom. My central concern has been to understand how teachers across diverse locations throughout history have learned to develop wisdom and how they have educated others to such understandings. As part of this work, I examined the modern university and its capacity to engage with local knowledge and wisdom. Over the course of the analysis, I find that one of the constraints of scaling up institutions for learning wisdom into the now global model of the university is that universities have forgotten how to develop wisdom in the race towards industrialisation, colonisation, and neo-liberalism within the scientific paradigm. One of the early sacrifices of such scaling up was the university's ability to preserve an intention to develop the wisdom of its students. The ideation of wisdom is therefore now a distant memory, and many societies and civilisations, and their institutions of higher learning, are in danger of forgetting the pedagogical pathway to develop it. The paper begins with an examination of the long history of pedagogies for the development of wisdom. I then briefly discuss the methodological aspects of this paper and explain my key terms (information, knowledge, and wisdom), followed by an examination of wisdom through the lens of the oral, written, and print modalities of teaching and learning. My synthesis of wisdom artefacts and stories about pedagogy suggests that while wisdom is individually sensed, understood, and lived phenomenologically, its meaning is latent, socially agreed, and constrained in terms of how and whether universities might cultivate its essential elements. Taking a Janussian backward- and forward-looking view, I propose a remembering and reconnecting approach to educating for wisdom through purposeful consideration of what we know about time-honoured pedagogies for teaching and learning wisdom, what its current constraints are, and what its future opportunities are in the university in the new postmodern, planetary, virtual education era.
(This article belongs to the Special Issue From the Acquisition of Knowledge to the Promotion of Wisdom)
