Review

Multi-Level Perception Systems in Fusion of Lifeforms: Classification, Challenges and Future Conceptions

1 The Key Laboratory for the Physics and Chemistry of Nanodevices, Institute of Physical Electronics, Department of Electronics, Peking University, Beijing 100871, China
2 School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
3 School of Integrated Circuits, Shandong University, Jinan 250010, China
* Authors to whom correspondence should be addressed.
Sensors 2026, 26(2), 576; https://doi.org/10.3390/s26020576
Submission received: 29 November 2025 / Revised: 7 January 2026 / Accepted: 9 January 2026 / Published: 15 January 2026
(This article belongs to the Special Issue Sensors in Fusion of Lifeforms)

Abstract

The emerging paradigm of “fusion of lifeforms” represents a transformative shift from conventional human–machine interfaces toward deeply integrated symbiotic systems, where biological and artificial components co-adapt structurally, energetically, informationally, and cognitively. This review systematically classifies multi-level perception systems within fusion of lifeforms into four functional categories: sensory and functional restoration, beyond-natural sensing, endogenous state sensing, and cognitive enhancement. We survey recent advances in neuroprosthetics, sensory augmentation, closed-loop physiological monitoring, and brain–computer interfaces, highlighting the transition from substitution to fusion. Despite significant progress, critical challenges remain, including multi-source heterogeneous integration, bandwidth and latency limitations, power and thermal constraints, biocompatibility, and system-level safety. We propose future directions such as layered in-body communication networks, sustainable energy strategies, advanced biointerfaces, and robust safety frameworks. Ethical considerations regarding self-identity, neural privacy, and legal responsibility are also discussed. This work aims to provide a comprehensive reference and roadmap for the development of next-generation fusion of lifeforms, ultimately steering human–machine integration from episodic functional repair toward sustained, multi-level symbiosis between biological and artificial systems.

1. Introduction

Human–machine integration has evolved from simple assistive tools to tightly coupled systems such as advanced prostheses [1], implantable sensors [2], and brain–computer interfaces (BCI) [3], achieving ever-higher spatial density, bandwidth, and long-term stability. Yet, many prevailing paradigms—often framed as human–machine interfaces (HMIs) or brain–machine interfaces (BMIs)—remain predominantly tool-like and task-oriented, with implicit short-term usage assumptions or only weak, localized coupling between living tissue and engineered components [4,5]. Consequently, biological tissues and artificial devices are frequently treated as discrete entities linked by an interface, and even advanced “substitution” systems may not fully capture the demands of chronic biointegration.
Here, we synthesize and formalize these converging developments under the term “fusion of lifeforms” as an organizing framework for a sustained, dynamic, multilevel symbiosis characterized by (i) long-term co-existence and structural/energetic compatibility, (ii) bidirectional information exchange with individualized fitting and plasticity-driven remapping, and, in stronger forms, (iii) closed-loop regulation based on endogenous feedback—together constituting cross-layer integration across structure–energy–information–cognition within a shared physiological environment. Within this framework, hybrid systems are not limited to restoring impaired functions; under appropriate conditions, they may also enable composite forms with sensing, metabolic, and cognitive capacities that extend beyond those of the original organism [6,7]. This viewpoint motivates a systematic discussion of how sustained biointegration and reciprocal signaling can reshape the distribution of sensing, regulation, and cognition between biological and artificial substrates.
Conventional human–machine interaction models face persistent limitations: restricted information throughput and closed-loop latency [8,9]; micromotion and packaging that degrade signal quality [10,11]; power and thermal constraints that limit duty cycle [12,13]; and validation protocols that focus on low-level signal metrics but do not adequately capture behaviorally relevant, real-world functional gains. These bottlenecks hinder scalability from single-organ assistance to hybrid organisms endowed with stable, high-bandwidth sensing and actuation. Therefore, a broader framework is needed—one that integrates sensing, actuation, energy management, packaging, and learning algorithms into cohesive symbiotic systems with standardized metrics, in order to ensure long-term performance and safety.
The title of this review reflects an effort to consolidate several mature but fragmented research lines into a unified systems-level view. By “multi-level perception systems,” we refer to end-to-end sensing–encoding–interfacing pipelines spanning neuroprosthetics, implantable physiological sensing, sensory augmentation, and brain–computer interfaces, where signals are acquired, transformed, and ultimately translated into functional outcomes. We discuss these systems under the lens of “Fusion of Lifeforms,” emphasizing sustained structural coupling, energy cooperation, bidirectional information loops, and cognitive co-adaptation between biological and artificial components. Building on this literature, we propose a functional classification to organize representative systems, summarize recurring engineering constraints, and outline future directions toward scalable, reliable, and benchmarkable symbiotic integration.
In this work, we adopt the four-axis framework of “structure–energy–information–cognition” that is schematically summarized in Figure 1, and we categorize fusion of lifeform sensing systems according to their primary functional intent into four classes. Along the four quadrants in Figure 1, Class I (sensory/functional restoration) reconstructs impaired sensory and motor pathways; Class II (endogenous state sensing) continuously monitors internal physiological and pathological states, with optional therapeutic actuation; Class III (beyond-natural sensing) maps non-native environmental cues onto exploitable neural channels; and Class IV (cognitive and learning enhancement) focuses on modulating and enhancing cognition and memory through diverse neural interface and neuromodulation strategies. Section 2 defines fusion of lifeforms and their system requirements in terms of structural coupling, energy cooperation, information closed loops, and cognitive co-adaptation. Section 3 reviews the principles and recent advances of each sensing category and delineates the technological continuum from “substitution” to “fusion”. Section 4 focuses on key bottlenecks (summarized in Figure 11)—including multi-source heterogeneous integration, intra-body communication bandwidth and latency, power supply and thermal management, biocompatibility and long-term reliability, and safety and dependability in complex systems—and proposes future directions such as layered heterogeneous networks, sustainable energy provisioning, innovative biointerfaces, and system-level security frameworks. Section 5 examines ethical and regulatory issues related to self-identity, neural privacy, social fairness, and legal liability, emphasizing the need for parallel progress in technological development and institutional frameworks.
This paper classifies multi-level sensing systems in fusion of lifeforms, reviews the state of the art, and analyzes outstanding challenges. It aims to provide a systematic reference and roadmap for future research. We anticipate that, driven by interdisciplinary advances in materials, energy, communication, and intelligent algorithms, fusion of lifeforms will gradually transition from “extracorporeal assistance” to “intracorporeal symbiosis” and from “functional repair” to “capability expansion”, ultimately steering humanity toward more inclusive and extensible forms of life.

2. Definition and System Characteristics of Fusion of Lifeforms

Fusion of lifeforms denotes a sustained, dynamic, and bidirectional symbiosis between organic organisms and artificial systems along four operational axes—structure, energy, information, and cognition. Structurally, fusion requires chronic physical coupling and biomechanical matching in implantable, adherent, or extracorporeal configurations. Energetically, it relies on in-body/on-body power supplies, energy harvesting, and metabolically compatible power management. Informationally, it should provide bidirectional sensing–stimulation–communication, preferably implemented in closed-loop architectures. Cognitively, it leverages neural plasticity to enable co-adaptation and functional co-evolution, encompassing learning, memory, and decision support. Beyond restoring impaired function, such symbioses may give rise to composite lifeforms with perceptual, metabolic, and cognitive capacities that transcend those of the original organism.
Functionally, we classify fusion of lifeform sensing systems into four classes according to their primary functional intent within the host organism. Class I (sensory and functional restoration) comprises neuroprosthetic systems that use electrical or optical encoding with individualized calibration to recover sensory and motor function. Class II (endogenous state sensing) includes in vivo monitoring platforms for otherwise imperceptible physiological states (e.g., glucose, neurotransmitters, pressure, or flow) with optional therapeutic actuation. Class III (beyond-natural sensing) refers to augmentative systems that map non-human modalities (e.g., infrared/ultraviolet, magnetic, echoic, or radiation signals) onto neural substrates through direct encoding or sensory substitution combined with adaptive training. Class IV (cognition and learning enhancement) encompasses brain–data and brain–AI interfaces that extend cognitive capacity via external information delivery and targeted neuromodulation.

Determining the Primary Functional Intent

Many real-world systems span multiple functional intents. To make the Class I–IV labeling reproducible, we assign the primary functional intent using the following priority order: (1) the stated use-case/target population (what the system is explicitly built for), (2) the primary endpoint emphasized by this study (the main success metric optimized and reported), (3) the dominant closed-loop variable (the main variable sensed and regulated in the core loop), and (4) the dominant design constraints (the strongest engineering bottleneck shaping the system). When multiple intents are present, the class label refers to the primary intent; additional intents are treated as secondary and noted when relevant (e.g., “Class I, secondary Class IV”).
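To make this rule concrete, the following minimal Python sketch applies the four criteria in the stated priority order; the record fields, keyword lists, and example are hypothetical illustrations rather than a validated coding scheme.

```python
# Illustrative sketch of the Class I-IV assignment rule described above.
# Field names, keywords, and the example record are hypothetical; only the
# priority order of the four criteria follows the text.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SystemRecord:
    stated_use_case: Optional[str] = None       # criterion 1: target population/use-case
    primary_endpoint: Optional[str] = None      # criterion 2: main reported success metric
    closed_loop_variable: Optional[str] = None  # criterion 3: dominant regulated variable
    design_constraint: Optional[str] = None     # criterion 4: strongest engineering bottleneck

# Hypothetical keyword-to-class mapping for illustration only.
CLASS_KEYWORDS = {
    "I":   ["restoration", "prosthesis", "hearing", "vision", "touch"],
    "II":  ["glucose", "pressure", "monitoring", "physiological state"],
    "III": ["infrared", "magnetic", "ultrasound", "augmentation"],
    "IV":  ["memory", "cognition", "learning", "neuromodulation"],
}

def assign_primary_class(record: SystemRecord) -> Optional[str]:
    """Apply criteria in priority order; the first decisive match wins."""
    for criterion in (record.stated_use_case, record.primary_endpoint,
                      record.closed_loop_variable, record.design_constraint):
        if criterion is None:
            continue
        for label, keywords in CLASS_KEYWORDS.items():
            if any(k in criterion.lower() for k in keywords):
                return label
    return None  # undetermined: defer to manual review

# Example: a sensory-feedback prosthesis -> primary intent Class I.
print(assign_primary_class(SystemRecord(
    stated_use_case="upper-limb prosthesis with tactile restoration")))  # -> "I"
```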

3. Functional Classification of Sensing Systems and Current Technologies

3.1. Sensory Restoration and Neural Fusion

The goal of restoring conventional senses is to re-establish the signal transmission pathway between external stimuli and the nervous system through deep integration of neuroprostheses with biological tissues, enabling individuals who have lost hearing, vision, touch, or smell to regain sensory function. Mechanistically, functional recovery typically depends on (i) delivering a sufficiently informative neural code that can be interpreted by downstream circuits [14,15], and (ii) leveraging neural plasticity and perceptual learning so that central pathways progressively remap and optimize the interpretation of the artificial input [16,17]. In this process, individualized fitting and, when available, closed-loop calibration act as a control layer that reduces mismatch between device encoding and the user’s neural state [18,19], stabilizes perception across contexts, and ultimately converts improved neural representations into measurable behavioral gains (e.g., speech intelligibility, navigation safety, object manipulation accuracy, and reduced cognitive load). These outcomes can be quantitatively assessed using standardized functional tests, such as the Speech Perception in Noise (SPIN) test for auditory restoration, Snellen chart for visual acuity in sight restoration, and object manipulation accuracy or sensory thresholds for tactile feedback systems. This approach allows for systematic comparison and evaluation across different systems and users (Table 1).
Unlike conventional prostheses, sensory restoration systems in fusion of lifeforms do more than substitute lost functions; they emphasize long-term in vivo coexistence and plasticity-driven functional reorganization, following a canonical pipeline of “sensing front end → feature extraction/encoding → neural stimulation or actuation → individualized calibration → behavioral validation.” Representative examples include cochlear implants [34], visual prostheses [35,36], electronic skin [37], electronic olfaction systems [38], and advanced prosthetic limbs [39,40]. Across these technologies, reproducible engineering frameworks have emerged for neural electrical/optical stimulation strategies, encoding and patient-specific fitting, wireless power and data links, and biocompatible encapsulation.

3.1.1. Auditory Restoration

Cochlear implants (CIs) are among the most clinically validated and widely adopted technologies in current fusion sensing systems [41]. These devices employ multichannel electrode arrays to convert acoustic signals in real time into temporally coded pulse trains that bypass damaged cochlear hair cells and directly stimulate auditory nerve fibers, thereby partially reconstructing auditory function (Figure 2A) [42]. It is estimated that more than one million individuals with severe to profound hearing loss worldwide have benefited from cochlear implantation, making it the most widely used artificial neural prosthesis to date [43,44].
In terms of encoding strategies, classical approaches such as continuous interleaved sampling (CIS) and the advanced combination encoder (ACE) rely on band-pass filtering, envelope extraction, and channel selection to preserve critical speech information under the physical constraints of a limited number of electrodes and current spread in the cochlea [46,47,48]. However, due to insufficient effective spectral resolution and inter-channel current spread, users still experience marked limitations in noisy environments, music perception, and pitch discrimination [49,50,51].
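As a schematic illustration of this processing chain, the sketch below implements a CIS-style band-pass/envelope/compression pipeline in Python; the band edges, filter orders, channel count, and compression law are illustrative assumptions, not clinical fitting parameters.

```python
# Minimal sketch of a CIS-style coding chain: band-pass filter bank ->
# envelope extraction -> compression -> per-channel pulse amplitudes.
# All numerical parameters are illustrative.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 16_000                                                     # audio rate (Hz)
BANDS = [(300, 600), (600, 1200), (1200, 2400), (2400, 4800)]   # 4 channels

def envelope(x, fs=FS, cutoff=200.0):
    """Full-wave rectification followed by low-pass filtering."""
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(x))

def cis_channel_amplitudes(audio):
    """Return one envelope per electrode channel, log-compressed to [0, 1]."""
    amps = []
    for lo, hi in BANDS:
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        env = envelope(filtfilt(b, a, audio))
        amps.append(np.log1p(env) / np.log1p(env.max() + 1e-12))
    # In a real processor, these amplitudes modulate temporally interleaved
    # biphasic pulse trains, one electrode at a time.
    return np.stack(amps)

audio = np.random.randn(FS)                  # 1 s of stand-in audio
print(cis_channel_amplitudes(audio).shape)   # (4, 16000)
```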
To mitigate the well-known speech-in-noise deficit, CI research has developed dedicated front-end noise-management pipelines that complement the electrode-limited sound coding stage. Beyond classical single-microphone suppression, modern processors increasingly leverage multi-microphone directionality and beamforming to improve the effective SNR at the input of the sound coder, yielding measurable gains in challenging listening conditions [52]. In addition, spatially selective modes such as ForwardFocus further enhance target speech perception in spatially separated multi-talker interference, reporting substantial improvements in speech reception thresholds in representative test paradigms [53]. More recently, deep-learning-based speech enhancement has demonstrated improved intelligibility for CI users in noise [54], and the trend is moving toward end-to-end denoising sound coding strategies that map acoustic inputs directly to denoised electrodograms [55], potentially reducing mismatch introduced by hand-crafted front-end processing and fixed vocoders.
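The directional front end can be illustrated with a minimal two-microphone delay-and-sum beamformer; the microphone geometry and steering angle below are assumed values for demonstration only.

```python
# Sketch of a two-microphone delay-and-sum beamformer of the kind used to
# raise the input SNR ahead of the sound coder. Geometry is an assumption;
# at 16 kHz and 1 cm spacing the delay is sub-sample, so real devices use
# fractional-delay filters rather than this integer approximation.

import numpy as np

FS = 16_000          # sampling rate (Hz)
MIC_SPACING = 0.01   # 1 cm between microphones (m)
C = 343.0            # speed of sound (m/s)

def delay_and_sum(front, rear, steer_deg=0.0, fs=FS):
    """Align the rear microphone toward steer_deg and average the pair."""
    delay_s = MIC_SPACING * np.cos(np.deg2rad(steer_deg)) / C
    shift = int(round(delay_s * fs))     # integer-sample approximation
    aligned = np.roll(rear, -shift)
    return 0.5 * (front + aligned)

front = np.random.randn(FS)
rear = np.random.randn(FS)
print(delay_and_sum(front, rear).shape)  # (16000,)
```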
In recent years, research on cochlear implants has gradually shifted from merely improving basic speech intelligibility toward more intelligent and individualized paradigms. The introduction of artificial intelligence has driven substantial performance gains: deep-learning-based noise suppression and acoustic scene classification have significantly enhanced speech clarity in complex acoustic environments [21,54,56]; machine-learning-driven automated fitting, individualized coding strategies, and outcome prediction models are enabling precise adaptation to patient-specific auditory perceptual characteristics and long-term optimization during follow-up [57,58,59,60,61]. In parallel, advances in intraoperative robot-assisted electrode insertion and hearing-preservation surgical strategies have further improved implantation accuracy and protection of residual hearing [62,63,64].
Collectively, these technological developments are driving cochlear implants from early “sensory substitution” devices toward adaptive, interactive “perceptual fusion” platforms, reflecting a clear trend toward deep structural and functional integration with the nervous system.

3.1.2. Visual Restoration

In the domain of artificial vision, similar fusion-oriented concepts are rapidly advancing. A variety of visual prosthesis systems—including epiretinal, subretinal, and suprachoroidal retinal implants, as well as visual cortical prostheses—aim to encode images captured by cameras or optical sensors into spatiotemporal patterns of electrical or optical/photovoltaic stimulation [23,65,66,67,68]. Through current steering and optimization of dynamic stimulation patterns, these systems seek to improve the spatial resolution and temporal continuity of phosphene perception (Figure 2B,C). Representative retinal prosthesis systems, such as Argus II, Alpha IMS, and Prima, have enabled hundreds of blind patients to regain limited yet perceptible visual function [22,23,45,65,66].
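The shared image-to-stimulation stage of such systems can be sketched as follows; the electrode grid size and amplitude range are hypothetical placeholders rather than parameters of any specific device.

```python
# Minimal sketch of the image-to-stimulation mapping shared by retinal
# prostheses: downsample a camera frame onto the electrode grid and map
# local brightness to per-electrode pulse amplitude. Grid size and the
# amplitude range are illustrative assumptions.

import numpy as np

GRID = (6, 10)                # electrode rows x columns (hypothetical)
AMP_RANGE_UA = (0.0, 100.0)   # stimulation amplitude range in microamps

def frame_to_amplitudes(frame):
    """Average pixel blocks onto the electrode grid, then scale to current."""
    h, w = frame.shape
    gh, gw = GRID
    blocks = frame[: h - h % gh, : w - w % gw]
    blocks = blocks.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    lo, hi = AMP_RANGE_UA
    return lo + (hi - lo) * blocks / 255.0   # brightness -> amplitude (uA)

frame = np.random.randint(0, 256, (120, 160)).astype(float)  # grayscale frame
print(frame_to_amplitudes(frame).shape)                      # (6, 10)
```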
In recent years, focused ultrasound stimulation has emerged as a non-invasive alternative with high spatial resolution and deep tissue penetration. For example, Lu et al. developed a two-dimensional focused ultrasound array that combines three-dimensional imaging guidance and auto-alignment technology to generate programmable ultrasonic fields, thereby achieving dynamic waveform projection at the retinal level [69]. In parallel, other researchers have proposed the use of miniature micro-LED arrays to project virtual images inside the eye for the treatment of anterior-segment blindness—a technology that can be regarded as a “miniature VR display system” embedded within the eyeball [9,70,71,72].
Beyond implantable visual prostheses, non-invasive navigation and visual substitution systems provide an important complement to visual function reconstruction. Early electronic travel aids (e.g., sonar-based canes and the NavBelt) sensed obstacles via ultrasound and delivered simple auditory cues for avoidance [73,74], but their limited feedback richness made it difficult to cope with complex urban environments. More recently, wearable and AI-enabled systems have begun to emerge. These include head-mounted virtual-vision navigation devices with integrated tactile–speech feedback and multimodal navigation–virtual companion systems [75,76]. These platforms parse camera-acquired environmental images and encode them into multipoint vibrotactile and speech feedback to guide blind users in performing real-world tasks such as independent walking and obstacle avoidance. Although the visual resolution and restorative effect of current systems remain limited, ongoing advances in optical interfaces, flexible encapsulation, biocompatibility, and AI-based image understanding and stimulation-encoding algorithms are collectively laying the groundwork for future artificial–vision fusion with higher dimensionality, stronger personalization, and greater adaptive capacity.

3.1.3. Tactile Restoration

Tactile restoration aims to reintroduce touch-related cues (e.g., pressure, vibration, or shear) into the user’s somatosensory system to support object manipulation and embodiment. In practice, tactile restoration technologies can be broadly divided into non-invasive, skin-surface interfaces and invasive, subdermal interfaces. Surface approaches reconstruct a sense of touch on the skin via mechanical indentation [77], vibrotactile stimulation, or transcutaneous electrical nerve stimulation (TENS) [78]. Mechanical indentation applies localized pressure or shear to the skin through external actuators, whereas vibrotactile stimulation uses wearable vibration motors or linear resonant actuators operating at different frequencies. TENS delivers current through surface electrodes to subcutaneous nerves to evoke action potentials; however, slight deviations in stimulation intensity can readily induce pain, limiting user acceptance.
In recent years, wearable systems that integrate electromyography (EMG) decoding and mechanical indentation feedback into lightweight fabric-based prosthetic sockets have demonstrated clear advantages in providing proportional tactile feedback during prosthetic grasping [79]. Moreover, combining mechanical indentation for high spatial resolution with vibrotactile stimulation for force or intensity mapping yields superior perceived feedback quality compared with either modality alone [80]. From a receptive-field perspective, mechanoreceptors that encode skin deformation (e.g., Merkel cells and Meissner corpuscles) exhibit more localized receptive fields than vibration-sensitive Pacinian corpuscles, making mechanical indentation stimuli superior to purely vibrotactile cues in terms of spatial resolution [81].
Subdermal approaches act directly on peripheral nerves through extraneural or intrafascicular electrodes (Figure 3 illustrates representative electrode designs), with nerve-cuff electrodes that wrap around the nerve (Figure 3A) being the most widely used [82]. These extraneural interfaces do not penetrate nerve fascicles, thereby reducing mechanical trauma and achieving biological stability over periods exceeding ten years in some cases. Nonetheless, mechanical mismatch between the electrode and neural tissue can still induce fibrotic encapsulation [83], motivating the development of all-polymer cuffs whose mechanical properties more closely match those of peripheral nerves to attenuate chronic inflammatory responses [84]. In addition, self-healing and highly stretchable conductive materials have improved the long-term robustness and reliability of prosthetic systems [85].
When combined with AI-based signal processing algorithms [86], such tactile sensor and actuator arrays can dynamically adapt to individual users’ perception thresholds and feedback preferences, progressively approximating the sensation of natural skin. It should be emphasized that tactile restoration is fundamentally a key component of closed-loop neuroprosthetic sensing systems; its system-level coordination with motor decoding and feedback encoding will be further elaborated on in Section 3.1.5.
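A minimal sketch of such threshold-calibrated feedback mapping is given below; the detection threshold, saturation force, and compression exponent stand in for individually fitted values.

```python
# Sketch of a threshold-calibrated mapping from measured grasp force to a
# normalized vibrotactile drive amplitude, illustrating per-user adaptation.
# All parameter values are hypothetical placeholders for fitted parameters.

import numpy as np

def force_to_vibration(force_n, detect_thresh_n=0.2, saturate_n=10.0,
                       gamma=0.6):
    """Map force (N) to a normalized actuator amplitude in [0, 1]."""
    f = np.clip((force_n - detect_thresh_n) / (saturate_n - detect_thresh_n),
                0.0, 1.0)
    return f ** gamma   # compressive law approximating perceived intensity

for f in (0.1, 1.0, 5.0, 12.0):
    print(f, round(float(force_to_vibration(f)), 3))
```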
Figure 3. Schematics show different interface designs with the peripheral nerve: (A) Nerve cuff, encircling the nerve; (B) Flat Interface Nerve Electrode (FINE), which gently reshapes the nerve; (C) Longitudinal Intrafascicular Electrode (LIFE), inserted longitudinally within a fascicle; (D) Transverse Intrafascicular Multichannel Electrode (TIME), which penetrates the nerve transversely; and (E) Utah Slanted Electrode Array (USEA), providing a 3D interface with varying electrode lengths. Adapted from [87].

3.1.4. Olfactory and Speech Restoration

Technically, olfactory restoration typically maps chemical stimuli to neural percepts via an e-nose-based sensing and encoding front end followed by stimulation of central olfactory pathways, whereas speech restoration maps neural activity to communicative outputs (or, conversely, encodes speech features for neural delivery) through decoding/encoding algorithms coupled to appropriate neural interfaces.
Olfactory restoration is commonly built upon electronic nose (e-nose) technologies as the sensing front end. By using multichannel gas sensor arrays, feature extraction, and pattern recognition algorithms to emulate the selective responses of the olfactory epithelium, e-noses encode complex volatile organic compound profiles into “odor fingerprints.” Driven by advances in sensing materials and machine learning, such systems have recently achieved high-accuracy recognition of diverse odor classes, including disease-related breath signatures, thereby laying the groundwork for olfactory substitution and olfactory neuroprostheses [88]. On this basis, researchers have proposed conceptual “olfactory implant” architectures: an e-nose performs odor detection and feature encoding, and a miniaturized electrode array implanted in the olfactory bulb or associated olfactory pathways is driven via a wireless link to deliver electrical stimulation to central olfactory structures [27]. In principle, this approach could bypass damaged olfactory epithelium to restore olfactory function. Early human studies have shown that transnasal or olfactory-bulb stimulation can indeed evoke consciously perceived olfactory sensations, supporting the feasibility of olfactory neuroprostheses; however, challenges in spatial selectivity, long-term safety, and control of subjective odor quality remain at a very early exploratory stage [38,88,89].
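The e-nose sensing-and-recognition stage can be illustrated with the following sketch, which trains a generic classifier on synthetic multichannel responses; both the data and the random-forest choice are illustrative, not drawn from the cited systems.

```python
# Sketch of an e-nose "odor fingerprint" pipeline: multichannel sensor
# responses -> simple features -> pattern recognition. Synthetic stand-in
# data only; real features would include rise times, slopes, and recovery.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_SENSORS, N_SAMPLES = 8, 200

# Each synthetic odor class shifts the steady-state response of the array.
labels = rng.integers(0, 3, N_SAMPLES)                  # 3 odor classes
X = rng.normal(0, 1, (N_SAMPLES, N_SENSORS)) + labels[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```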
By contrast, speech restoration neuroprostheses have achieved substantial clinical progress under the broader framework of brain–computer interfaces. By implanting high-density electrocorticographic grids or microelectrode arrays over or within motor and speech-related cortical areas, and coupling these recordings with deep neural networks that decode cortical activity during attempted speech, such systems can generate text or synthesize speech in real time. Recent studies in individuals with paralysis or amyotrophic lateral sclerosis (ALS) have reported decoding rates of approximately 60 words per minute with large-vocabulary sentence-level spelling accuracy, approaching or even surpassing the communication efficiency of conventional assistive communication devices [24,90,91]. Recent reviews suggest that, as high-spatiotemporal-resolution neural interfaces converge with advanced speech and language models, speech neuroprostheses may, in the medium term, support more natural continuous speech and richer emotional expression [92,93].
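The decoding stage of such speech neuroprostheses can be sketched as a recurrent network mapping windows of neural features to per-timestep phoneme logits; the channel count, target set, and architecture below are illustrative assumptions, and real systems add language modeling and streaming synthesis downstream.

```python
# Skeleton of a neural speech decoder: a recurrent network mapping cortical
# feature windows to phoneme probabilities. Dimensions are illustrative.

import torch
import torch.nn as nn

N_CHANNELS, N_PHONEMES = 128, 41   # hypothetical ECoG channels / targets

class SpeechDecoder(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, num_layers=2, batch_first=True)
        self.readout = nn.Linear(hidden, N_PHONEMES)

    def forward(self, x):          # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.readout(h)     # per-timestep phoneme logits

decoder = SpeechDecoder()
features = torch.randn(1, 200, N_CHANNELS)   # 200 time bins of neural features
print(decoder(features).shape)               # torch.Size([1, 200, 41])
```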
Overall, olfactory and speech neuroprostheses are evolving at different speeds, but both now follow a similar technical route that combines dedicated sensors, signal decoding, and targeted neural stimulation. Together, these developments are turning previously speculative forms of sensory and communicative restoration into experimentally and clinically testable interventions.

3.1.5. Neuroprostheses and Perception–Action Closed Loops

Technically, modern prostheses are evolving from purely mechanical devices to neurally informed systems, where decoded intent (from EMG or brain signals) drives adaptive joint/hand control, and sensorized feedback is encoded back to the user to close the human–machine loop. Modern upper-limb prostheses use EMG signals in combination with machine learning algorithms to decode user intent with high precision, transforming movement control from externally triggered commands into neurally driven, naturalistic responses [94]. Building on this, high-density EMG arrays combined with deep learning models such as convolutional and recurrent neural networks can exploit spatiotemporal muscle synergies to achieve continuous, proportional control of multiple degrees of freedom [95,96,97]. These models can also adapt in real time to user-specific contraction patterns, maintaining stable performance despite variability. Explainable AI methods help identify key EMG channels and feature dimensions, facilitating electrode layout optimization and the design of individualized training protocols, and improving system safety and debuggability [95,96]. On the output side, the integration of flexible tactile sensors, stretchable conductive hydrogels, and self-healing composite materials endows prosthetic hands with multimodal sensing of grasp force, slip, shear, and temperature [98,99,100]. Via transcutaneous or osseointegrated interfaces in combination with peripheral nerve electrodes, these signals can be encoded into position-specific “quasi-proprioceptive” and tactile feedback, significantly improving object recognition accuracy and the sense of limb embodiment [26,101]. As shown in Figure 4A,B, a novel electronic skin was developed, enabling an amputee to perceive a continuous spectrum of tactile and nociceptive sensations through the prosthesis, thereby allowing for the discrimination of object curvature and even sharpness [25]. Recently proposed high-channel-count implantable systems that combine intramuscular EMG recording with nerve stimulation further demonstrate the feasibility of implementing bidirectional neuro–electromechanical interfaces on a single implant platform, laying the groundwork for long-term in vivo symbiotic prostheses [102].
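The classical EMG-to-proportional-control stage underlying these systems can be illustrated as follows; the window length and gain are placeholder values.

```python
# Sketch of proportional myoelectric control: rectify, smooth to an
# activation envelope, and map an antagonist channel pair to a signed
# velocity command for one degree of freedom. Parameters are illustrative.

import numpy as np

FS = 1000       # EMG sampling rate (Hz)
WINDOW = 100    # 100 ms smoothing window (samples)

def activation(emg):
    """Moving-average envelope of the rectified EMG."""
    kernel = np.ones(WINDOW) / WINDOW
    return np.convolve(np.abs(emg), kernel, mode="same")

def dof_velocity(flexor, extensor, gain=1.0):
    """Signed, proportional velocity command for one joint."""
    return gain * (activation(flexor) - activation(extensor))

flexor = np.abs(np.random.randn(FS))    # 1 s of stand-in EMG per muscle
extensor = np.abs(np.random.randn(FS))
print(dof_velocity(flexor, extensor)[:5])
```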
In lower-limb prosthetics, technological evolution likewise reflects a transition from “mechanical devices” toward “quasi-biological” systems. Motion control architectures that fuse EMG signals with inertial measurement units (IMUs), together with gait-phase recognition and intent-prediction algorithms, enable coordinated control of knee and ankle joints during initiation, acceleration and deceleration, and level changes such as slopes or stairs [104,105]. Microprocessor-controlled and even powered intelligent knee–ankle prostheses employ phase-dependent variable-impedance control or data-driven control policies to continuously adjust damping and output torque across different walking speeds, inclines, and uneven terrains, thereby improving terrain adaptability and gait symmetry [106,107,108,109]. More recently, deep neural networks and reinforcement learning methods have been used to learn mappings between environment conditions, gait states, and joint torques from large-scale gait datasets, allowing prostheses to maintain stable forward locomotion and obstacle negotiation even in previously unseen scenarios [109,110]. At the neural level, EEG, functional near-infrared spectroscopy (fNIRS), and hybrid BCIs are being explored for gait modulation in lower-limb prostheses and exoskeletons. By decoding movement intention or locomotor state, these systems enable “brain-controlled walking” [105,111,112]. They also offer a potential pathway for transitioning from muscle-driven to directly neural-driven control. In parallel, vibrotactile and electrical feedback systems are being developed for lower-limb prostheses to restore perception of ground contact, impact, and slope (as shown in Figure 4C). These feedback channels are designed to preserve gait stability [103] and to shorten perceptual and decision-making delays in the human–joint closed loop [113]. Across these developments, artificial intelligence is increasingly embedded in intent recognition, environment perception, control policy optimization, and feedback encoding. As a result, both upper- and lower-limb prostheses are evolving from single-function actuators into intelligent symbiotic subsystems that can co-adapt with the neuromuscular system.
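The phase-dependent variable-impedance principle can be sketched as a spring-damper law whose parameters switch with gait phase; all numerical values below are illustrative placeholders, not tuned controller gains.

```python
# Sketch of phase-dependent variable-impedance knee control: each gait
# phase selects a stiffness/damping pair, and joint torque follows a
# spring-damper law around a phase-specific equilibrium angle.

GAIT_PARAMS = {
    # phase: (stiffness k [N*m/deg], damping b [N*m*s/deg], eq. angle [deg])
    "stance_flexion":   (3.0, 0.05, 10.0),
    "stance_extension": (4.0, 0.03, 5.0),
    "swing_flexion":    (0.3, 0.02, 55.0),
    "swing_extension":  (0.5, 0.03, 5.0),
}

def knee_torque(phase, angle_deg, velocity_dps):
    """Spring-damper torque (N*m) for the current gait phase."""
    k, b, eq = GAIT_PARAMS[phase]
    return k * (eq - angle_deg) - b * velocity_dps

print(knee_torque("stance_flexion", angle_deg=15.0, velocity_dps=-20.0))
```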
Future prosthetic development is moving toward deeper integration with the nervous system. Neural interfaces—such as peripheral nerve electrodes and highly sensitive myoelectric sensors—are increasingly used to provide sensory feedback, allowing users to perceive contact forces and temperature at the prosthetic–environment interface, thereby enhancing embodiment and control precision [114,115,116]. At the same time, the combination of biomimetic materials and 3D printing has markedly improved the flexibility and individual adaptability of prosthetic structures [117], offering a new design paradigm for bio–mechanical integration. Such cross-level integration of sensing, control, and neural interfacing supports the emergence of a more continuous human–machine–neural continuum, effectively extending the functional boundaries of the body.

3.1.6. Neural Coupling and Cognitive Interaction Interfaces

At the level of neural coupling and cognitive interaction, so-called telepathy-type brain–computer interfaces can be regarded as early explorations that extend traditional motor and communication BCIs toward higher-level cognitive interaction. A representative example is Neuralink’s ongoing PRIME early feasibility study, in which a neurosurgical robot implants a multichannel cortical electrode array and a fully implanted wireless module. This configuration allows individuals with high-level paralysis to control a cursor, virtual keyboard, or simple games using attempted movements or imagined actions, thereby enabling continuous interaction with computers and external devices.
Public “Telepathy” demonstrations released by the company show that several paralyzed participants can use this interface in daily life for typing, web browsing, and gaming. However, most available information comes from clinical trial registrations and company blog posts rather than from peer-reviewed reports that systematically characterize efficacy and safety [118,119,120]. In popular discourse, the term “Telepathy” is often interpreted as “direct exchange of thoughts”. In contrast, current scientific progress is more accurately described as high-bandwidth decoding of intentions and command mapping: in practice, these systems infer actions such as “select a character”, “move the cursor”, or “execute a click” from cortical activity, rather than reading out complex semantic content or abstract thoughts.
In parallel, non-invasive brain-to-brain interfaces (B2BIs) have demonstrated the feasibility of “minimal information sharing” in healthy participants. For example, BrainNet uses EEG to acquire a sender’s binary decisions and applies transcranial magnetic stimulation to deliver “yes/no” information to another participant, enabling three individuals to cooperate on a simple task [121,122,123]. These channels, however, are extremely low in bandwidth and carry very limited semantics, and they remain far from the science fiction notion of “shared consciousness”.
A more mature foundation comes from invasive communication BCIs developed for patients with amyotrophic lateral sclerosis (ALS) or complete locked-in state (CLIS), as illustrated in Figure 5. In these systems, cortical or fully implanted interfaces decode intentional selections to support spelling, sentence generation, or even continuous speech control. Such neuroprostheses provide a relatively stable channel for “intent-based communication” in individuals with severe motor impairment [124,125,126].
In summary, current neural-coupling and cognitive–interaction interfaces are best viewed as systems that decode user intentions and map them onto external commands, rather than as tools for “mind reading”. Within strict constraints on safety, power, and bandwidth, they combine high-resolution neural recording, AI-based intent recognition, and, in some cases, sensory feedback to extend the loop of attention–intention–action–feedback into digital devices or simple multi-user settings. Future extensions toward multi-brain collaboration or cognitive enhancement will depend not only on advances in large-scale neural decoding and closed-loop stimulation, but also on clear norms regarding neural privacy, agency, and emerging “neurorights”. It should be emphasized that our discussion of these systems is primarily conceptual and forward-looking, and does not constitute an endorsement of their clinical effectiveness. In particular, highly publicized demonstrations should be interpreted with caution by media and non-expert audiences, and not taken as evidence of mature or widely applicable clinical therapies.

3.2. Endogenous Sensing and Physiological Closed-Loop Control

The primary objective of in-body sensors is to endow individuals with the ability to perceive internal vital information that is otherwise inaccessible to conscious awareness. Many such sensors harvest the body’s intrinsic biomechanical activities (e.g., heartbeat, respiration, gastrointestinal peristalsis) to generate potential differences in triboelectric/contact electrification interfaces, thereby providing power for sensing and wireless transmission circuits. By deploying miniaturized implantable sensing units within the body, they enable self-powered, real-time monitoring and feedback of deep physiological and pathological signals such as blood glucose, blood pressure, and electroencephalography (EEG) [127]. The functional gains of these systems can be evaluated through metrics such as the accuracy and sensitivity of physiological signal detection (e.g., ±5% accuracy for blood glucose levels, ±1 mmHg for blood pressure), the real-time responsiveness of the system (response time < 1 s), and the stability and longevity of the sensor under continuous operation (e.g., at least 1-year battery life or self-powered efficiency) (Table 1). These metrics are critical for comparing the performance of different sensing technologies in real-world clinical or daily life conditions.
Currently, in-body sensor systems are evolving from exogenous perception modalities—such as short-term monitoring and adjunct imaging—toward endogenous sensing and feedback regulation platforms that can operate stably over the long term and cooperate with the body’s physiological networks, thus realizing a paradigm shift from “external observation of life” to “intrinsic self-perception of the living system.” Ongoing research is increasingly focused on dynamic physiological monitoring and regulation [28,31,32], metabolic process recognition [128], decoding of neural electrical activity [129], and multimodal fusion, thereby laying the foundation for self-sensing medicine, human–machine symbiosis, and intelligent living systems.

3.2.1. Vital Sign Sensing and Real-Time Regulation

Sensors for dynamic physiological monitoring are primarily tasked with enabling real-time, long-term tracking of vital signs such as blood pressure, respiration, and cardiac activity [130,131]. These sensors are mainly classified into wearable and implantable devices, which are tailored for continuous daily monitoring and precise perception of deep physiological states, respectively. For example, a wearable flexible respiratory sound patch attached to the chest wall acquires the spectral features of airway breath sounds to identify wheezes, thereby enabling real-time monitoring of airway narrowing and early detection of asthma [132] (Figure 6A–C). Wearable blood-pressure sensors detect capacitance changes induced by radial arterial pulsation to obtain pulse pressure waveforms, which are subsequently converted into blood pressure parameters, thus achieving beat-to-beat continuous blood pressure monitoring [133] (Figure 6D). However, such wearable devices still mainly rely on body-surface signals and are susceptible to attenuation through skin conduction and motion artifacts, making it challenging to realize continuous, high-precision monitoring of deeper physiological processes.
To achieve a genuinely fusion of lifeforms system, research is progressively shifting toward implantable platforms. Implantable physiological sensors acquire in situ physiological signals through direct contact with tissues and support closed-loop regulation. In the cardiac system, flexible self-powered sensing and regulation architectures based on triboelectric nanogenerators (TENGs) establish a technological pathway from “cardiac activity monitoring → refined cardiac function identification → myocardial functional intervention and repair.” Early miniature implantable subendocardial pressure sensors (SEPS) employed TENGs to realize in situ, self-powered detection of intracardiac pressure and arrhythmias [28]. The subsequently upgraded gapless Nano-Structured Triboelectric Nanogenerator (NSTENG), which eliminates mechanical spacing, achieves higher sensitivity and enables real-time recording of complete pulse waveforms and subtle myocardial motions [29]. On this basis, the self-generated TENG signals can also be directly used to electrically stimulate cardiomyocytes, promoting cardiomyocyte maturation and functional restoration of myocardial tissue, thereby extending the role of the system from “monitoring” to “therapy” [30].
Beyond the cardiac system, sensors for dynamic physiological monitoring have also demonstrated remarkable potential in other organ systems. In the gastrointestinal tract, Yao et al. developed an implantable self-powered vagus nerve stimulation device [31], in which a nanogenerator is attached to the gastric wall to harvest peristalsis-induced mechanical deformation and generate biphasic electrical pulses that stimulate the vagus nerve to modulate appetite, thereby achieving effective body-weight control. In the urinary system, Hassani et al. integrated a triboelectric sensing module with a shape-memory alloy actuator [32]; the former continuously senses bladder wall tension to determine the filling state and uses this as a trigger signal, while the latter actively compresses the bladder to induce voiding, thus constituting an implantable closed-loop control system capable of self-detection and autonomous urination. In orthopedics, a TENG-based self-powered implantable electrical stimulator upregulates intracellular Ca²⁺ signaling in osteoblasts, promoting cell proliferation and bone matrix formation, thereby providing a long-term therapeutic strategy for osteoporosis and related fractures [33] and contributing to a gradually integrated technological framework that spans from monitoring to therapy [134].
Overall, sensors for dynamic physiological monitoring are evolving from body-surface perception toward in situ tissue sensing and, in the implantable domain, are leveraging self-powered mechanisms such as TENGs to achieve a transition from passive monitoring to closed loop, actively interventional operation. Existing studies in the cardiac, gastrointestinal, urinary, and skeletal systems have demonstrated their potential for long-term stable power supply, high-fidelity acquisition of deep physiological signals, and precise regulation based on physiological feedback, thereby outlining an integrated in vivo pathway that spans “monitoring–identification–regulation–repair.”

3.2.2. Metabolic Process Sensing and Chemical Homeostasis

Sensors for metabolic process recognition infer the body’s metabolic state by tracking changes in key chemical constituents in biofluids or tissues, among which glucose, pH, and dopamine are typical indicators. Glucose-monitoring technologies have evolved from finger-stick blood sampling to continuous glucose monitoring (CGM) and, more recently, to wearable non-invasive sensors. Conventional finger-stick methods rely on disposable test strips and colorimetric analysis, which only provide discrete glucose readings and fail to capture dynamic fluctuations. CGM systems, in contrast, rely on electrochemical sensors implanted in the subcutaneous tissue to continuously detect glucose concentrations in interstitial fluid and wirelessly transmit data, thereby yielding a dynamic glucose profile [129,135]. Commercial CGM devices based on this principle, such as Dexcom G7 and FreeStyle Libre 3, have achieved significant improvements in wear duration, measurement accuracy, and connectivity stability [136]. Recent research has shifted toward non-invasive wearable sensors that analyze sweat or interstitial fluid, utilizing enzymatic or non-enzymatic electrochemical reactions to achieve continuous, needle-free monitoring and thereby improve comfort and user compliance [137]. For instance, recent advances in flexible microfluidic platforms have enabled multiplexed, real-time monitoring of sweat metabolites and electrolytes, such as uric acid, pH, and K⁺, with high sensitivity and mechanical robustness during physical activity [138]. Although some studies suggest that glucose monitoring will ultimately move toward fully non-invasive approaches, such devices currently face challenges in signal calibration and long-term stability and are unlikely to completely replace CGM in the near term. Consequently, CGM systems and non-invasive wearable sensors should be viewed as two parallel developmental pathways: the former continues to advance in accuracy and intelligent data analytics, whereas the latter focuses on enhanced comfort and widespread accessibility [136].
Beyond blood glucose, implantable nanostructured pH sensors employ porous silicon or polymeric interfaces that are sensitive to changes in hydrogen-ion concentration to enable in situ, continuous monitoring of tissue microenvironment pH, thereby supporting the assessment of inflammatory progression, wound healing, and tumor microenvironment remodeling [128]. In addition, microelectrode-based electrochemical sensing technologies can record dopamine redox signals directly in the brain, enabling continuous dynamic monitoring of dopamine levels and providing a basis for endogenous closed-loop stimulation or medication adjustment according to neural state in patients with Parkinson’s disease, depression, and related disorders, with the goal of maintaining stable motor and affective function [139]. These advances are driving real-time perception of human metabolic status and the development of personalized medicine.
Overall, sensors for metabolic process recognition are shifting from external monitoring toward an in vivo cooperative sensing paradigm characterized by in situ operation and long-term coexistence with tissues. By continuously and dynamically tracking metabolic indicators such as blood glucose, pH, and dopamine, these systems can provide real-time feedback and adaptive regulation in response to changes in the internal physiological environment. In this way, sensors are no longer merely external auxiliary devices, but become integrated physiological units that co-participate in the maintenance of bodily homeostasis, embodying a broader trend toward human–device integration.

3.2.3. Neural Signal Decoding and Interaction Interfaces

Sensors for interpreting neural electrical activity record and analyze endogenous brain electrical signals to enable real-time perception of brain functional states and neuromodulatory processes. In contrast to communication-oriented brain–computer interfaces that primarily decode motor intentions (see Section 3.1.6), these systems emphasize “endogenous state sensing,” that is, identifying internal brain states—such as epileptic seizures, levels of consciousness, sleep rhythms, and emotional fluctuations—from neural signals and converting them into observable external readouts or controllable parameters. Implantable brain–computer interfaces (iBCIs) are the core technological paradigm in this field: by implanting high-density microelectrode arrays into the central nervous system to record neuronal spiking activity and decoding it algorithmically, they establish an internal sensing pathway for brain states within the body [140]. To support long-term, stable monitoring of endogenous signals, advances in interface materials and system integration are crucial. Flexible, conductive hydrogel electrodes can match the mechanical modulus of brain tissue, markedly reducing post-implantation inflammatory responses and enhancing signal stability [141,142] (Figure 7A). Nanomaterial-modified electrode interfaces further improve the spatial resolution and signal-to-noise ratio of neural recordings [143]. On the system integration level, organizations such as Neuralink have developed high-throughput, fully implantable recording devices that employ surgical robots to precisely insert kilo-channel flexible electrode threads, enabling parallel wireless acquisition of brain activity across multiple regions. Other studies have demonstrated a 64-channel miniature flexible electrode array encapsulated in a 22 × 22 mm² titanium housing, achieving wireless power transfer and long-term in vivo signal acquisition [144,145] (Figure 7B,C). Collectively, these advances are driving neural interfaces to evolve from external, add-on recording devices into in situ implantable sensing units that can coexist with neural tissue over the long term, continuously capture deep neural state information, and support closed-loop feedback.
Leveraging high-quality neural signal acquisition and intelligent decoding algorithms, neural interfaces are increasingly being used to construct dynamic models of an individual’s internal physiological state and to implement adaptive closed-loop regulation. For example, in epilepsy therapy, responsive neurostimulation systems detect abnormal epileptiform discharge patterns and promptly deliver electrical stimulation to interrupt pathological activity at an early stage of seizure development [146]. Compared with conventional continuous stimulation, such closed-loop strategies significantly reduce seizure frequency while minimizing unnecessary stimulation-related side effects [147]. In the context of consciousness and sleep monitoring, EEG-based anesthesia depth indices have been introduced to assess and modulate the level of consciousness during anesthesia, thereby reducing the risk of intraoperative awareness [148,149]. Emerging work has also begun to decode emotional and cognitive states: implantable devices that monitor activity in specific brain regions can identify neural network signatures associated with conditions such as depression and, when needed, deliver brain stimulation or pharmacological interventions to stabilize patients’ mood [139]. In Parkinson’s disease, pathological β-band oscillations recorded via implanted electrodes serve as biomarkers to drive adaptive deep brain stimulation (DBS), enabling individualized closed-loop control of motor symptoms while improving therapeutic efficacy and reducing stimulation power consumption [150,151]. By integrating in vivo sensing with therapeutic intervention, neural-signal-decoding interfaces are thus transitioning from purely passive monitoring toward active regulation.
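The β-band-driven closed loop can be illustrated with a minimal sketch that estimates band power from a local field potential window and scales stimulation above a patient-specific threshold; the threshold, gain, and current limit are hypothetical placeholders.

```python
# Sketch of a beta-band-driven adaptive DBS loop: estimate 13-30 Hz power
# from a local field potential (LFP) window and apply proportional-above-
# threshold control of stimulation current. Parameters are illustrative.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 500   # LFP sampling rate (Hz)

def beta_power(lfp, fs=FS):
    """Mean squared amplitude of the 13-30 Hz component."""
    b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
    return float(np.mean(filtfilt(b, a, lfp) ** 2))

def stimulation_amplitude(lfp, threshold=0.05, gain=20.0, max_ma=3.0):
    """Proportional-above-threshold control of stimulation current (mA)."""
    excess = beta_power(lfp) - threshold
    return float(np.clip(gain * excess, 0.0, max_ma))

lfp_window = np.random.randn(FS)   # 1 s stand-in LFP segment
print(stimulation_amplitude(lfp_window))
```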
Taken together, neural-signal-decoding and interaction interfaces play a unique role within the fusion of lifeforms-system framework—rather than serving to read out thoughts for controlling external devices, they function as internal “sense organs” and “regulators” of the body, continuously tracking fluctuations in intrinsic brain states and providing adaptive feedback control. With the convergent development of implantable/non-invasive neural sensors and artificial intelligence algorithms, such interfaces are poised to become key components for maintaining neural homeostasis, forecasting pathological events, and delivering personalized interventions, thereby achieving deep integration of human–machine systems at both structural and functional levels.

3.2.4. Multimodal Sensing and System Integration

Multi-sensor networks refer to configurations in which wearable and implantable sensors are wirelessly interconnected to form an Internet of Bodies (IoB) or wearable–implantable body sensor networks (WIBSNs), thereby enabling multimodal cooperative sensing of vital signs, biochemical markers, and physiological signals [152]. Within wearable devices, substantial progress has been made in the high-level integration of multiple sensing units. For example, Yoon et al. integrated sweat-glucose (Figure 8C), potentiometric pH (Figure 8D), temperature, and dry-electrode electrocardiogram (ECG, Figure 8B) sensors into a single flexible skin patch (Figure 8A), enabling multimodal, synchronous monitoring of metabolic status and cardiac electrophysiological activity, and supporting real-time interaction with bodily signals for continuous surveillance and dynamic management of chronic metabolic diseases [153]. Ma et al. developed a smart contact lens that employs enzymatic electrochemical biosensing electrodes to read metabolite concentrations in tears and pressure-/capacitance-based structures to sense corneal deformation associated with intraocular pressure, thereby achieving non-invasive, real-time monitoring of glucose, lactate, and intraocular pressure and providing a continuous-monitoring modality for metabolic disorders [154].
Building on these developments, multi-sensor networks are further being extended to cross-layer cooperation between wearable and implantable devices, which are wirelessly interconnected to form an integrated body-sensing network. For instance, implantable sensors that monitor glucose, pH, or neural electrical activity in vivo can be linked via data connections to external wearable devices, enabling information integration across tissue layers [155]. In such architectures, in-body sensors are responsible for in situ acquisition of deep physiological signals, on-body devices serve as relays and provide auxiliary monitoring, and mobile terminals and cloud platforms execute analysis and feedback. Together, they establish a closed-loop pathway of “in-body sensing—on-body interaction—cloud-based decision-making,” enabling continuous cross-tissue information fusion and dynamic regulation.
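This three-layer closed-loop pathway can be sketched as a toy message pipeline; the message format and decision rule are hypothetical, and real deployments add encryption, buffering, and medical-grade validation at every hop.

```python
# Toy sketch of the "in-body sensing -> on-body relay -> cloud decision"
# loop described above. All field names and the threshold are hypothetical.

import json

def in_body_sample():
    """Implant layer: in situ acquisition of a deep physiological signal."""
    return {"sensor": "glucose", "value_mmol_l": 9.2}

def on_body_relay(sample):
    """Wearable layer: annotate and forward the reading."""
    return json.dumps({**sample, "relay": "wrist_hub", "battery_pct": 81})

def cloud_decide(message):
    """Cloud layer: analysis and feedback (toy threshold rule)."""
    data = json.loads(message)
    return "alert_and_adjust" if data["value_mmol_l"] > 8.0 else "ok"

print(cloud_decide(on_body_relay(in_body_sample())))   # alert_and_adjust
```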
Taken together, in-body sensors are evolving from localized signal acquisition devices into active sensing and regulation systems that can reside in the body over the long term and operate in coordination with physiological functions. Physiological dynamic-monitoring modules enable deep, time-resolved perception that extends from the body surface to in situ tissue sensing; metabolic process recognition modules provide continuous tracking of internal chemical homeostasis; neural electrical activity decoding modules achieve real-time reading and writing of neural circuit states; and multi-sensor networks further link in-body and on-body devices, as well as local and systemic levels, into a unified information-circulation pathway. In this way, sensors are evolving from externally attached tools into integrated functional units within the organism. These units actively participate in maintaining physiological homeostasis, enabling cooperative regulation and internal feedback control. This evolution signifies a paradigm shift from “devices serving the organism” to “devices becoming constituent parts of the organism,” thereby advancing the development of true fusion of lifeforms.

3.3. Suprasensory Augmentation and Channel Mapping

This class of technologies aims to introduce new senses that extend beyond the native human repertoire. Sensory dimensions that are normally inaccessible to humans—such as infrared and ultraviolet radiation, geomagnetic and electromagnetic fields, and ultrasound—are mapped onto existing tactile, auditory, or visual channels (Table 2). Through training that exploits neural plasticity, users can form stable perceptual representations and achieve quantifiable task benefits.
A representative example is geomagnetic sensing. The feelSpace vibrotactile belt uses a ring-shaped array of tactors around the waist to continuously indicate magnetic north [156]. In a study in which nine participants trained for up to 15 months in natural environments, users developed a stable “sense of north” and showed significant improvements in orientation and navigation tasks. This work directly demonstrates that sensorimotor contingencies can be learned and transferred [165,166].
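The underlying heading-to-tactor mapping is simple enough to sketch directly; the tactor count below is an assumed value rather than the belt's actual configuration.

```python
# Sketch of a feelSpace-style mapping from compass heading to the active
# vibrotactile element: the tactor closest to magnetic north vibrates.

N_TACTORS = 16   # ring of tactors around the waist (assumed count)

def north_tactor(heading_deg):
    """Index of the tactor pointing toward magnetic north.

    heading_deg is the wearer's orientation relative to north, so the
    tactor at body-frame bearing -heading_deg marks north.
    """
    bearing = (-heading_deg) % 360.0
    return int(round(bearing / (360.0 / N_TACTORS))) % N_TACTORS

for h in (0, 90, 180, 270):
    print(h, north_tactor(h))   # 0 -> tactor 0; 90 -> tactor 12; ...
```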
Infrared thermography, which is already widely used in night vision and medical thermal imaging, has also been formalized as a new sensory source [167]. In animal models, researchers have mapped infrared intensity onto electrical stimulation of the primary somatosensory cortex (S1) at different frequencies [157]. Rats learned to detect infrared cues, and the new modality coexisted with native touch rather than replacing it, effectively creating an additional sensory channel. Clinically, for users of visual prostheses such as Argus II, thermal cameras have been integrated into the visual pipeline [158]. The infrared input is semantically simplified and fused with the existing visual stream. This approach improves night-time navigation, human-body detection, and environmental awareness. A key principle is to replace high-redundancy video streams with low-bandwidth, high-value infrared features, thereby improving the signal-to-noise ratio at the source and reducing the decoding burden on the cortex.
Overall, suprasensory systems extend human and animal capabilities beyond their natural sensory range, allowing users to exploit otherwise inaccessible cues for navigation and interaction with the environment. To make these “extra” senses reliable in daily use, the whole chain—from sensing-source selection and feature compression to channel mapping and training—must be designed as an integrated system that yields stable, behaviorally meaningful improvements.

3.4. Cognitive Enhancement and Intelligent Integration

As a higher-level manifestation of fusion of lifeforms, cognitive enhancement aims to overcome the biological brain’s intrinsic limitations through deep integration of neural interfaces with external intelligence, thereby achieving generational leaps in higher-order cognitive functions such as learning, memory, and decision-making. This endeavor centers on constructing a bidirectional closed-loop cognitive coupling system: internal neural activity is continuously sensed and decoded in real time, while targeted stimulation or cloud-based intelligent feedback is delivered to the brain, optimizing neural plasticity and information-processing efficiency [168]. The field has evolved from rudimentary brain-state decoding toward advanced fusion pathways, including cognitive capacity enhancement, memory modulation, cloud-based knowledge collaboration, and multi-brain cooperation (Table 2). Key metrics to evaluate the functional gains of these systems include improvements in memory recall (e.g., Rey AVLT, Digit Span test), decision-making efficiency (e.g., Iowa Gambling Task), and learning speed (e.g., learning task completion time). Furthermore, neural plasticity can be assessed through changes in brain activity, measured via EEG or fMRI, in response to real-time feedback and closed-loop stimulation. Together, these technologies are shifting cognitive augmentation from a purely biological paradigm to a symbiotic human–machine integration paradigm, endowing fusion of lifeforms with a core engine of intelligence.

3.4.1. Cognitive Enhancement and Symbiotic Regulation

To achieve deep integration of the “fusion of lifeforms” at the cognitive level, BCI technologies are steering the transition from traditional external auxiliary stimulation toward a new paradigm of long-term symbiotic regulation with the brain. By constructing closed-loop systems that couple neural signal decoding with real-time feedback, this approach enables individuals to learn to modulate their own brain activity, thereby enhancing attention, emotional regulation, and memory, and ultimately achieving active intervention in cognitive function [168]. Studies have shown that physical neuromodulation techniques such as transcranial magnetic stimulation, deep brain stimulation, and focused ultrasound can improve cognitive function and alleviate neurological symptoms by modulating neural circuits involved in attention, memory, and decision-making, leveraging mechanisms of neural plasticity [169]. In particular, flexible neuromorphic electronics supporting near-sensor and in-sensor computing paradigms offer bio-inspired data compression and parallel processing, paving the way for seamless cognitive fusion in wearable and implantable platforms [170].
Building on these mechanisms, related technologies have been extended to multiple application scenarios closely linked to functional restoration of the organism. Tsai et al., for example, conducted closed-loop neurofeedback training in older adults, during which participants received real-time EEG feedback while performing tasks; the results showed significant improvements in attention, working memory, and executive control, indicating that feedback-based regulation can directly enhance cognitive performance. On this basis, Peterson et al. introduced a co-adaptive decoding framework in motor imagery tasks. When the system dynamically updated its parameters according to the subject’s neuromodulation performance, increased inter-class separability of neural representations and improved decoding accuracy were observed, demonstrating that closed-loop regulation can drive the brain to proactively reorganize its own activity patterns [160]. In addition, Matt et al. applied transcranial pulsed ultrasound to patients with Alzheimer’s disease; the intervention group exhibited significantly higher scores on cognitive scales than the control group and showed enhanced activation of attention–memory networks on functional MRI. These findings provide circuit-level evidence of plasticity and indicate that the aforementioned representational reshaping can be consolidated at the network level [161].
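A minimal neurofeedback loop of the kind described above can be sketched as follows; the band choice, window length, and baseline procedure are illustrative assumptions rather than the protocols of the cited studies.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of one EEG channel in the [lo, hi] Hz band."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))
    return float(pxx[(f >= lo) & (f <= hi)].mean())

def feedback_score(window: np.ndarray, fs: float, baseline: float) -> float:
    """Bounded score fed back to the participant: positive when theta
    (4-8 Hz) power exceeds the per-user baseline, negative below it."""
    return float(np.tanh(band_power(window, fs, 4.0, 8.0) / baseline - 1.0))

rng = np.random.default_rng(0)
calib = rng.standard_normal(int(10 * 250))        # 10 s calibration EEG
base = band_power(calib, 250.0, 4.0, 8.0)
window = rng.standard_normal(int(2 * 250))        # 2 s online window
print(f"feedback score: {feedback_score(window, 250.0, base):+.2f}")
```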
Taken together, the technological trajectory of cognitive enhancement is shifting from short-term exogenous stimulation or behavioral compensation toward a long-term symbiotic regulation mechanism jointly driven by neural learning and circuit plasticity. Within this mechanism, the brain forms new neural representations through feedback, while external systems consolidate these representations via circuit-level modulation so that the maintenance of cognitive function no longer depends on transient interventions but is jointly sustained and continuously reshaped by both human and machine. As a result, cognitive processes progressively transition from passive responses to open, co-regulated dynamics, laying the foundation for the long-term stable operation of the “fusion of lifeforms” at the cognitive level.

3.4.2. Memory Enhancement and Precision Intervention

At the level of memory enhancement and precision intervention, implantable brain–computer interfaces and deep brain stimulation are shifting from mere symptomatic relief toward “co-writing” memory traces within critical windows of memory formation. Current studies have shown that it is possible to record activity patterns in the hippocampus–medial temporal lobe and associated neocortical regions while individuals perform episodic memory tasks, use machine learning models to distinguish “high-memory” from “low-memory” states, and trigger closed-loop stimulation only when encoding falls into a low-efficiency state, thereby significantly improving subsequent recall performance [162,171]. In primate and human work on “hippocampal cognitive prostheses,” researchers have employed multi-input–multi-output (MIMO) models to reconstruct ensemble firing patterns along the CA3 → CA1 pathway and then replay the predicted encoding trajectories via electrical stimulation, partially restoring or even enhancing working memory and delayed matching performance under conditions of hippocampal damage or task interference [163,172].
Going a step further, closed-loop stimulation systems based on phase-locking algorithms can track theta oscillations in hippocampal–cortical networks in real time and deliver brief stimulation at the phase of maximal excitability. Experiments demonstrate that such phase-precise interventions improve memory performance more effectively and with fewer side effects than continuous open-loop stimulation [164,173]. Recent reviews and quantitative analyses also indicate that memory-enhancing BCIs are moving from simple continuous stimulation toward adaptive closed-loop control triggered by electrophysiological biomarkers: on the one hand, high-density recording and modeling are used to capture individualized memory-encoding dynamics; on the other hand, minimal-dose stimulation is applied within appropriate spatiotemporal windows to reshape plasticity, thereby directly boosting the formation and retrieval of declarative memory without relying on long-term behavioral training. From the perspective of fusion of lifeforms, these approaches effectively outsource part of the “memory-writing” process to neural interfaces and algorithms. This transformation turns memory from a purely endogenous function into a human–machine co-executed physiological process, providing an experimentally verifiable pathway for precise interventions in both memory-disorder treatment and cognitive enhancement for healthy individuals [174].
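The phase-locking idea can be illustrated offline in a few lines: band-pass filtering plus a Hilbert transform yields the instantaneous theta phase, and stimulation is gated to samples near a target phase. A real closed-loop system would additionally have to predict phase forward to compensate for loop latency; all parameters here are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_phase(lfp: np.ndarray, fs: float) -> np.ndarray:
    """Instantaneous theta (4-8 Hz) phase via band-pass + Hilbert transform
    (acausal, offline illustration only)."""
    b, a = butter(4, [4.0 / (fs / 2), 8.0 / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, lfp)))

def stim_trigger(phase: np.ndarray, target: float = 0.0,
                 tol: float = 0.2) -> np.ndarray:
    """Boolean mask of samples close enough to the target phase to fire
    a brief stimulation pulse."""
    return np.abs(np.angle(np.exp(1j * (phase - target)))) < tol

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 6.0 * t) + 0.3 * np.random.randn(t.size)
mask = stim_trigger(theta_phase(lfp, fs))
print(f"{mask.sum()} candidate trigger samples in 2 s")
```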

3.4.3. Cloud Intelligence and Cognitive Extension

In the realm of “cloud intelligence and cognitive extension,” brain–cloud cooperation is regarded as a key pathway for embedding the individual brain into a distributed intelligent system. The basic concept is as follows: brain signals are acquired via wearable EEG or implantable BCIs, undergo preliminary preprocessing and encryption at local edge nodes, and are then uploaded to the cloud, where large-scale AI models perform pattern recognition and state estimation. The decoded results or optimized control/stimulation parameters are subsequently transmitted back and fed into the cognitive process via neural stimulation or environmental feedback. In this way, a closed-loop cooperative architecture of “human brain–edge computing–cloud intelligence” is established [175].
For example, Rizzo et al. designed a cloud-based brain–computer interface system driven by steady-state visual evoked potentials (SSVEP) [176]. Using wearable EEG devices in combination with an embedded edge platform (Raspberry Pi 4), they achieved real-time interactive control of a cognitive building environment and demonstrated that support vector machines and random forest algorithms can effectively perform SSVEP classification on the edge device, with accuracies exceeding 97%. At a deeper neuromodulation level, the fields of memory enhancement and deep brain stimulation have proposed the concept of a “brain co-processor”: through wireless bidirectional communication, implantable recording/stimulation devices are integrated with smartphones and cloud computing, allowing large-scale neural data to be continuously analyzed in the cloud and stimulation strategies to be adaptively adjusted based on electrophysiological biomarkers, thereby enabling long-term, fine-grained regulation of memory and cognitive functions [173,177] (as shown in Figure 9).
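For intuition about the edge-side classification step, the sketch below trains a support vector machine on synthetic SSVEP-like features (spectral power elevated at the attended flicker frequency). It is a toy stand-in for the cited system, not a reproduction of it; the flicker frequencies and feature model are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
flicker_hz = [8.0, 10.0, 12.0]        # hypothetical stimulus frequencies

def synth_features(target: int, n: int) -> np.ndarray:
    """Synthetic feature vectors: PSD magnitude at each flicker frequency,
    elevated at the attended one (a stand-in for real EEG features)."""
    x = rng.gamma(2.0, 1.0, size=(n, len(flicker_hz)))
    x[:, target] += 3.0
    return x

X = np.vstack([synth_features(k, 200) for k in range(3)])
y = np.repeat(np.arange(3), 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print(f"edge-side SSVEP accuracy: {clf.score(Xte, yte):.2f}")
```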
At the theoretical level, the “human brain/cloud interface” proposed by Martins and colleagues sketches a visionary blueprint for tightly coupling the human brain with cloud-based artificial general intelligence (AGI) systems. It emphasizes that, by integrating ultra-high-bandwidth neural interfaces with cloud computing resources, it may eventually become possible to realize cross-individual knowledge sharing and amplification of collective intelligence while simultaneously highlighting profound risks related to neural privacy, security, and personal agency [178].
Overall, cloud intelligence and cognitive extension provide fusion of lifeforms with a technical pathway akin to an “external intelligent cortex.” This shifts cognitive enhancement beyond local brain region stimulation toward a continuously co-regulated cognitive ecosystem, shared between humans and distributed cloud intelligence.

3.4.4. Inter-Brain Collaboration and Collective Intelligence

As a key pathway for realizing multi-brain collaborative sensing and cognition within the “fusion of lifeforms,” brain-to-brain interfaces (BBIs) combine BCIs with computer–brain interfaces (CBIs) to establish direct information-transfer channels between brains. In doing so, they bypass the constraints of conventional language and motor behavior and enable collaboration at the neural level. In recent years, BBI technologies have rapidly evolved from early unidirectional animal experiments into comprehensive systems encompassing non-invasive human–human communication, cross-species bidirectional control, and high-precision neuromodulation [179].
In the domain of non-invasive human brain communication, systems that integrate EEG with transcranial magnetic stimulation (TMS) have demonstrated the transmission of information at the level of conscious awareness between individuals. Grau and colleagues decoded a sender’s motor intention using EEG and induced phosphenes in a receiver’s visual cortex via TMS, achieving the first Internet-mediated brain-to-brain communication [180]. Building on a similar architecture, Jiang and co-workers developed “BrainNet,” which enables multiple participants to engage in collaborative decision-making via inter-brain information exchange, marking a new stage of multi-brain cooperation [121]. In the direction of cross-species bidirectional control, Yoo [181] and Zhang [182] independently demonstrated real-time human control of rat tail movements and maze navigation, respectively, with intracortical microstimulation (ICMS) delivering somatosensory feedback from the animal back to the human operator, as schematically illustrated in Figure 10. Together, these studies provide an initial bidirectional “perception–action” closed loop across species and lay the groundwork for cross-species fused perception. In parallel, advances in high-precision neuromodulation and emerging brain network technologies are substantially enhancing BBI performance: Lee and colleagues replaced TMS with focused ultrasound (FUS) to achieve more spatially precise somatosensory activation of specific brain regions [122], whereas Lu and co-workers used optogenetics combined with optical fiber recording to construct an “optical BBI” between mice, enabling ultra–high-speed transmission of motor information—two to three orders of magnitude faster than conventional electrophysiological approaches—and greatly expanding information throughput and response speed [183].
Overall, BBI technologies are progressing along three major axes: increasing non-invasiveness, cross-species integration, and high-precision, high-speed operation. This trajectory is gradually blurring the boundaries between individual brains and provides crucial technical support for constructing fusion of lifeforms in which perception and cognitive resources are deeply interconnected and shared at the neural level. At the same time, integrating heterogeneous systems while ensuring stable, safe, and efficient multi-layered sensory fusion remains a central challenge for future research.
Collectively, the various technological pathways for cognitive enhancement—from neuromodulation and memory intervention to cloud-based integration and inter-brain collaboration—are reshaping the cognitive architecture of fusion of lifeforms at multiple levels. Through real-time interactive sensing–regulation closed loops, these approaches not only optimize higher-order brain functions such as attention and working memory but also embed individuals within distributed intelligent networks, enabling on-demand allocation and leapfrog expansion of cognitive resources. This progression marks the transition of cognition from a closed, purely intracranial process to an open system defined by deep cooperative coupling between neural and machine intelligences, thereby establishing the cognitive-level foundation of fusion of lifeforms.

4. Interface Integration, System-Level Challenges, and Future Directions

4.1. Technical Challenges and Bottlenecks

4.1.1. Challenge 1: Complexity of Multi-Source Heterogeneous Sensing Fusion

Fusion of lifeform systems typically include multiple heterogeneous sensing and stimulation devices distributed inside and outside the body (as shown in Figure 11). Coordinating their timing and data fusion is inherently difficult. Sensing nodes at different locations often operate at different sampling rates and experience different transmission delays. Precise synchronization is required to maintain the stability of closed-loop control [184]. Calibration is also complex. Each sensor must be tuned to the individual’s physiology, while long-term implantation leads to device aging, signal drift, and scar-tissue formation, which gradually invalidate the original calibration parameters [185,186,187,188,189]. As a result, data semantics across sensing channels are difficult to unify, and signals from different modalities cannot be directly compared or fused. In addition, human–machine co-adaptation is a slow process. The user’s brain must learn how to integrate novel stimulation patterns from artificial devices [190,191], and the sensing–stimulation system must, in turn, adapt its encoding strategies and stimulation thresholds via machine learning, based on physiological feedback [192,193]. Together, these factors make real-time, reliable multimodal fusion a major obstacle to further system optimization.
At a deeper level, current multimodal heterogeneous sensing frameworks lack unified data semantics and interface standards, as well as robust mechanisms for cross-modal fusion and self-calibration. This makes temporal–spatial alignment and uncertainty quantification across channels and devices extremely difficult [187,194]. During long-term operation, the statistical properties of different modalities are nonstationary and exhibit strong inter-individual variability. Static calibration and fusion models cannot cope with these changes. The core bottleneck lies in the absence of a robust cross-modal fusion and automatic calibration pipeline that can operate under multi-rate sampling, time-varying noise, and uncertain delays. Typical limitations include unmodeled delays, covariance mismatch, and domain shifts across users and environments, all of which reduce the potential benefits of multi-source information complementarity. In summary, achieving truly real-time, adaptive, and cross-modal sensing fusion remains a central technical challenge.
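To make the alignment problem concrete, the following sketch fuses a fast, noisy wearable stream with a slower, delayed implant stream via delay compensation, resampling to a common clock, and inverse-variance weighting. The delay and noise variances are assumed known here, whereas a deployed system would have to estimate them online and track their drift.

```python
import numpy as np

def align_and_fuse(t_a, x_a, t_b, x_b, delay_b, var_a, var_b, fs_out=50.0):
    """Compensate stream B's transmission delay, resample both streams onto
    a shared clock, and fuse them by inverse-variance weighting."""
    tb_true = t_b - delay_b                       # when B's samples occurred
    t0, t1 = max(t_a[0], tb_true[0]), min(t_a[-1], tb_true[-1])
    t = np.arange(t0, t1, 1.0 / fs_out)           # shared output clock
    a = np.interp(t, t_a, x_a)
    b = np.interp(t, tb_true, x_b)
    w = var_b / (var_a + var_b)                   # weight on the cleaner stream
    return t, w * a + (1.0 - w) * b

truth = lambda t: np.sin(2 * np.pi * 0.5 * t)
t_a = np.arange(0.0, 10.0, 1 / 100)               # 100 Hz wearable channel
t_b = np.arange(0.0, 10.0, 1 / 20) + 0.05         # 20 Hz implant, 50 ms late
x_a = truth(t_a) + 0.1 * np.random.randn(t_a.size)
x_b = truth(t_b - 0.05) + 0.3 * np.random.randn(t_b.size)
t, fused = align_and_fuse(t_a, x_a, t_b, x_b, 0.05, 0.1**2, 0.3**2)
print(f"{t.size} fused samples, RMS error "
      f"{np.sqrt(np.mean((fused - truth(t))**2)):.3f}")
```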

4.1.2. Challenge 2: Bandwidth and Latency Limits of In-Body Information Transfer

Current human–machine interfaces are far behind biological neural pathways in terms of information throughput and real-time performance. A key reason is that invasive devices can only provide a limited number of channels and sampling rates. These constraints arise from fabrication limits and strict safety requirements on power and charge injection [8]. As a result, it is difficult to approach the scale of millions of neurons communicating in parallel in the brain. This limitation directly caps the achievable perceptual resolution and control accuracy. For example, the image resolution provided by existing visual prostheses is still extremely low and far from normal vision [9].
Wireless data links between implanted and external units introduce additional bottlenecks. Available bandwidth is limited and link stability is imperfect, with a non-negligible risk of packet loss [195]. For applications that require fast closed-loop feedback, these constraints can become critical [196]. Signal transmission across multiple devices also introduces nontrivial latency, which hinders immediate responses [197]. When several sensing and stimulation modules must operate in coordination, delays at each stage accumulate and may destabilize the control loop. In severe cases, such delay-induced effects can even lead to positive-feedback oscillations.
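A simple latency budget illustrates how quickly stage delays accumulate; every figure below is hypothetical.

```python
# Hypothetical latency budget for one sensing -> decision -> stimulation loop.
stages_ms = {
    "adc_buffering":       8.0,  # sensor-side sample buffering
    "implant_to_relay":   12.0,  # wireless uplink, incl. retransmissions
    "relay_to_host":       5.0,
    "decode_inference":   15.0,  # feature extraction + model inference
    "host_to_stimulator": 10.0,  # downlink + command parsing
    "charge_ramp":         4.0,  # stimulator output settling
}
total = sum(stages_ms.values())          # 54 ms end to end
deadline = 50.0                          # assumed closed-loop requirement
print(f"end-to-end latency {total:.0f} ms vs deadline {deadline:.0f} ms")
print("deadline violated" if total > deadline else "within deadline")
```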

4.1.3. Challenge 3: System-Level Power Supply and Thermal Management

Fusion of lifeform systems are expected to operate continuously over long periods, yet powering and cooling implanted devices remains a major challenge [12]. Implantable batteries are constrained by limited volume. Their energy capacity and lifetime are therefore restricted, and frequent surgical replacement is clearly undesirable. Wireless power transfer is a promising alternative, but coupling efficiency and energy absorption in biological tissue limit its ability to supply multiple in-body nodes with stable, sufficient power [13].
Many high-performance functions, such as high-speed wireless communication and high-density signal acquisition, are associated with substantial power consumption [198,199]. This power leads to device heating. If the generated heat cannot be dissipated effectively, local tissue temperature may rise to damaging levels. The allowable power density in biological tissue is generally on the order of 80 mW/cm² [200], so the temperature rise caused by implanted devices must be strictly controlled. Wireless telemetry is one of the most energy-hungry subsystems [201,202]. Radio-frequency coils used for power and data transfer can cause tissue heating, and their size strongly affects coupling efficiency [203]. When coils are miniaturized to fit into constrained anatomical spaces, energy-transfer efficiency decreases sharply. To meet power demands, the external transmitter must then operate at higher power, which further increases the risk of heating [204].
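A back-of-the-envelope check against this limit, with purely illustrative device numbers, looks as follows.

```python
# Sanity check against the ~80 mW/cm^2 tissue power-density limit for a
# hypothetical implant; draw, efficiency, and area are illustrative only.
p_draw_mw  = 45.0           # average electrical power drawn by the device
efficiency = 0.35           # fraction doing useful work; the rest is heat
area_cm2   = 0.6 * 0.8      # heat-dissipating surface in contact with tissue

heat_mw = p_draw_mw * (1.0 - efficiency)
density = heat_mw / area_cm2
print(f"dissipated heat: {heat_mw:.1f} mW over {area_cm2:.2f} cm^2")
print(f"power density:   {density:.1f} mW/cm^2 (limit ~80 mW/cm^2)")
```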
The internal human environment also imposes severe mechanical and material constraints. Available space is limited, and surrounding tissues are soft and curved. Power-delivery components therefore need to be highly miniaturized, mechanically compliant, and made from biocompatible materials. For devices designed for short- or medium-term use, it is preferable that the power module and other components be fully bioresorbable after completing their function so that no second surgery is needed for removal.
At present, implanted power systems struggle to meet simultaneous requirements on power level, temperature control, and device volume. Balancing these trade-offs is difficult. Enhancing power delivery capability while limiting heat generation and satisfying stringent implantation constraints constitutes a major bottleneck. This challenge calls for new energy-harvesting, storage, and thermal management strategies tailored to long-term in vivo operation.

4.1.4. Challenge 4: Tension Between Biocompatibility and Long-Term Reliability

Long-term implantation of artificial devices inevitably triggers biological reactions that degrade performance over time. The first issue is foreign-body response and tissue encapsulation. Electrodes and sensors in the body are often surrounded within weeks by fibrotic capsules or glial scar tissue [10,11]. This increases impedance and weakens signal transfer. Recorded signals gradually decline, and the current threshold required for stimulation increases [205,206,207]. The central nervous system and peripheral tissues differ in how they respond, but both tend to isolate implants through gliosis or fibrosis, which reduces device effectiveness [208].
Mechanical mismatch and micromotion-induced damage are additional long-term factors. Rigid electrodes have a much higher elastic modulus than surrounding soft tissues. Daily movements then generate small but repeated injuries and chronic inflammation. Micromotion of the implant can also tug on leads, loosen electrode contact, or even cause fractures [209,210]. At the same time, each component of an implanted device has its own failure modes. If the encapsulation layer loses integrity, body fluids can penetrate and cause short circuits and corrosion [211,212]. Electrode materials may dissolve or wear due to long-term electrochemical reactions [213,214]. Leads subjected to tens of thousands of bending cycles can undergo metal fatigue and break [215].
These problems have been repeatedly observed in long-term clinical use and significantly limit the service life and reliability of current implants. Researchers are trying to mitigate these conflicts through advances in materials and fabrication. Examples include ultra-flexible electrodes [216], bioresorbable devices [217,218,219], and improved encapsulation and surface coatings to enhance biological stability [220]. However, building deeply integrated systems capable of stable in vivo operation for decades will require fundamentally new solutions to chronic material–tissue interface compatibility.

4.1.5. Challenge 5: Safety and Reliability in Complex Fusion Systems

When multiple artificial devices operate as a network inside the body, system complexity increases dramatically [221,222], and ensuring reliability becomes extremely difficult. Devices may interfere with each other electromagnetically, especially when several wireless modules are active at the same time. Sensing loops and stimulation loops must also be designed to avoid mutual interference [223]. In complex architectures, cascading failures are more likely. A malfunction in any single subsystem can degrade overall performance or, in extreme cases, endanger the user’s life [224].
At present, there is no unified set of interface standards or communication protocols for such human–machine fusion systems. Devices developed by different groups are hard to interconnect or interoperate. This lack of standardization limits large-scale deployment and long-term upgradability. Evaluation benchmarks for multimodal fusion systems are also missing. Traditional performance metrics, such as improvement in a single-organ function, cannot fully capture the overall effect of multimodal coordination. Cross-domain behavioral assessment methods are still not standardized. As a result, outcomes from different experimental platforms are difficult to compare or reproduce, which slows technical iteration and clinical translation.
Potential safety risks add further complexity [225,226]. Runaway behavior in closed-loop systems is a major concern. If a sensor reports incorrect values or a control algorithm makes an erroneous decision, continuous automatic stimulation may overcompensate a physiological parameter and cause new instabilities. For example, failure in closed-loop control of an insulin pump can lead to dangerous hypoglycemia [227]. Fusion systems therefore need built-in redundancy checks and safety thresholds. When abnormal states are detected, the system should enter a safe mode, shut down stimulation, or issue an alarm.
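A minimal plausibility gate of this kind might look as follows; the glucose thresholds are illustrative placeholders, not clinical values.

```python
from dataclasses import dataclass

@dataclass
class SafetyLimits:
    lo: float = 70.0         # mg/dL, hypothetical lower bound
    hi: float = 250.0        # mg/dL, hypothetical upper bound
    max_step: float = 15.0   # max plausible change per sample

def check_sample(prev: float, curr: float, lim: SafetyLimits) -> str:
    """Plausibility check before any actuation: out-of-range readings or
    physiologically implausible jumps force safe mode (dosing halted,
    alarm raised) instead of automatic compensation."""
    if not (lim.lo <= curr <= lim.hi):
        return "SAFE_MODE: reading out of range"
    if abs(curr - prev) > lim.max_step:
        return "SAFE_MODE: implausible rate of change (sensor fault?)"
    return "OK: proceed with closed-loop dosing"

lim = SafetyLimits()
print(check_sample(110.0, 112.0, lim))
print(check_sample(112.0, 55.0, lim))   # sudden drop -> halt and alarm
```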
As implants become networked and more interconnected, cybersecurity also becomes critical. Devices may be vulnerable to hacking or unauthorized modification, with potentially severe consequences. Ensuring the safety and reliability of such complex hybrid bio–machine systems will require comprehensive governance mechanisms. This is one of the key challenges that will determine whether fusion of lifeform technologies can ultimately earn trust and achieve large-scale deployment.

4.2. Future Directions

4.2.1. Building Layered, Heterogeneous In-Body Intelligent Communication Networks

To address the information-transfer bottlenecks discussed in Section 4.1, future interface systems are likely to develop along two parallel communication paths: high-speed short-range optical links, and low-loss ultrasound links for deep or distant tissues. These two modalities are complementary and can jointly support an in-body high-speed network.
For optical communication, studies have shown that near-infrared links inside the body can markedly increase data rates. Light experiences relatively low absorption and electromagnetic interference in biological tissues, which allows for more efficient transmission [228]. Recent experiments in tissue-mimicking media have demonstrated data rates above 100 Mbps over sub-centimeter distances [123,229]. Data rates on the order of 100 kbps have also been achieved over sub-decimeter links [230]. In the future, miniaturized semiconductor laser or LED arrays, combined with modulation schemes such as wavelength-division multiplexing, may further expand bandwidth [229]. This would enable high-speed, short-range optical links between dense clusters of implants and allow for “optical routing” nodes inside the body that aggregate data from multiple sensors and forward these to a subcutaneous relay. To mitigate scattering and improve channel stability, transparent cranial windows [231,232,233] or refractive-index-matched implant materials [234,235] could provide stable optical paths.
For longer distances across organs or to deeper regions, ultrasound is more advantageous. Ultrasound attenuates much less than RF waves in tissue and can also be used for power transfer. This makes it a strong candidate to replace RF for in-body backbone communication [236,237]. Proof-of-concept studies have already achieved data rates above 30 Mbps through about 5 cm of tissue and have shown that the same ultrasonic link can deliver both data and power [238]. Future work may employ array-based ultrasonic transducers with spatial multiplexing to support parallel communication with multiple nodes [239,240]. New waveguide structures and epidermal acoustic channels based on flexible materials could act as in-body or on-skin conduits for ultrasound, enabling low-loss communication over tens of centimeters without increasing tissue damage [239,241].
In such architectures, high-speed optical links would handle real-time data exchange among local high-density nodes, while ultrasound would form a whole-body backbone network. Joint design of modulation and coding schemes, beamforming control, and routing protocols would allow for seamless switching and fusion between the two media. The resulting infrastructure could provide the “high bandwidth + low latency + wide coverage + low power” profile required by fusion of lifeforms. It would support coordinated operation of multimodal devices, help solve cross-modal data-synchronization issues highlighted in Challenge 1, and improve overall system reliability relevant to Challenge 5.
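As a toy illustration of such media selection, the rule below chooses between the two link types using the rough figures cited above (>100 Mbps optical over sub-centimeter paths, ~30 Mbps ultrasound through about 5 cm of tissue); a real router would also weigh power, heating, and channel stability.

```python
def pick_link(distance_cm: float, rate_mbps: float) -> str:
    """Toy medium-selection rule for a layered in-body network."""
    if distance_cm < 1.0 and rate_mbps <= 100.0:
        return "optical short-range link"
    if distance_cm <= 5.0 and rate_mbps <= 30.0:
        return "ultrasonic backbone link"
    return "store-and-forward via intermediate relay node"

print(pick_link(0.5, 80.0))    # dense local cluster -> optical
print(pick_link(4.0, 10.0))    # cross-organ hop -> ultrasound
print(pick_link(12.0, 5.0))    # beyond single-hop range -> relay
```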

4.2.2. Sustainable and Adaptive Power and Thermal-Management Strategies

The power and thermal challenges outlined in Challenge 3 ultimately arise from physical limits on energy density and heat dissipation. A promising direction is to develop sustainable and adaptive in vivo power and thermal management strategies. In such systems, multiple energy sources and cooling techniques are integrated, and power delivery is dynamically adjusted according to physiological state, forming an energy framework that “coexists” with the host.
On the supply side, one goal is to improve the efficiency and safety of implanted wireless power transfer [242]. Magnetic resonant coupling and ultrasound-based power delivery are two representative approaches. They can provide continuous energy to multiple implanted nodes without disturbing the surrounding tissue. In parallel, in vivo energy harvesting can partially support self-powered operation of implants. Glucose biofuel cells are a particularly attractive option. Studies have shown that a single implanted enzymatic fuel cell can continuously harvest tens of microwatts from glucose in body fluids. This is sufficient to light an LED or drive small sensors and can operate in rats for several months without obvious immune rejection [243]. Other techniques, such as motion-energy harvesting [244,245,246] and thermoelectric generation [247], can tap into endogenous sources like heartbeat, respiration, or muscle contraction, thereby reducing dependence on external power.
Thermal management must be advanced in parallel to control temperature rise during device operation. One approach is to integrate phase-change materials into the implant as thermal buffers. When the device heats up, these materials absorb latent heat and smooth temperature peaks [248,249]. Another option is to design microfluidic cooling paths that conduct heat away from hot spots toward larger surfaces where it can dissipate more safely [250,251]. Experimental data suggest that as long as power density is kept below about 80 mW/cm² [200], irreversible tissue damage can be avoided, making precise thermal control essential.
Finally, the power-supply and heat-management components of the implant must still meet strict requirements for miniaturization, mechanical flexibility, and biocompatibility. This includes encapsulating power components in soft, biocompatible materials so that the system can bend and deform with body movements without injuring tissue. With these advances, future fusion of lifeform systems may employ intelligent power modules that adjust output dynamically to the internal environment, provide sufficient energy, and actively suppress overheating, thereby greatly improving both endurance and safety.
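Such a thermally aware power module could, in the simplest case, scale its budget with measured tissue temperature, as in the illustrative sketch below (the thresholds are placeholders, not clinical limits, which depend on tissue type and exposure duration).

```python
def power_budget(temp_c: float, base_mw: float = 20.0,
                 t_warn: float = 38.5, t_max: float = 39.5) -> float:
    """Hypothetical thermal governor: ramp the implant's power budget
    linearly down to zero as local tissue temperature approaches a cap."""
    if temp_c >= t_max:
        return 0.0                        # hard shutdown
    if temp_c <= t_warn:
        return base_mw                    # full budget available
    return base_mw * (t_max - temp_c) / (t_max - t_warn)

for t in (37.0, 38.8, 39.6):
    print(f"{t:.1f} C -> {power_budget(t):.1f} mW budget")
```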

4.2.3. Innovative Biointerfaces and Long-Lived Encapsulation Materials

To address the fundamental tension between biocompatibility and long-term reliability (Challenge 4), future work must advance both biointerface materials and encapsulation technologies. From a material science and bioengineering perspective, new materials and structural designs can improve compatibility between artificial devices and living tissue at the source. This, in turn, can significantly extend the safe service life of implants.
One key direction is the development of flexible electronic materials with mechanical properties that match those of human tissues. Soft substrates and stretchable conductors allow implanted devices to deform together with surrounding tissue [252,253,254,255,256]. In particular, advanced fabrication techniques such as electrospinning enable the scalable production of highly conformable, breathable nanofibrous membranes that further enhance mechanical compliance and long-term biocompatibility in both wearable and implantable systems [256]. This reduces cutting and friction damage caused by rigid materials. Mesh-like flexible electrodes have already shown great potential in brain implants by markedly attenuating immune responses and maintaining stable signals over long periods [216,257]. By lowering the effective Young’s modulus and optimizing the geometric design, flexible electronics can minimize chronic inflammation and scar formation induced by mechanical mismatch.
A second direction is the use of bioresorbable materials, which offers a fundamentally different design philosophy. Transient electronic devices are built from polymers such as polyanhydrides, polylactic acid (PLA), poly(lactic-co-glycolic acid) (PLGA), and silk fibroin, together with dissolvable metals and semiconductors [219,258,259,260]. After a predefined service period, the device gradually dissolves in body fluids and is cleared by normal metabolic pathways, eliminating the need for surgical removal. At the device level, resorbable silicon electronics have been used in animal brains for multimodal physiological monitoring and then disappeared after the task [261,262]. Fully bioresorbable wireless temporary cardiac pacemakers have demonstrated a truly leadless, battery-free, and fully cleared implant pathway. These examples provide strong in vivo evidence for “use-and-disappear” clinical strategies [263]. Overall, this approach combines chemical composition, geometry, and encapsulation design to program device lifetime. The system retires itself before performance degrades severely and is safely processed by the body. This avoids risks associated with long-term implantation, such as material aging, barrier failure, and chronic inflammation, and supports regulatory evaluation through well-defined lifetime–degradation curves and traceable safety profiles of degradation products.
Drug-eluting surface coatings offer another effective means to improve interface compatibility. Anti-inflammatory agents or neurotrophic factors can be loaded onto electrode surfaces and released gradually during the early post-implantation period. This helps suppress acute inflammation and glial scarring, protects neurons near the electrode, and reduces signal loss and tissue damage [218]. In parallel, new encapsulation technologies are being developed, including multilayer flexible barriers and superhydrophobic or anti-biofouling coatings [220]. These designs aim to increase resistance to body-fluid ingress and mechanical fatigue. By lowering the risk of cracking, delamination, and leakage, such encapsulation strategies help ensure that internal electronic components continue to operate reliably in complex in vivo environments.
Through a combined strategy that couples advances in materials, structural design, and pharmacology, future implantable electronics may achieve both high performance and markedly improved biocompatibility and durability. Flexible electronics can reduce tissue stress; drug coatings can mitigate immune responses; degradable materials can eliminate long-term foreign-body residues and revision surgery; and robust encapsulation can protect the remaining electronics over extended periods. Together, these innovations will extend the lifetime of fusion systems, reduce complications and maintenance needs, and make long-term human–machine integration a realistic goal.

4.2.4. Frameworks for System-Level Safety and Reliability

Beyond improving the reliability of individual devices, the safety of the entire fusion system is even more critical. This calls for a system-level safety and reliability framework that provides dual protection analogous to an “immune system” and a “nervous system” for fusion of lifeforms. The “immune” layer would detect and contain local faults through self-diagnosis and fault-tolerant mechanisms, preventing cascading failures. The “neural” layer would consist of secure communication protocols and global control strategies that coordinate all modules and ensure that information exchange remains reliable, efficient, and controlled.
In practice, robust fail–safe mechanisms are needed. Critical sensors and actuators should be deployed with redundancy so that backup units can take over seamlessly if one fails. Real-time anomaly detection algorithms should monitor deviations from normal operating ranges and trigger early warnings. Layered safety shutdown strategies are also required. When risk indicators are detected, the system should automatically downgrade functionality or shut down specific modules to prevent harm to the user. In parallel, cybersecurity defenses must be significantly strengthened. Encrypted communication, authentication, and intrusion detection techniques are essential to prevent implanted devices from being controlled by unauthorized external commands. Even in open, networked environments, such measures help maintain the confidentiality and integrity of internal signals and protect patients against malicious attacks.
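As one concrete instance of sensor redundancy, a median voter can fuse replicated readings and flag a drifting unit so that a backup takes over without interrupting the loop; the tolerance and values below are illustrative.

```python
import statistics

def vote(readings: list[float], tol: float = 5.0) -> tuple[float, list[int]]:
    """Triple-modular-redundancy style voter: return the median of
    redundant sensors plus the indices of units deviating from it by
    more than `tol`, which are flagged as faulty."""
    m = statistics.median(readings)
    faulty = [i for i, r in enumerate(readings) if abs(r - m) > tol]
    return m, faulty

value, faulty = vote([101.2, 99.8, 152.4])   # unit 2 is drifting
print(f"fused value {value:.1f}; faulty sensor indices: {faulty}")
```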
Digital twin technology can serve as a powerful tool to enhance reliability and safety [264,265]. By creating a virtual replica of the in-body fusion system and its interaction with the physiological environment, engineers can simulate potential failure modes, interface issues, and extreme conditions before real-world deployment. Digital twins can also be used to repeatedly test control algorithms and fault-handling strategies under a wide range of abnormal scenarios, thereby optimizing fault tolerance and emergency responses and reducing real-world risk. During actual operation, the digital twin can function as a real-time monitoring and decision support layer that flags emerging anomalies and suggests corrective actions, improving autonomous safety management. At the ecosystem level, standardized interfaces and evaluation frameworks are urgently needed.
Common hardware interface specifications and communication protocols should be established to address cross-modal semantic and interface incompatibilities highlighted in Challenge 1, and to ensure that modules from different sources are plug-and-play and interoperable. A comprehensive metric set is also required. It should cover multimodal coordination performance, biocompatibility, fault recovery capacity, and other system-level properties so that solutions from different groups can be compared on a common basis. Regulatory bodies and the academic community will need to co-develop certification standards and ethical guidelines tailored to fusion of lifeform systems, providing clear safety boundaries and regulatory frameworks for clinical translation. Only when architecture design, fault protection, simulation-based validation, and industry-wide standards advance together can the safety and reliability of fusion of lifeforms be robustly guaranteed, enabling complex human–machine integration to evolve into a trustworthy, mature technology.
While overcoming the engineering barriers outlined in Section 4 is essential for the physical realization of fusion of lifeforms, the very act of solving these challenges—such as enabling high-bandwidth neural links and increasingly autonomous closed-loop systems—also amplifies their ethical and societal ramifications. System-level complexities in interoperability, security, and safety do not exist in a vacuum; they directly translate into risks of neural privacy breaches, erosion of human agency, and ambiguity in legal responsibility. Consequently, efforts to address technological bottlenecks in integration and control must be accompanied by a proactive and rigorous framework for ethical oversight and governance, as explored in the following section.

5. Safety and Ethical Considerations

Fusion-life technologies give rise to a series of cutting-edge ethical challenges. The first concerns blurred self-identity and shifting boundaries of personhood. Brain–computer interfaces and related technologies extend the scope of human identity, making cyborg-like human–machine hybrids possible; yet, while they may benefit humanity, they inevitably introduce high levels of risk and thus draw increasing ethical scrutiny. When thoughts and perceptions become deeply integrated with machine systems, individuals may experience philosophical uncertainty about “who I am”: if artificial components reshape cognition and personality, is the original self still intact? Case reports suggest that long-term implanted devices can indeed affect self-perception; some patients, for example, have wept when forced to have their implanted BCI removed, stating that “I lost myself” [266]. Some scholars argue that the development of enhanced BCIs (eBCIs) has moved beyond a simple cost–benefit calculus and now compels us to reconsider the nature of conscious selfhood and to ask who we are—and who we ought to become—in a fused human–machine condition [267]. At the same time, autonomous decision-making and free will face potential erosion. If neural devices can independently modulate emotion or steer decisions, to what extent does the user remain fully autonomous? Ethicists warn that “If you have a device that constantly steps up in your thinking or decision-making,” as Gilbert notes, “it might compromise you as an agent” [266]. In practice, people tend to defer to technologically superior recommendations: “You have the ultimate decision,” Gilbert observes, “but as soon as you realize the device is more effective in the specific context, you won’t even listen to your own judgement. You’ll rely on the device.” This phenomenon suggests that, within a cyborg mind co-governed by human and machine, the boundary between personal autonomy and system intelligence is becoming increasingly blurred.
Second, neuroprivacy and mental security present major ethical concerns. BCIs can directly read and influence brain signals, placing an individual’s inner mental world at unprecedented risk of exposure. The information within the brain is arguably the most intimate and personal form of data. If neural signals are improperly accessed, personal thoughts could be “read” or even manipulated. These risks of data misuse and “mind-reading” have spurred calls for neuroprivacy to be legally protected, akin to bodily privacy, with theoretical proposals advocating for “neurorights”—including freedom of thought, privacy, mental integrity, and personal identity—to safeguard mental sovereignty [268]. Meanwhile, issues of fairness and ethical inequality also arise. The high cost of cognitive enhancement devices may restrict access to a privileged few, potentially worsening social inequity. Concerns exist that if only the wealthy can obtain enhanced BCIs, the conferred cognitive advantages could widen pre-existing socioeconomic disparities. This may lead to a societal split between “enhanced” and “natural” individuals, creating new unfair competition in areas like education and employment. Such a cognitive divide challenges social equity and risks provoking new forms of discrimination and social tension.
Finally, fusion-life systems pose unprecedented challenges for legal responsibility and ethical governance. When human–machine hybrid systems participate in behavioral decisions or even act autonomously, assigning responsibility becomes complex and ambiguous. Consider an accident involving an exoskeleton with AI-based co-decision capabilities: should liability fall on the human operator, on the machine intelligence, or be shared between them? The existence of such hybrid agents can create a “responsibility vacuum,” as part of the decision-making originates in the human user and part in machine algorithms. Current legal frameworks typically treat machines as tools and humans as the sole bearers of responsibility, and have not yet formally recognized human–machine hybrids as a distinct, intermediate category. Ethical oversight and legal structures therefore need urgent updating. Regulators and researchers must develop clear guidelines and responsibility allocation mechanisms tailored to this new composite form of life, ensuring that society is adequately prepared to address the ethical and legal challenges before these technologies are deployed at scale.

6. Conclusions

This review synthesizes multi-level perception systems within the concept of fusion of lifeforms, defined as long-term symbiotic integration of biological and artificial components across structural, energetic, informational, and cognitive axes. We propose a four-class functional taxonomy—sensory restoration, beyond-natural sensing, endogenous state monitoring and regulation, and cognitive enhancement—and survey representative technologies spanning neuroprostheses, implantable sensors, and brain–computer interfaces, highlighting a shift from external aids toward embedded, co-adaptive hybrid systems. Key bottlenecks include multimodal fusion, constrained in-body bandwidth and power, long-term biocompatibility, and the absence of unified interface standards and outcome metrics. Progress will require advances in materials, energy harvesting, in-body communication, and AI-driven decoding and closed-loop control, alongside fail–safe design principles. Finally, the convergence of human and machine intelligence demands parallel attention to identity, neural privacy, equity, and accountability.

Author Contributions

Conceptualization, B.Z. and S.X.; methodology, B.Z. and S.X.; investigation (literature survey), B.Z., X.Y. and Y.L.; writing—original draft preparation, B.Z. and X.Y.; writing—review and editing, B.Z., X.Y., Y.L., J.X. and S.X.; visualization, B.Z.; supervision, J.X. and S.X.; project administration, J.X. and S.X.; funding acquisition, J.X. and S.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the National Key R&D Program of China (Grant No. 2024YFC3406302), the National Natural Science Foundation of China (Grant No. 12204273), the Natural Science Foundation of Shandong Province, China (Grant No. ZR2024MF107), and the National Key R&D Program of China (Grant No. 2017YFA0701302).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors extend their sincere gratitude to Jiarui Yang (Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital), Hong Jiang, and Yi Ming (Neuroscience Research Institute, Peking University Health Science Center) for their invaluable conceptual insights into the fusion of lifeform framework and constructive comments on this manuscript. During the preparation of this manuscript, the authors made limited use of ChatGPT (OpenAI, GPT-5.1 Thinking) to assist with language polishing and grammar refinement, and used the image generation mode of Gemini 3 Pro as an auxiliary design tool for creating most of the graphical elements in Figure 1, Figure 2A and Figure 11. All scientific concepts, figure layouts, and final revisions were determined by the authors, who take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AGI: Artificial General Intelligence
ALS: Amyotrophic Lateral Sclerosis
B2BI: Brain-to-Brain Interfaces
BBIs: Brain-to-Brain Interfaces
BCI: Brain–Computer Interfaces
CBIs: Computer–Brain Interfaces
CGM: Continuous Glucose Monitoring
DBS: Deep Brain Stimulation
ECG: Electrocardiogram
EEG: Electroencephalography
eBCIs: Enhanced Brain–Computer Interfaces
EMG: Electromyography
e-nose: Electronic Nose
fNIRS: Functional Near-Infrared Spectroscopy
FUS: Focused Ultrasound
ICMS: Intracortical Microstimulation
iBCIs: Implantable Brain–Computer Interfaces
iEEG: Intracranial Electroencephalography
IMU: Inertial Measurement Units
IoB: Internet of Bodies
MIMO: Multi-Input–Multi-Output
NSTENG: Nano-Structured Triboelectric Nanogenerator
PLA: Polylactic Acid
PLGA: Poly(lactic-co-glycolic acid)
SEPS: Subendocardial Pressure Sensors
SSVEP: Steady-State Visual Evoked Potentials
TENGs: Triboelectric Nanogenerators
TENS: Transcutaneous Electrical Nerve Stimulation
TMS: Transcranial Magnetic Stimulation
WIBSNs: Wearable–Implantable Body Sensor Networks

References

1. Huang, H.H.; Hargrove, L.J.; Ortiz-Catalan, M.; Sensinger, J.W. Integrating Upper-Limb Prostheses with the Human Body: Technology Advances, Readiness, and Roles in Human–Prosthesis Interaction. Annu. Rev. Biomed. Eng. 2024, 26, 503–528.
2. Alam, F.; Ashfaq Ahmed, M.; Jalal, A.H.; Siddiquee, I.; Adury, R.Z.; Hossain, G.M.M.; Pala, N. Recent Progress and Challenges of Implantable Biodegradable Biosensors. Micromachines 2024, 15, 475.
3. Rapeaux, A.B.; Constandinou, T.G. Implantable brain machine interfaces: First-in-human studies, technology challenges and trends. Curr. Opin. Biotechnol. 2021, 72, 102–111.
4. Semertzidis, N.; Zambetta, F.; Mueller, F.F. Brain-Computer Integration: A Framework for the Design of Brain-Computer Interfaces from an Integrations Perspective. ACM Trans. Comput.-Hum. Interact. 2023, 30, 1–48.
5. Schalk, G. Brain–computer symbiosis. J. Neural Eng. 2008, 5, P1.
6. Gupta, A.; Vardalakis, N.; Wagner, F.B. Neuroprosthetics: From sensorimotor to cognitive disorders. Commun. Biol. 2023, 6, 14.
7. Schumann, F.; O’Regan, J.K. Sensory augmentation: Integration of an auditory compass signal into human perception of space. Sci. Rep. 2017, 7, 42197.
8. Liu, H.; Wang, J.; Zhai, L.; Fang, Y.; Huang, J. Neuralite: Enabling wireless high-resolution brain-computer interfaces. In Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, ACM MobiCom ’24, Washington, DC, USA, 18–22 November 2024; pp. 984–999.
9. Lin, Y.N.; Ge, S.; Yang, N.N.; Xu, J.J.; Han, H.B.; Xu, S.Y. Artificial vision-aid systems: Current status and future trend. Prog. Biochem. Biophys. 2021, 48, 1316–1336.
10. Sheikh, Z.; Brooks, P.J.; Barzilay, O.; Fine, N.; Glogauer, M. Macrophages, foreign body giant cells and their response to implantable biomaterials. Materials 2015, 8, 5671–5701.
11. Major, M.R.; Wong, V.W.; Nelson, E.R.; Longaker, M.T.; Gurtner, G.C. The foreign body response: At the interface of surgery and bioengineering. Plast. Reconstr. Surg. 2015, 135, 1489–1498.
12. Khan, I.M.; Khan, S.; Khalifa, O.O. Wireless transfer of power to low power implanted biomedical devices: Coil design considerations. In Proceedings of the 2012 IEEE International Instrumentation and Measurement Technology Conference Proceedings, Graz, Austria, 13–16 May 2012; pp. 1–5.
13. Sinclair, M.; Biswas, D.; Le, T.; Hyde, J.; Mahbub, I.; Chang, L.; Hao, Y. Design of a flexible receiver module for implantable wireless power transfer (WPT) applications. In Proceedings of the 2019 United States National Committee of URSI National Radio Science Meeting (USNC-URSI NRSM), Boulder, CO, USA, 9–12 January 2019; pp. 1–2.
14. Nirenberg, S.; Pandarinath, C. Retinal prosthetic strategy with the capacity to restore normal vision. Proc. Natl. Acad. Sci. USA 2012, 109, 15012–15017.
15. Tabot, G.A.; Dammann, J.F.; Berg, J.A.; Tenore, F.V.; Boback, J.L.; Vogelstein, R.J.; Bensmaia, S.J. Restoring the sense of touch with a prosthetic hand through a brain interface. Proc. Natl. Acad. Sci. USA 2013, 110, 18279–18284.
16. Kral, A.; Sharma, A. Developmental neuroplasticity after cochlear implantation. Trends Neurosci. 2012, 35, 111–122.
17. Caravaca-Rodriguez, D.; Gaytan, S.P.; Suaning, G.J.; Barriga-Rivera, A. Implications of Neural Plasticity in Retinal Prosthesis. Investig. Ophthalmol. Vis. Sci. 2022, 63, 11.
18. Orsborn, A.L.; Moorman, H.G.; Overduin, S.A.; Shanechi, M.M.; Dimitrov, D.F.; Carmena, J.M. Closed-Loop Decoder Adaptation Shapes Neural Plasticity for Skillful Neuroprosthetic Control. Neuron 2014, 82, 1380–1393.
19. Skinner, M.W. Optimizing Cochlear Implant Speech Performance. Ann. Otol. Rhinol. 2003, 112, 4–13.
20. Uluşan, H.; Yüksel, M.B.; Topçu, Ö.; Yiğit, H.A.; Yılmaz, A.M.; Doğan, M.; Gülhan Yasar, N.; Kuyumcu, İ.; Batu, A.; Göksu, N.; et al. A full-custom fully implantable cochlear implant system validated in vivo with an animal model. Commun. Eng. 2024, 3, 132.
21. Borjigin, A.; Kokkinakis, K.; Bharadwaj, H.M.; Stohl, J.S. Deep learning restores speech intelligibility in multi-talker interference for cochlear implant users. Sci. Rep. 2024, 14, 13241.
22. da Cruz, L.; Dorn, J.D.; Humayun, M.S.; Dagnelie, G.; Handa, J.; Barale, P.O.; Sahel, J.A.; Stanga, P.E.; Hafezi, F.; Safran, A.B.; et al. Five-Year Safety and Performance Results from the Argus II Retinal Prosthesis System Clinical Trial. Ophthalmology 2016, 123, 2248–2254.
23. Lorach, H.; Goetz, G.; Smith, R.; Lei, X.; Mandel, Y.; Kamins, T.; Mathieson, K.; Huie, P.; Harris, J.; Sher, A.; et al. Photovoltaic restoration of sight with high visual acuity. Nat. Med. 2015, 21, 476–482.
24. Willett, F.R.; Kunz, E.M.; Fan, C.; Avansino, D.T.; Wilson, G.H.; Choi, E.Y.; Kamdar, F.; Glasser, M.F.; Hochberg, L.R.; Druckmann, S.; et al. A high-performance speech neuroprosthesis. Nature 2023, 620, 1031–1036.
25. Osborn, L.E.; Dragomir, A.; Betthauser, J.L.; Hunt, C.L.; Nguyen, H.H.; Kaliki, R.R.; Thakor, N.V. Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain. Sci. Robot. 2018, 3, eaat3818.
26. George, J.A.; Kluger, D.T.; Davis, T.S.; Wendelken, S.M.; Okorokova, E.V.; He, Q.; Duncan, C.C.; Hutchinson, D.T.; Thumser, Z.C.; Beckler, D.T.; et al. Biomimetic sensory feedback through peripheral nerve stimulation improves dexterous use of a bionic hand. Sci. Robot. 2019, 4, eaax2352.
27. Holbrook, E.H.; Puram, S.V.; See, R.B.; Tripp, A.G.; Nair, D.G. Induction of smell through transethmoid electrical stimulation of the olfactory bulb. Int. Forum Allergy Rhinol. 2019, 9, 158–164.
28. Liu, Z.; Ma, Y.; Ouyang, H.; Shi, B.; Li, N.; Jiang, D.; Xie, F.; Qu, D.; Zou, Y.; Huang, Y.; et al. Transcatheter Self-Powered Ultrasensitive Endocardial Pressure Sensor. Adv. Funct. Mater. 2019, 29, 1807560.
29. Zhao, D.; Zhuo, J.; Chen, Z.; Wu, J.; Ma, R.; Zhang, X.; Zhang, Y.; Wang, X.; Wei, X.; Liu, L.; et al. Eco-friendly in-situ gap generation of no-spacer triboelectric nanogenerator for monitoring cardiovascular activities. Nano Energy 2021, 90, 106580.
30. Zhao, L.; Gao, Z.; Liu, W.; Wang, C.; Luo, D.; Chao, S.; Li, S.; Li, Z.; Wang, C.; Zhou, J. Promoting maturation and contractile function of neonatal rat cardiomyocytes by self-powered implantable triboelectric nanogenerator. Nano Energy 2022, 103, 107798.
31. Yao, G.; Kang, L.; Li, J.; Long, Y.; Wei, H.; Ferreira, C.A.; Jeffery, J.J.; Lin, Y.; Cai, W.; Wang, X. Effective weight control via an implanted self-powered vagus nerve stimulation device. Nat. Commun. 2018, 9, 5349.
32. Arab Hassani, F.; Mogan, R.P.; Gammad, G.G.L.; Wang, H.; Yen, S.C.; Thakor, N.V.; Lee, C. Toward Self-Control Systems for Neurogenic Underactive Bladder: A Triboelectric Nanogenerator Sensor Integrated with a Bistable Micro-Actuator. ACS Nano 2018, 12, 3487–3501.
33. Tian, J.; Shi, R.; Liu, Z.; Ouyang, H.; Yu, M.; Zhao, C.; Zou, Y.; Jiang, D.; Zhang, J.; Li, Z. Self-powered implantable electrical stimulator for osteoblasts’ proliferation and differentiation. Nano Energy 2019, 59, 705–714.
34. Zhang, G.; Chen, R.; Ghorbani, H.; Li, W.; Minasyan, A.; Huang, Y.; Lin, S.; Shao, M. Artificial intelligence-enabled innovations in cochlear implant technology: Advancing auditory prosthetics for hearing restoration. Bioeng. Transl. Med. 2025, 10, e10752.
35. Menia, N.K.; Venkatesh, P. Retinal prosthesis: A comprehensive review. Expert Rev. Ophthalmol. 2025, 20, 89–106.
36. Gabel, V.P. (Ed.) Artificial Vision: A Clinical Guide, 1st ed.; Springer International Publishing: Cham, Switzerland, 2017; p. xvii+232.
37. Lu, Q.; Lei, T.; Xu, J.; Qin, J. Principles, applications, and challenges of E-skin: A mini-review. Chem. Eng. J. 2025, 521, 166936.
38. Zhang, Y.; Wang, Q.; Wu, F.; Yang, Q.; Tang, X.; Shang, S.; Hu, S.; Zhou, G.; Zhuang, L. Bionic sensing and BCI technologies for olfactory improvement and reconstruction. Chemosensors 2025, 13, 381.
39. Dhar, I.; Choudhury, B.B.; Sahoo, B.; Sahoo, S.K. A comprehensive review on advances in sensor technologies for prosthetic palms. Spectr. Eng. Manag. Sci. 2025, 3, 253–261.
40. Haghani Dogahe, M.; Mahan, M.A.; Zhang, M.; Bashiri Aliabadi, S.; Rouhafza, A.; Karimzadhagh, S.; Feizkhah, A.; Monsef, A.; Habibi Roudkenar, M. Advancing prosthetic hand capabilities through biomimicry and neural interfaces. Neurorehabilit. Neural Repair 2025, 39, 481–494.
41. Carlyon, R.P.; Goehring, T. Cochlear implant research and development in the twenty-first century: A critical update. J. Assoc. Res. Otolaryngol. 2021, 22, 481–508.
42. Roche, J.P.; Hansen, M.R. On the horizon: Cochlear implant technology. Otolaryngol. Clin. N. Am. 2015, 48, 1097–1116.
43. Zeng, F.G. Celebrating the one millionth cochlear implant. JASA Express Lett. 2022, 2, 077201.
44. Chadha, S.; Kamenov, K.; Cieza, A. The world report on hearing, 2021. Bull. World Health Organ. 2021, 99, 242–242A.
45. Arevalo, J.F.; Al Rashaed, S.; Alhamad, T.A.; Al Kahtani, E.; Al-Dhibi, H.A.; Mura, M.; Nowilaty, S.; Al-Zahrani, Y.A.; Kozak, I.; Al-Sulaiman, S.; et al. Argus II retinal prosthesis for retinitis pigmentosa in the Middle East: The 2015 Pan-American Association of Ophthalmology Gradle Lecture. Int. J. Retin. Vitr. 2021, 7, 65.
46. Wohlbauer, D.M.; Dillier, N. A hundred ways to encode sound signals for cochlear implants. Annu. Rev. Biomed. Eng. 2025, 27, 335–369.
47. González-García, M.; Prieto-Sánchez-de Puerta, L.; Domínguez-Durán, E.; Sánchez-Gómez, S. Auditory prognosis of patients with sudden sensorineural hearing loss in relation to the presence of acute vestibular syndrome: A systematic literature review and meta-analysis. Ear Hear. 2025, 46, 8–15.
48. Hu, Y.; Loizou, P.C. A new sound coding strategy for suppressing noise in cochlear implants. J. Acoust. Soc. Am. 2008, 124, 498–509.
49. McDermott, H.J. Music perception with cochlear implants: A review. Trends Amplif. 2004, 8, 49–82.
  50. Clark, G.M. The multi-channel cochlear implant: Multi-disciplinary development of electrical stimulation of the cochlea and the resulting clinical benefit. Hear. Res. 2015, 322, 4–13. [Google Scholar] [CrossRef]
  51. Cucis, P.A.; Berger-Vachon, C.; Hermann, R.; Millioz, F.; Truy, E.; Gallego, S. Hearing in noise: The importance of coding strategies—Normal-hearing subjects and cochlear implant users. Appl. Sci. 2019, 9, 734. [Google Scholar] [CrossRef]
  52. Buechner, A.; Dyballa, K.H.; Hehrmann, P.; Fredelake, S.; Lenarz, T. Advanced Beamformers for Cochlear Implant Users: Acute Measurement of Speech Perception in Challenging Listening Conditions. PLoS ONE 2014, 9, e95542. [Google Scholar] [CrossRef]
  53. Hey, M.; Hocke, T.; Böhnke, B.; Mauger, S.J. ForwardFocus with cochlear implant recipients in spatially separated and fluctuating competing signals—Introduction of a reference metric. Int. J. Audiol. 2019, 58, 869–878. [Google Scholar] [CrossRef]
  54. Goehring, T.; Bolner, F.; Monaghan, J.J.M.; van Dijk, B.; Zarowski, A.; Bleeck, S. Speech enhancement based on neural networks improves speech intelligibility in noise for cochlear implant users. Hear. Res. 2017, 344, 183–194. [Google Scholar] [CrossRef]
  55. Gajecki, T.; Zhang, Y.; Nogueira, W. A Deep Denoising Sound Coding Strategy for Cochlear Implants. IEEE Trans. Biomed. Eng. 2023, 70, 2700–2709. [Google Scholar] [CrossRef] [PubMed]
  56. Lai, Y.H.; Tsao, Y.; Lu, X.; Chen, F.; Su, Y.T.; Chen, K.C.; Chen, Y.H.; Chen, L.C.; Li, P.H.; Lee, C.H. Deep learning-based noise reduction approach to improve speech intelligibility for cochlear implant recipients. Ear Hear. 2018, 39, 795–809. [Google Scholar] [CrossRef] [PubMed]
  57. Lindquist, N.R.; Appelbaum, E.N.; Fullmer, T.; Sandulache, V.C.; Sweeney, A.D. A hurricane, temporal bone paraganglioma, cholesteatoma, Bezold’s abscess, and necrotizing fasciitis. Otol. Neurotol. 2020, 41, e149–e151. [Google Scholar] [CrossRef] [PubMed]
  58. Nieratschker, M.; Yildiz, E.; Schnoell, J.; Hirtler, L.; Schlingensiepen, R.; Honeder, C.; Arnoldner, C. Intratympanic substance distribution after injection of liquid and thermosensitive drug carriers: An endoscopic study. Otol. Neurotol. 2022, 43, 1264–1271. [Google Scholar] [CrossRef]
  59. Koyama, H.; Kashio, A.; Yamasoba, T. Prediction of cochlear implant fitting by machine learning techniques. Otol. Neurotol. 2024, 45, 643–650. [Google Scholar] [CrossRef]
  60. Demirtaş Yılmaz, B. Prediction of auditory performance in cochlear implants using machine learning methods: A systematic review. Audiol. Res. 2025, 15, 56. [Google Scholar] [CrossRef]
  61. Shafieibavani, E.; Goudey, B.; Kiral, I.; Zhong, P.; Jimeno-Yepes, A.; Swan, A.; Gambhir, M.; Buechner, A.; Kludt, E.; Eikelboom, R.H.; et al. Predictive models for cochlear implant outcomes: Performance, generalizability, and the impact of cohort size. Trends Hear. 2021, 25, 23312165211066174. [Google Scholar] [CrossRef]
  62. Kashani, R.G.; Henslee, A.; Nelson, R.F.; Hansen, M.R. Robotic assistance during cochlear implantation: The rationale for consistent, controlled speed of electrode array insertion. Front. Neurol. 2024, 15, 1335994. [Google Scholar] [CrossRef]
  63. Ahmed, O.; Wang, M.; Zhang, B.; Irving, R.; Begg, P.; Du, X. Robotic systems for cochlear implant surgeries: A review of robotic design and clinical outcomes. Electronics 2025, 14, 2685. [Google Scholar] [CrossRef]
  64. Khan, U.A.; Dunn, C.C.; Scheperle, R.A.; Oleson, J.; Claussen, A.D.; Gantz, B.J.; Hansen, M.R. Robotic-assisted electrode array insertion improves rates of hearing preservation. Laryngoscope 2025, 135, 4364–4371. [Google Scholar] [CrossRef]
  65. Luo, Y.H.L.; da Cruz, L. The Argus® II retinal prosthesis system. Prog. Retin. Eye Res. 2016, 50, 89–107. [Google Scholar] [CrossRef]
  66. Stingl, K.; Bartz-Schmidt, K.U.; Besch, D.; Braun, A.; Bruckmann, A.; Gekeler, F.; Greppmaier, U.; Hipp, S.; Hörtdörfer, G.; Kernstock, C.; et al. Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS. Proc. R. Soc. B Biol. Sci. 2013, 280, 20130077. [Google Scholar] [CrossRef] [PubMed]
  67. Chai, X.; Li, L.; Wu, K.; Zhou, C.; Cao, P.; Ren, Q. C-Sight visual prostheses for the blind. IEEE Eng. Med. Biol. Mag. 2008, 27, 20–28. [Google Scholar] [CrossRef] [PubMed]
68. Pouratian, N.; Yoshor, D.; Niketeghad, S.; Dorn, J.; Greenberg, R. Early feasibility study of a neurostimulator to create artificial vision. Neurosurgery 2019, 66 (Suppl. 1), nyz310_146. [Google Scholar] [CrossRef]
  69. Lu, G.; Gong, C.; Sun, Y.; Qian, X.; Rajendran Nair, D.S.; Li, R.; Zeng, Y.; Ji, J.; Zhang, J.; Kang, H.; et al. Noninvasive imaging-guided ultrasonic neurostimulation with arbitrary 2D patterns and its application for high-quality vision restoration. Nat. Commun. 2024, 15, 4481. [Google Scholar]
  70. Vieira, I.V.; Fan, V.H.; Wiemer, M.W.; Lemoff, B.E.; Sood, K.S.; Mussa, M.J.; Yu, C.Q. In vivo stability of electronic intraocular lens implant for corneal blindness. Transl. Vis. Sci. Technol. 2025, 14, 33. [Google Scholar] [CrossRef]
  71. Shim, S.Y.; Gong, S.; Rosenblatt, M.I.; Palanker, D.; Al-Qahtani, A.; Sun, M.G.; Zhou, Q.; Kanu, L.; Chau, F.; Yu, C.Q. Feasibility of intraocular projection for treatment of intractable corneal opacity. Cornea 2019, 38, 523–527. [Google Scholar] [CrossRef]
  72. Zhang, B.; Zhang, R.; Zhao, J.; Yang, J.; Xu, S. The mechanism of human color vision and potential implanted devices for artificial color vision. Front. Neurosci. 2024, 18, 1408087. [Google Scholar] [CrossRef]
  73. Shoval, S.; Borenstein, J.; Koren, Y. The NavBelt-a computerized travel aid for the blind based on mobile robotics technology. IEEE Trans. Biomed. Eng. 1998, 45, 1376–1386. [Google Scholar]
  74. Dakopoulos, D.; Bourbakis, N.G. Wearable obstacle avoidance electronic travel aids for blind: A survey. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2010, 40, 25–35. [Google Scholar] [CrossRef]
  75. Xu, J.; Wang, C.; Li, Y.; Huang, X.; Zhao, M.; Shen, Z.; Liu, Y.; Wan, Y.; Sun, F.; Zhang, J.; et al. Multimodal navigation and virtual companion system: A wearable device assisting blind people in independent travel. Sensors 2025, 25, 4223. [Google Scholar] [CrossRef]
  76. Ge, S.; Lin, Y.N.; Lai, S.N.; Xu, J.J.; He, Y.L.; Zhao, Q.; Zhang, H.; Xu, S.Y. A virtual vision navigation system for the blind using wearable touch-vision devices. Prog. Biochem. Biophys. 2022, 49, 1543–1554. [Google Scholar]
  77. Battaglia, E.; Clark, J.P.; Bianchi, M.; Catalano, M.G.; Bicchi, A.; O’Malley, M.K. Skin stretch haptic feedback to convey closure information in anthropomorphic, under-actuated upper limb soft prostheses. IEEE Trans. Haptics 2019, 12, 508–520. [Google Scholar] [CrossRef] [PubMed]
  78. Miyahara, Y.; Kato, R. Development of thin vibration sheets using a shape memory alloy actuator for the tactile feedback of myoelectric prosthetic hands. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Mexico, 1–5 November 2021; pp. 6255–6258. [Google Scholar]
  79. Borkowska, V.R.; McConnell, A.; Vijayakumar, S.; Stokes, A.; Roche, A.D. A haptic sleeve as a method of mechanotactile feedback restoration for myoelectric hand prosthesis users. Front. Rehabil. Sci. 2022, 3, 806479. [Google Scholar] [CrossRef] [PubMed]
  80. Huang, H.; Li, T.; Bruschini, C.; Enz, C.; Justiz, J.; Antfolk, C.; Koch, V.M. Multi-modal sensory feedback system for upper limb amputees. In Proceedings of the 2017 New Generation of CAS (NGCAS), Genova, Italy, 6–9 September 2017; pp. 193–196. [Google Scholar]
  81. Antfolk, C.; D’Alonzo, M.; Controzzi, M.; Lundborg, G.; Rosen, B.; Sebelius, F.; Cipriani, C. Artificial redirection of sensation from prosthetic fingers to the phantom hand map on transradial amputees: Vibrotactile versus mechanotactile sensory feedback. IEEE Trans. Neural Syst. Rehabil. Eng. 2013, 21, 112–120. [Google Scholar] [CrossRef]
  82. Naples, G.; Mortimer, J.; Scheiner, A.; Sweeney, J. A spiral nerve cuff electrode for peripheral nerve stimulation. IEEE Trans. Biomed. Eng. 1988, 35, 905–916. [Google Scholar] [CrossRef]
  83. Grill, W.M.; Norman, S.E.; Bellamkonda, R.V. Implanted neural interfaces: Biochallenges and engineered solutions. Annu. Rev. Biomed. Eng. 2009, 11, 1–24. [Google Scholar] [CrossRef]
  84. Cuttaz, E.; Goding, J.; Vallejo-Giraldo, C.; Aregueta-Robles, U.; Lovell, N.; Ghezzi, D.; Green, R.A. Conductive elastomer composites for fully polymeric, flexible bioelectronics. Biomater. Sci. 2019, 7, 1372–1385. [Google Scholar] [CrossRef]
  85. Li, H.; Gao, G.; Xu, Z.; Tang, D.; Chen, T. Recent progress in bionic skin based on conductive polymer gels. Macromol. Rapid Commun. 2021, 42, 2100480. [Google Scholar] [CrossRef]
  86. Xu, C.; Solomon, S.A.; Gao, W. Artificial intelligence-powered electronic skin. Nat. Mach. Intell. 2023, 5, 1344–1355. [Google Scholar] [CrossRef]
  87. Roche, A.D.; Bailey, Z.K.; Gonzalez, M.; Vu, P.P.; Chestek, C.A.; Gates, D.H.; Kemp, S.W.P.; Cederna, P.S.; Ortiz-Catalan, M.; Aszmann, O.C. Upper limb prostheses: Bridging the sensory gap. J. Hand Surg. Eur. 2023, 48, 182–190. [Google Scholar] [CrossRef]
  88. Zhai, Z.; Liu, Y.; Li, C.; Wang, D.; Wu, H. Electronic noses: From gas-sensitive components and practical applications to data processing. Sensors 2024, 24, 4806. [Google Scholar] [CrossRef] [PubMed]
  89. Strickland, E. A Bionic Nose to Smell the Roses Again: Covid Survivors Drive Demand for a Neuroprosthetic Nose. IEEE Spectr. 2022, 59, 22–27. [Google Scholar] [CrossRef]
  90. Card, N.S.; Wairagkar, M.; Iacobacci, C.; Hou, X.; Singer-Clark, T.; Willett, F.R.; Kunz, E.M.; Fan, C.; Vahdati Nia, M.; Deo, D.R.; et al. An accurate and rapidly calibrating speech neuroprosthesis. N. Engl. J. Med. 2024, 391, 609–618. [Google Scholar] [CrossRef] [PubMed]
  91. Angrick, M.; Luo, S.; Rabbani, Q.; Candrea, D.N.; Shah, S.; Milsap, G.W.; Anderson, W.S.; Gordon, C.R.; Rosenblatt, K.R.; Clawson, L.; et al. Online speech synthesis using a chronically implanted brain–computer interface in an individual with ALS. Sci. Rep. 2024, 14, 9617. [Google Scholar] [CrossRef]
  92. Silva, A.B.; Littlejohn, K.T.; Liu, J.R.; Moses, D.A.; Chang, E.F. The speech neuroprosthesis. Nat. Rev. Neurosci. 2024, 25, 473–492. [Google Scholar] [CrossRef]
  93. Jhilal, S.; Marchesotti, S.; Thirion, B.; Soudrie, B.; Giraud, A.L.; Mandonnet, E. Implantable neural speech decoders: Recent advances, future challenges. Neurorehabilit. Neural Repair 2025. [Google Scholar] [CrossRef]
  94. Abdikenov, B.; Zholtayev, D.; Suleimenov, K.; Assan, N.; Ozhikenov, K.; Ozhikenova, A.; Nadirov, N.; Kapsalyamov, A. Emerging frontiers in robotic upper-limb prostheses: Mechanisms, materials, tactile sensors and machine learning-based EMG control: A comprehensive review. Sensors 2025, 25, 3892. [Google Scholar] [CrossRef]
  95. Gozzi, N.; Malandri, L.; Mercorio, F.; Pedrocchi, A. XAI for myo-controlled prosthesis: Explaining EMG data for hand gesture classification. Knowl.-Based Syst. 2022, 240, 108053. [Google Scholar] [CrossRef]
  96. Jarrah, Y.A.; Asogbon, M.G.; Samuel, O.W.; Wang, X.; Zhu, M.; Nsugbe, E.; Chen, S.; Li, G. High-density surface EMG signal quality enhancement via optimized filtering technique for amputees’ motion intent characterization towards intuitive prostheses control. Biomed. Signal Process. Control 2022, 74, 103497. [Google Scholar] [CrossRef]
  97. Tam, S.; Boukadoum, M.; Campeau-Lecours, A.; Gosselin, B. Intuitive real-time control strategy for high-density myoelectric hand prosthesis using deep and transfer learning. Sci. Rep. 2021, 11, 11275. [Google Scholar] [CrossRef]
  98. Park, J.; Kim, M.; Lee, Y.; Lee, H.S.; Ko, H. Fingertip skin–inspired microstructured ferroelectric skins discriminate static/dynamic pressure and temperature stimuli. Sci. Adv. 2015, 1, e1500661. [Google Scholar] [CrossRef]
  99. Stefanelli, E.; Sperduti, M.; Cordella, F.; Luigi Tagliamonte, N.; Zollo, L. Performance assessment of thermal sensors for hand prostheses. IEEE Sens. J. 2024, 24, 27559–27569. [Google Scholar] [CrossRef]
  100. Lee, J.H.; Heo, J.S.; Kim, Y.J.; Eom, J.; Jung, H.J.; Kim, J.W.; Kim, I.; Park, H.H.; Mo, H.S.; Kim, Y.H.; et al. A behavior-learned cross-reactive sensor matrix for intelligent skin perception. Adv. Mater. 2020, 32, 2000969. [Google Scholar] [CrossRef] [PubMed]
  101. Yildiz, K.A.; Shin, A.Y.; Kaufman, K.R. Interfaces with the peripheral nervous system for the control of a neuroprosthetic limb: A review. J. NeuroEng. Rehabil. 2020, 17, 43. [Google Scholar] [CrossRef] [PubMed]
  102. Čvančara, P.; Valle, G.; Müller, M.; Bartels, I.; Guiho, T.; Hiairrassary, A.; Petrini, F.; Raspopovic, S.; Strauss, I.; Granata, G.; et al. Bringing sensation to prosthetic hands—Chronic assessment of implanted thin-film electrodes in humans. npj Flex. Electron. 2023, 7, 51. [Google Scholar] [CrossRef]
  103. Charkhkar, H.; Christie, B.P.; Triolo, R.J. Sensory neuroprosthesis improves postural stability during Sensory Organization Test in lower-limb amputees. Sci. Rep. 2020, 10, 6984. [Google Scholar] [CrossRef]
  104. Cowan, M.; Creveling, S.; Sullivan, L.M.; Gabert, L.; Lenzi, T. A unified controller for natural ambulation on stairs and level ground with a powered robotic knee prosthesis. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 2146–2151. [Google Scholar]
  105. Hunt, G.R.; Hood, S.; Lenzi, T. Stand-up, squat, lunge, and walk with a robotic knee and ankle prosthesis under shared neural control. IEEE Open J. Eng. Med. Biol. 2021, 2, 267–277. [Google Scholar] [CrossRef]
  106. Mazzarini, A.; Fantozzi, M.; Papapicco, V.; Fagioli, I.; Lanotte, F.; Baldoni, A.; Dell’Agnello, F.; Ferrara, P.; Ciapetti, T.; Molino Lova, R.; et al. A low-power ankle-foot prosthesis for push-off enhancement. Wearable Technol. 2023, 4, e18. [Google Scholar] [CrossRef]
  107. Shepherd, M.K.; Rouse, E.J. The VSPA foot: A quasi-passive ankle-foot prosthesis with continuously variable stiffness. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 2375–2386. [Google Scholar] [CrossRef]
  108. Lenzi, T.; Cempini, M.; Newkirk, J.; Hargrove, L.J.; Kuiken, T.A. A lightweight robotic ankle prosthesis with non-backdrivable cam-based transmission. In Proceedings of the 2017 International Conference on Rehabilitation Robotics (ICORR), London, UK, 17–20 July 2017; pp. 1142–1147. [Google Scholar]
  109. Best, T.K.; Welker, C.G.; Rouse, E.J.; Gregg, R.D. Data-driven variable impedance control of a powered knee–ankle prosthesis for adaptive speed and incline walking. IEEE Trans. Robot. 2023, 39, 2151–2169. [Google Scholar] [CrossRef] [PubMed]
  110. Mendez, J.; Hood, S.; Gunnel, A.; Lenzi, T. Powered knee and ankle prosthesis with indirect volitional swing control enables level-ground walking and crossing over obstacles. Sci. Robot. 2020, 5, eaba6635. [Google Scholar] [CrossRef] [PubMed]
  111. AlQahtani, N.J.; Al-Naib, I.; Althobaiti, M. Recent progress on smart lower prosthetic limbs: A comprehensive review on using EEG and fNIRS devices in rehabilitation. Front. Bioeng. Biotechnol. 2024, 12, 1454262. [Google Scholar] [CrossRef] [PubMed]
  112. AlQahtani, N.J.; Al-Naib, I.; Ateeq, I.S.; Althobaiti, M. Hybrid functional near-infrared spectroscopy system and electromyography for prosthetic knee control. Biosensors 2024, 14, 553. [Google Scholar] [CrossRef]
  113. Al-Halawani, R.; Qassem, M.; Kyriacou, P.A. Monte Carlo simulation of the effect of melanin concentration on light–tissue interactions in reflectance pulse oximetry. Sensors 2025, 25, 559. [Google Scholar] [CrossRef]
  114. Saal, H.P.; Bensmaia, S.J. Biomimetic approaches to bionic touch through a peripheral nerve interface. Neuropsychologia 2015, 79, 344–353. [Google Scholar] [CrossRef]
  115. Petrini, F.M.; Valle, G.; Bumbasirevic, M.; Barberi, F.; Bortolotti, D.; Cvancara, P.; Hiairrassary, A.; Mijovic, P.; Sverrisson, A.Ö.; Pedrocchi, A.; et al. Enhancing functional abilities and cognitive integration of the lower limb prosthesis. Sci. Transl. Med. 2019, 11, eaav8939. [Google Scholar] [CrossRef]
  116. Gerratt, A.P.; Michaud, H.O.; Lacour, S.P. Elastomeric electronic skin for prosthetic tactile sensation. Adv. Funct. Mater. 2015, 25, 2287–2295. [Google Scholar] [CrossRef]
  117. Varaganti, P.; Seo, S. Recent advances in biomimetics for the development of bio-inspired prosthetic limbs. Biomimetics 2024, 9, 273. [Google Scholar] [CrossRef]
  118. Alkhouri, K.I. Neuralink’s brain-computer interfaces and the reshaping of religious-psychological experience. Conatus-J. Philos. 2025, 10, 9–56. [Google Scholar]
  119. Neuralink. A Year of Telepathy. 2025. Available online: https://neuralink.com/updates/a-year-of-telepathy/ (accessed on 22 November 2025).
  120. U.S. National Library of Medicine. Feasibility Study of the Neuralink N1 Implant in People with Quadriplegia. 2025. Available online: https://www.clinicaltrials.gov/study/NCT06429735 (accessed on 22 November 2025).
  121. Jiang, L.; Stocco, A.; Losey, D.M.; Abernethy, J.A.; Prat, C.S.; Rao, R.P.N. BrainNet: A multi-person brain-to-brain interface for direct collaboration between brains. Sci. Rep. 2019, 9, 6115. [Google Scholar] [CrossRef]
  122. Lee, W.; Kim, S.; Kim, B.; Lee, C.; Chung, Y.A.; Kim, L.; Yoo, S.S. Non-invasive transmission of sensorimotor information in humans using an EEG/focused ultrasound brain-to-brain interface. PLoS ONE 2017, 12, e0178476. [Google Scholar] [CrossRef]
  123. Xu, Z.; Truong, N.D.; Nikpour, A.; Kavehei, O. A miniaturized and low-energy subcutaneous optical telemetry module for neurotechnology. J. Neural Eng. 2023, 20, 036017. [Google Scholar] [CrossRef]
  124. Vansteensel, M.J.; Pels, E.G.; Bleichner, M.G.; Branco, M.P.; Denison, T.; Freudenburg, Z.V.; Gosselaar, P.; Leinders, S.; Ottens, T.H.; Van Den Boom, M.A.; et al. Fully implanted brain–computer interface in a locked-in patient with ALS. N. Engl. J. Med. 2016, 375, 2060–2066. [Google Scholar] [CrossRef]
  125. Chaudhary, U.; Xia, B.; Silvoni, S.; Cohen, L.G.; Birbaumer, N. Brain–computer interface–based communication in the completely locked-in state. PLoS Biol. 2017, 15, e1002593. [Google Scholar] [CrossRef]
  126. Chaudhary, U.; Vlachos, I.; Zimmermann, J.B.; Espinosa, A.; Tonin, A.; Jaramillo-Gonzalez, A.; Khalili-Ardali, M.; Topka, H.; Lehmberg, J.; Friehs, G.M.; et al. Spelling interface using intracortical signals in a completely locked-in patient enabled via auditory neurofeedback training. Nat. Commun. 2022, 13, 1236. [Google Scholar] [CrossRef]
  127. Zhang, M.; Yan, W.; Ma, W.; Deng, Y.; Song, W. Self-Powered Hybrid Motion and Health Sensing System Based on Triboelectric Nanogenerators. Small 2024, 20, 2402452. [Google Scholar] [CrossRef]
  128. Corsi, M.; Paghi, A.; Mariani, S.; Golinelli, G.; Debrassi, A.; Egri, G.; Leo, G.; Vandini, E.; Vilella, A.; Dähne, L.; et al. Bioresorbable Nanostructured Chemical Sensor for Monitoring of pH Level In Vivo. Adv. Sci. 2022, 9, 2202062. [Google Scholar] [CrossRef]
  129. Heo, Y.J.; Kim, S.H. Toward long-term implantable glucose biosensors for clinical use. Appl. Sci. 2019, 9, 2158. [Google Scholar] [CrossRef]
  130. Nirwal, G.K.; Wu, K.Y.; Ramnawaz, T.P.; Xu, Y.; Carbonneau, M.; Nguyen, B.H.; Tran, S.D. Chapter Ten—Implantable biosensors: Advancements and applications. In Biosensing the Future: Wearable, Ingestible and Implantable Technologies for Health and Wellness Monitoring Part B; Progress in Molecular Biology and Translational Science; Academic Press: Cambridge, MA, USA, 2025; Volume 216, pp. 279–312. [Google Scholar]
131. Liu, T.; Liu, L.; Gou, G.Y.; Fang, Z.; Sun, J.; Chen, J.; Cheng, J.; Han, M.; Ma, T.; Liu, C.; et al. Recent Advancements in Physiological, Biochemical, and Multimodal Sensors Based on Flexible Substrates: Strategies, Technologies, and Integrations. ACS Appl. Mater. Interfaces 2023, 15, 21721–21745. [Google Scholar] [CrossRef]
  132. Li, S.H.; Lin, B.S.; Tsai, C.H.; Yang, C.T.; Lin, B.S. Design of Wearable Breathing Sound Monitoring System for Real-Time Wheeze Detection. Sensors 2017, 17, 171. [Google Scholar] [CrossRef] [PubMed]
  133. Kim, J.; Chou, E.F.; Le, J.; Wong, S.; Chu, M.; Khine, M. Soft wearable pressure sensors for beat-to-beat blood pressure monitoring. Adv. Healthc. Mater. 2019, 8, 1900109. [Google Scholar] [CrossRef] [PubMed]
  134. Kassanos, P.; Rosa, B.G.; Keshavarz, M.; Yang, G.Z. From wearables to implantables—Clinical drive and technical challenges. In Wearable Sensors; Elsevier: Amsterdam, The Netherlands, 2021; pp. 29–84. [Google Scholar]
  135. Chen, C.; Zhao, X.L.; Li, Z.H.; Zhu, Z.G.; Qian, S.H.; Flewitt, A.J. Current and emerging technology for continuous glucose monitoring. Sensors 2017, 17, 182. [Google Scholar] [CrossRef]
  136. Kumar, K.V.; Yerraguntla, K.R.; Jenne, M.P.; Gadi, A.; Sepoori, A.; Gunda, A.; Gudivada, M.S. Advancements in continuous glucose monitoring: A revolution in diabetes management. Biomed. Mater. Devices 2025. [Google Scholar] [CrossRef]
  137. Harun-Or-Rashid, M.; Aktar, M.N.; Preda, V.; Nasiri, N. Advances in electrochemical sensors for real-time glucose monitoring. Sens. Diagn. 2024, 3, 893–913. [Google Scholar] [CrossRef]
  138. Mi, Z.; Xia, Y.; Dong, H.; Shen, Y.; Feng, Z.; Hong, Y.; Zhu, H.; Yin, B.; Ji, Z.; Xu, Q.; et al. Microfluidic Wearable Electrochemical Sensor Based on MOF-Derived Hexagonal Rod-Shaped Porous Carbon for Sweat Metabolite and Electrolyte Analysis. Anal. Chem. 2024, 96, 16676–16685. [Google Scholar] [CrossRef]
  139. He, C.; Tao, M.; Zhang, C.; He, Y.; Xu, W.; Liu, Y.; Zhu, W. Microelectrode-based electrochemical sensing technology for in vivo detection of dopamine: Recent developments and future prospects. Crit. Rev. Anal. Chem. 2022, 52, 544–554. [Google Scholar] [CrossRef]
  140. Wang, L.C.; Guo, Z.J.; Xi, Y.; Wang, M.H.; Ji, B.W.; Tian, H.C.; Kang, X.Y.; Liu, J.Q. Implantable Brain Computer Interface Devices Based on Mems Technology. In Proceedings of the 2021 IEEE 34th International Conference on Micro Electro Mechanical Systems (MEMS), Munich, Germany, 24–28 January 2021; pp. 250–255. [Google Scholar]
141. Nam, J.; Lim, H.K.; Kim, N.H.; Park, J.K.; Kang, E.S.; Kim, Y.T.; Heo, C.; Lee, O.S.; Kim, S.G.; Yun, W.S.; et al. Supramolecular peptide hydrogel-based soft neural interface augments brain signals through a three-dimensional electrical network. ACS Nano 2020, 14, 664–675. [Google Scholar] [CrossRef]
  142. Rinoldi, C.; Ziai, Y.; Zargarian, S.S.; Nakielski, P.; Zembrzycki, K.; Haghighat Bayan, M.A.; Zakrzewska, A.B.; Fiorelli, R.; Lanzi, M.; Kostrzewska-Ksiezyk, A.; et al. In vivo chronic brain cortex signal recording based on a soft conductive hydrogel biointerface. ACS Appl. Mater. Interfaces 2022, 15, 6283–6296. [Google Scholar] [CrossRef]
  143. Wang, R. Innovative Applications of Nanotechnology in Neuroscience and Brain-computer Interfaces. Appl. Comput. Eng. 2025, 126, 148–154. [Google Scholar] [CrossRef]
  144. Su, Z.; Yang, J.; Wei, X.; Sun, L.; Tao, T.H.; Zhou, Z. A MEMS-based miniaturized wireless fully-implantable brain-computer interface system. In Proceedings of the 2025 IEEE 38th International Conference on Micro Electro Mechanical Systems (MEMS), Kaohsiung, Taiwan, 19–23 January 2025; pp. 445–448. [Google Scholar]
  145. Parikh, P.M.; Venniyoor, A. Neuralink and brain–computer interface—Exciting times for artificial intelligence. South Asian J. Cancer 2024, 13, 063–065. [Google Scholar] [CrossRef]
  146. Schulze-Bonhage, A. Brain stimulation as a neuromodulatory epilepsy therapy. Seizure 2017, 44, 169–175. [Google Scholar] [CrossRef] [PubMed]
  147. Sun, F.T.; Morrell, M.J. The RNS system: Responsive cortical stimulation for the treatment of refractory partial epilepsy. Expert Rev. Med. Devices 2014, 11, 563–572. [Google Scholar] [CrossRef] [PubMed]
  148. Li, F.; Gong, B.; Sheng, H.; Song, Z.; Yu, Y.; Yang, Y. The best indices of anaesthesia depth monitored by electroencephalogram in different age groups. Int. J. Neurosci. 2024, 136, 37–45. [Google Scholar] [CrossRef] [PubMed]
  149. Shalbaf, A.; Saffar, M.; Sleigh, J.W.; Shalbaf, R. Monitoring the depth of anesthesia using a new adaptive neurofuzzy system. IEEE J. Biomed. Health Inform. 2017, 22, 671–677. [Google Scholar] [CrossRef]
  150. Formaggio, E.; Tonellato, M.; Antonini, A.; Castiglia, L.; Gallo, L.; Manganotti, P.; Masiero, S.; Del Felice, A. Oscillatory EEG-TMS reactivity in Parkinson disease. J. Clin. Neurophysiol. 2023, 40, 263–268. [Google Scholar] [CrossRef]
  151. Lamoš, M.; Bočková, M.; Goldemundová, S.; Baláž, M.; Chrastina, J.; Rektor, I. The effect of deep brain stimulation in Parkinson’s disease reflected in EEG microstates. npj Park. Dis. 2023, 9, 63. [Google Scholar] [CrossRef]
  152. Rathee, A.; Poongodi, T.; Yadav, M.; Balusamy, B. Internet of things in healthcare wearable and implantable body sensor network (WIBSNs). In Soft Computing in Wireless Sensor Networks; Chapman and Hall/CRC: Boca Raton, FL, USA, 2018; pp. 193–224. [Google Scholar]
  153. Yoon, S.; Yoon, H.; Zahed, M.A.; Park, C.; Kim, D.; Park, J.Y. Multifunctional hybrid skin patch for wearable smart healthcare applications. Biosens. Bioelectron. 2022, 196, 113685. [Google Scholar] [CrossRef]
  154. Ma, X.; Ahadian, S.; Liu, S.; Zhang, J.; Liu, S.; Cao, T.; Lin, W.; Wu, D.; de Barros, N.R.; Zare, M.R.; et al. Smart contact lenses for biosensing applications. Adv. Intell. Syst. 2021, 3, 2000263. [Google Scholar] [CrossRef]
  155. Li, D.; Gao, W. Physiological state assessment and prediction based on multi-sensor fusion in body area network. Biomed. Signal Process. Control 2021, 65, 102340. [Google Scholar]
  156. Kärcher, S.M.; Fenzlaff, S.; Hartmann, D.; Nagel, S.K.; König, P. Sensory augmentation for the blind. Front. Hum. Neurosci. 2012, 6, 37. [Google Scholar] [CrossRef] [PubMed]
  157. Thomson, E.E.; Carra, R.; Nicolelis, M.A.L. Perceiving invisible light through a somatosensory cortical prosthesis. Nat. Commun. 2013, 4, 1482. [Google Scholar] [CrossRef] [PubMed]
  158. Sadeghi, R.; Kartha, A.; Barry, M.P.; Bradley, C.; Gibson, P.; Caspi, A.; Roy, A.; Dagnelie, G. Glow in the dark: Using a heat-sensitive camera for blind individuals with prosthetic vision. Vis. Res. 2021, 184, 23–29. [Google Scholar] [CrossRef] [PubMed]
  159. Sohl-Dickstein, J.; Teng, S.; Gaub, B.M.; Rodgers, C.C.; Li, C.; DeWeese, M.R.; Harper, N.S. A device for human ultrasonic echolocation. IEEE Trans. Biomed. Eng. 2015, 62, 1526–1534. [Google Scholar] [CrossRef]
  160. Peterson, V.; Spagnolo, V.; Galván, C.M.; Nieto, N.; Spies, R.D.; Milone, D.H. Towards subject-centered co-adaptive brain–computer interfaces based on backward optimal transport. J. Neural Eng. 2025, 22, 046006. [Google Scholar] [CrossRef]
  161. Matt, E.; Mitterwallner, M.; Radjenovic, S.; Grigoryeva, D.; Weber, A.; Stögmann, E.; Domitner, A.; Zettl, A.; Osou, S.; Beisteiner, R. Ultrasound neuromodulation with transcranial pulse stimulation in Alzheimer disease: A randomized clinical trial. JAMA Netw. Open 2025, 8, e2459170. [Google Scholar] [CrossRef]
  162. Ezzyat, Y.; Wanda, P.A.; Levy, D.F.; Kadel, A.; Aka, A.; Pedisich, I.; Sperling, M.R.; Sharan, A.D.; Lega, B.C.; Burks, A.; et al. Closed-loop stimulation of temporal cortex rescues functional networks and improves memory. Nat. Commun. 2018, 9, 365. [Google Scholar] [CrossRef]
  163. Deadwyler, S.A.; Hampson, R.E.; Song, D.; Opris, I.; Gerhardt, G.A.; Marmarelis, V.Z.; Berger, T.W. A cognitive prosthesis for memory facilitation by closed-loop functional ensemble stimulation of hippocampal neurons in primate brain. Exp. Neurol. 2017, 287, 452–460. [Google Scholar] [CrossRef]
164. Kragel, J.E.; Lurie, S.M.; Issa, N.P.; Haider, H.A.; Wu, S.; Tao, J.X.; Warnke, P.C.; Schuele, S.; Rosenow, J.M.; Zelano, C.; et al. Closed-loop control of theta oscillations enhances human hippocampal network connectivity. Nat. Commun. 2025, 16, 4061. [Google Scholar] [CrossRef]
  165. König, S.U.; Schumann, F.; Keyser, J.; Goeke, C.; Krause, C.; Wache, S.; Lytochkin, A.; Ebert, M.; Brunsch, V.; Wahn, B.; et al. Learning new sensorimotor contingencies: Effects of long-term use of sensory augmentation on the brain and conscious perception. PLoS ONE 2016, 11, e0166647. [Google Scholar] [CrossRef]
  166. Kaspar, K.; König, S.; Schwandt, J.; König, P. The experience of new sensorimotor contingencies by sensory augmentation. Conscious. Cogn. 2014, 28, 47–63. [Google Scholar] [CrossRef] [PubMed]
  167. Hou, F.; Zhang, Y.; Zhou, Y.; Zhang, M.; Lv, B.; Wu, J. Review on infrared imaging technology. Sustainability 2022, 14, 11161. [Google Scholar] [CrossRef]
  168. Jayasundera, S.A.B.N.; Peiris, M.P.W.S.S.; Rathnayake, R.G.G.A.; Aluthge, A.D.K.H.; Geethanjana, H.K.A. Investigating the Efficacy of Brain-Computer Interfaces in Enhancing Cognitive Abilities for Direct Brain-to-Machine Communication. Int. J. Adv. ICT Emerg. Reg. (ICTer) 2025, 18, 157–164. [Google Scholar] [CrossRef]
  169. Jangwan, N.S.; Ashraf, G.M.; Ram, V.; Singh, V.; Alghamdi, B.S.; Abuzenadah, A.M.; Singh, M.F. Brain augmentation and neuroscience technologies: Current applications, challenges, ethics and future prospects. Front. Syst. Neurosci. 2022, 16, 1000495. [Google Scholar] [CrossRef] [PubMed]
  170. Jang, H.; Lee, J.; Beak, C.J.; Biswas, S.; Lee, S.H.; Kim, H. Flexible Neuromorphic Electronics for Wearable Near-Sensor and In-Sensor Computing Systems. Adv. Mater. 2025, 37, 2416073. [Google Scholar] [CrossRef]
  171. Ezzyat, Y.; Kragel, J.E.; Burke, J.F.; Levy, D.F.; Lyalenko, A.; Wanda, P.; O’Sullivan, L.; Hurley, K.B.; Busygin, S.; Pedisich, I.; et al. Direct brain stimulation modulates encoding states and memory performance in humans. Curr. Biol. 2017, 27, 1251–1258. [Google Scholar] [CrossRef]
  172. Song, D.; Harway, M.; Marmarelis, V.Z.; Hampson, R.E.; Deadwyler, S.A.; Berger, T.W. Extraction and restoration of hippocampal spatial memories with non-linear dynamical modeling. Front. Syst. Neurosci. 2014, 8, 97. [Google Scholar] [CrossRef]
  173. Kucewicz, M.T.; Worrell, G.A.; Axmacher, N. Direct electrical brain stimulation of human memory: Lessons learnt and future perspectives. Brain 2023, 146, 2214–2226. [Google Scholar] [CrossRef]
  174. Kapsetaki, M.E. Brain-computer interfaces for memory enhancement: Scientometric analysis and future directions. Biomed. Signal Process. Control 2026, 112, 108904. [Google Scholar] [CrossRef]
  175. Kumar, Y.; Kumar, J.; Sheoran, P. Integration of cloud computing in BCI: A review. Biomed. Signal Process. Control 2024, 87, 105548. [Google Scholar] [CrossRef]
  176. Rizzo, L.; Cicirelli, F.; D’Amore, F.; Gentile, A.F.; Guerrieri, A.; Vinci, A. Using brain-computer interface in cognitive buildings: A real-time case study. In Proceedings of the 2025 IEEE 5th International Conference on Human-Machine Systems (ICHMS), Abu Dhabi, United Arab Emirates, 26–28 May 2025; pp. 433–436. [Google Scholar]
  177. Sladky, V.; Nejedly, P.; Mivalt, F.; Brinkmann, B.H.; Kim, I.; St. Louis, E.K.; Gregg, N.M.; Lundstrom, B.N.; Crowe, C.M.; Attia, T.P.; et al. Distributed brain co-processor for tracking spikes, seizures and behaviour during electrical brain stimulation. Brain Commun. 2022, 4, fcac115. [Google Scholar] [CrossRef]
  178. Martins, N.R.; Angelica, A.; Chakravarthy, K.; Svidinenko, Y.; Boehm, F.J.; Opris, I.; Lebedev, M.A.; Swan, M.; Garan, S.A.; Rosenfeld, J.V.; et al. Human brain/cloud interface. In Advances in Clinical Immunology, Medical Microbiology, COVID-19, and Big Data; Jenny Stanford Publishing: Singapore, 2021; pp. 485–538. [Google Scholar]
  179. Vakilipour, P.; Fekrvand, S. Brain-to-brain interface technology: A brief history, current state, and future goals. Int. J. Dev. Neurosci. 2024, 84, 351–367. [Google Scholar] [CrossRef]
  180. Grau, C.; Ginhoux, R.; Riera, A.; Nguyen, T.L.; Chauvat, H.; Berg, M.; Amengual, J.L.; Pascual-Leone, A.; Ruffini, G. Conscious brain-to-brain communication in humans using non-invasive technologies. PLoS ONE 2014, 9, e105225. [Google Scholar] [CrossRef]
  181. Yoo, S.S.; Kim, H.; Filandrianos, E.; Taghados, S.J.; Park, S. Non-invasive brain-to-brain interface (BBI): Establishing functional links between two brains. PLoS ONE 2013, 8, e60410. [Google Scholar]
  182. Zhang, S.; Yuan, S.; Huang, L.; Zheng, X.; Wu, Z.; Xu, K.; Pan, G. Human mind control of rat cyborg’s continuous locomotion with wireless brain-to-brain interface. Sci. Rep. 2019, 9, 1321. [Google Scholar] [CrossRef]
  183. Lu, L.; Wang, R.; Luo, M. An optical brain-to-brain interface supports rapid information transmission for precise locomotion control. Sci. China Life Sci. 2020, 63, 875–885. [Google Scholar]
  184. Menon, S.V.; Tirkey, R.; Singh, V. Real-time streaming in distributed and cooperative sensing networks. In Proceedings of the 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kamand, India, 24–28 June 2024; pp. 1–6. [Google Scholar]
  185. Li, Q.; Wang, W.; Yin, H.; Zou, K.; Jiao, Y.; Zhang, Y. One-dimensional implantable sensors for accurately monitoring physiological and biochemical signals. Research 2024, 7, 0507. [Google Scholar] [CrossRef]
  186. Mou, X.; Lennartz, M.R.; Loegering, D.J.; Stenken, J.A. Long-term calibration considerations during subcutaneous microdialysis sampling in mobile rats. Biomaterials 2010, 31, 4530–4539. [Google Scholar] [CrossRef]
  187. Leung, K.K.; Downs, A.M.; Ortega, G.; Kurnik, M.; Plaxco, K.W. Elucidating the mechanisms underlying the signal drift of electrochemical aptamer-based sensors in whole blood. ACS Sens. 2021, 6, 3340–3347. [Google Scholar]
  188. Wan, J.; Nie, Z.; Xu, J.; Zhang, Z.; Yao, S.; Xiang, Z.; Lin, X.; Lu, Y.; Xu, C.; Zhao, P.; et al. Millimeter-scale magnetic implants paired with a fully integrated wearable device for wireless biophysical and biochemical sensing. Sci. Adv. 2024, 10, eadm9314. [Google Scholar] [CrossRef]
  189. Kyrolainen, M.; Rigsby, P.; Eddy, S.; Vadgama, P. Bio-/haemocompatibility: Implications and outcomes for sensors? Acta Anaesthesiol. Scand. 1995, 39, 55–60. [Google Scholar] [CrossRef]
  190. Collins, K.L.; Guterstam, A.; Cronin, J.; Olson, J.D.; Ehrsson, H.H.; Ojemann, J.G. Ownership of an artificial limb induced by electrical brain stimulation. Proc. Natl. Acad. Sci. USA 2017, 114, 166–171. [Google Scholar] [CrossRef] [PubMed]
  191. Losey, D.M.; Stocco, A.; Abernethy, J.A.; Rao, R.P.N. Navigating a 2D virtual world using direct brain stimulation. Front. Robot. AI 2016, 3, 72. [Google Scholar] [CrossRef]
  192. Choi, Y.W.; Shin, H.B.; Lee, S.W. Brain-guided self-paced curriculum learning for adaptive human-machine interfaces. IEEE Trans. Syst. Man Cybern. Syst. 2025, 55, 4693–4704. [Google Scholar] [CrossRef]
  193. Liu, M.; Zhang, Y.; Tao, T.H. Recent progress in bio-integrated intelligent sensing system. Adv. Intell. Syst. 2022, 4, 2100280. [Google Scholar] [CrossRef]
  194. Kazanskiy, N.L.; Khorin, P.A.; Khonina, S.N. Biochips on the move: Emerging trends in wearable and implantable lab-on-chip health monitors. Electronics 2025, 14, 3224. [Google Scholar] [CrossRef]
  195. Sun, Z.; Tao, R.; Xiong, N.; Pan, X. CS-PLM: Compressive sensing data gathering algorithm based on packet loss matching in sensor networks. Wirel. Commun. Mob. Comput. 2018, 2018, 5131949. [Google Scholar] [CrossRef]
  196. Suman, S.; Mamidanna, P.; Nielsen, J.J.; Chiariotti, F.; Stefanović, Č.; Došen, S.; Popovski, P. Closed-loop manual control with tactile or visual feedback under wireless link impairments. IEEE Trans. Haptics 2025, 18, 352–361. [Google Scholar] [CrossRef]
  197. Zhang, W.; Flores, H.; Hui, P. Towards collaborative multi-device computing. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Athens, Greece, 19–23 March 2018; pp. 22–27. [Google Scholar]
  198. Marblestone, A.H.; Zamft, B.M.; Maguire, Y.G.; Shapiro, M.G.; Cybulski, T.R.; Glaser, J.I.; Amodei, D.; Stranges, P.B.; Kalhor, R.; Dalrymple, D.A.; et al. Physical principles for scalable neural recording. Front. Comput. Neurosci. 2013, 7, 137. [Google Scholar] [CrossRef]
  199. Wolf, P.D. Thermal considerations for the design of an implanted cortical brain–machine interface (BMI). In Indwelling Neural Implants: Strategies for Contending with the In Vivo Environment; CRC Press/Taylor & Francis: Boca Raton, FL, USA, 2008; Chapter 3. [Google Scholar]
  200. Seese, T.M.; Harasaki, H.; Saidel, G.M.; Davies, C.R. Characterization of tissue morphology, angiogenesis, and temperature in the adaptive response of muscle tissue to chronic heating. Lab. Investig. 1998, 78, 1553–1562. [Google Scholar]
  201. Kahn, A.R.; Chow, E.Y.; Abdel-Latief, O.; Irazoqui, P.P. Low-power, high data rate transceiver system for implantable prostheses. Int. J. Telemed. Appl. 2010, 2010, 563903. [Google Scholar] [CrossRef]
202. Thomas, S.J.; Besnoff, J.S.; Reynolds, M.S. Modulated backscatter for ultra-low power uplinks from wearable and implantable devices. In Proceedings of the 2012 ACM Workshop on Medical Communication Systems (MedCOMM ’12), Helsinki, Finland, 13–17 August 2012; pp. 1–6. [Google Scholar]
  203. Mutashar, S.; Hannan, M.A.; Samad, S.A.; Hussain, A. Analysis and optimization of spiral circular inductive coupling link for bio-implanted applications on air and within human tissue. Sensors 2014, 14, 11522–11541. [Google Scholar] [CrossRef]
  204. Andersen, E.; Casados, C.; Truong, B.D.; Roundy, S. Optimal transmit coil design for wirelessly powered biomedical implants considering magnetic field safety constraints. IEEE Trans. Electromagn. Compat. 2021, 63, 1735–1747. [Google Scholar] [CrossRef]
  205. Silchenko, A.N.; Tass, P.A. Mathematical modeling of chemotaxis and glial scarring around implanted electrodes. New J. Phys. 2015, 17, 023009. [Google Scholar] [CrossRef]
  206. Earley, E.J.; Mastinu, E.; Ortiz-Catalan, M. Cross-channel impedance measurement for monitoring implanted electrodes. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, UK, 11–15 July 2022; pp. 4880–4883. [Google Scholar]
  207. González-González, M.A.; Conde, S.V.; Latorre, R.; Thébault, S.C.; Pratelli, M.; Spitzer, N.C.; Verkhratsky, A.; Tremblay, M.È.; Akcora, C.G.; Hernández-Reynoso, A.G.; et al. Bioelectronic medicine: A multidisciplinary roadmap from biophysics to precision therapies. Front. Integr. Neurosci. 2024, 18, 1321872. [Google Scholar] [CrossRef]
  208. Polikov, V.S.; Tresco, P.A.; Reichert, W.M. Response of brain tissue to chronically implanted neural electrodes. J. Neurosci. Methods 2005, 148, 1–18. [Google Scholar] [CrossRef]
  209. del Valle, J.; Rodríguez-Meana, B.; Navarro, X. Neural electrodes for long-term tissue interfaces. In Somatosensory Feedback for Neuroprosthetics; Güçlü, B., Ed.; Academic Press: Cambridge, MA, USA, 2021; Chapter 16; pp. 509–536. [Google Scholar]
  210. Sohal, H.S.; Clowry, G.J.; Jackson, A.; O’Neill, A.; Baker, S.N. Mechanical flexibility reduces the foreign body response to long-term implanted microelectrodes in rabbit cortex. PLoS ONE 2016, 11, e0165606. [Google Scholar] [CrossRef]
  211. Vanhoestenberghe, A.; Donaldson, N. Corrosion of silicon integrated circuits and lifetime predictions in implantable electronic devices. J. Neural Eng. 2013, 10, 031002. [Google Scholar] [CrossRef]
  212. Jiang, G.; Zhou, D.D. Technology advances and challenges in hermetic packaging for implantable medical devices. In Implantable Neural Prostheses 2: Techniques and Engineering Approaches; Springer: New York, NY, USA, 2010; pp. 27–61. [Google Scholar]
  213. Cogan, S.F. Neural stimulation and recording electrodes. Annu. Rev. Biomed. Eng. 2008, 10, 275–309. [Google Scholar] [CrossRef]
  214. Takmakov, P.; Ruda, K.; Phillips, K.S.; Isayeva, I.S.; Krauthamer, V.; Welle, C.G. Rapid evaluation of the durability of cortical neural implants using accelerated aging with reactive oxygen species. J. Neural Eng. 2015, 12, 026003. [Google Scholar] [CrossRef]
  215. Pokharel, P.; Mahajan, A.; Himes, A.; Lowell, M.; Budde, R.; Vijayaraman, P. Mechanisms of damage related to ICD and pacemaker lead interaction. Heart Rhythm O2 2023, 4, 820–822. [Google Scholar] [CrossRef]
  216. Fu, T.M.; Hong, G.; Viveros, R.D.; Zhou, T.; Lieber, C.M. Highly scalable multichannel mesh electronics for stable chronic brain electrophysiology. Proc. Natl. Acad. Sci. USA 2017, 114, E10046–E10055. [Google Scholar] [PubMed]
  217. Hassler, C.; Boretius, T.; Stieglitz, T. Polymers for neural implants. J. Polym. Sci. Part B Polym. Phys. 2011, 49, 18–33. [Google Scholar]
  218. Kato, Y.; Saito, I.; Hoshino, T.; Suzuki, T.; Mabuchi, K. Preliminary study of multichannel flexible neural probes coated with hybrid biodegradable polymer. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York City, NY, USA, 30 August–3 September 2006; pp. 660–663. [Google Scholar]
  219. Choi, Y.S.; Koo, J.; Lee, Y.J.; Lee, G.; Avila, R.; Ying, H.; Reeder, J.; Hambitzer, L.; Im, K.; Kim, J.; et al. Biodegradable polyanhydrides as encapsulation layers for transient electronics. Adv. Funct. Mater. 2020, 30, 2000941. [Google Scholar] [CrossRef]
  220. Dalrymple, A.N.; Robles, U.A.; Huynh, M.; Nayagam, B.A.; Green, R.A.; Poole-Warren, L.A.; Fallon, J.B.; Shepherd, R.K. Electrochemical and biological performance of chronically stimulated conductive hydrogel electrodes. J. Neural Eng. 2020, 17, 026018. [Google Scholar] [CrossRef]
  221. Hashemi Farzaneh, M.; Nair, S.; Nasseri, M.A.; Knoll, A. Reducing communication-related complexity in heterogeneous networked medical systems considering non-functional requirements. In Proceedings of the 16th International Conference on Advanced Communication Technology, PyeongChang, Republic of Korea, 16–19 February 2014; pp. 547–552. [Google Scholar]
  222. Zhang, M.; Raghunathan, A.; Jha, N.K. Trustworthiness of medical devices and body area networks. Proc. IEEE 2014, 102, 1174–1188. [Google Scholar] [CrossRef]
  223. Cong, P. Neural interfaces for implantable medical devices: Circuit design considerations for sensing, stimulation, and safety. IEEE Solid-State Circuits Mag. 2016, 8, 48–56. [Google Scholar]
  224. Valdez, L.D.; Shekhtman, L.; La Rocca, C.E.; Zhang, X.; Buldyrev, S.V.; Trunfio, P.A.; Braunstein, L.A.; Havlin, S. Cascading failures in complex networks. J. Complex Netw. 2020, 8, cnaa013. [Google Scholar] [CrossRef]
  225. Bequette, B.W. Fault detection and safety in closed-loop artificial pancreas systems. J. Diabetes Sci. Technol. 2014, 8, 1204–1214. [Google Scholar] [CrossRef]
226. Doyle, F.J., III; Huyett, L.M.; Lee, J.B.; Zisser, H.C.; Dassau, E. Closed-loop artificial pancreas systems: Engineering the algorithms. Diabetes Care 2014, 37, 1191–1197. [Google Scholar]
  227. Bequette, B.W.; Cameron, F.; Baysal, N.; Howsmon, D.P.; Buckingham, B.A.; Maahs, D.M.; Levy, C.J. Algorithms for a single hormone closed-loop artificial pancreas: Challenges pertinent to chemical process operations and control. Processes 2016, 4, 39. [Google Scholar] [CrossRef]
  228. Trevlakis, S.E.; Boulogeorgos, A.A.A.; Sofotasios, P.C.; Muhaidat, S.; Karagiannidis, G.K. Optical wireless cochlear implants. Biomed. Opt. Express 2019, 10, 707–730. [Google Scholar] [CrossRef]
  229. Alizadeh, H.; Koolivand, Y.; Sodagar, A.M. Pulse-based, multi-beam optical link for data telemetry to implantable biomedical microsystems. In Proceedings of the 2022 20th IEEE Interregional NEWCAS Conference (NEWCAS), Quebec City, QC, Canada, 19–22 June 2022; pp. 529–532. [Google Scholar]
  230. Ahmed, I.; Halder, S.; Bykov, A.; Popov, A.; Meglinski, I.V.; Katz, M. In-body communications exploiting light: A proof-of-concept study using ex vivo tissue samples. IEEE Access 2020, 8, 190378–190389. [Google Scholar] [CrossRef]
  231. Ghanbari, L.; Carter, R.E.; Rynes, M.L.; Dominguez, J.; Chen, G.; Naik, A.; Hu, J.; Sagar, M.A.K.; Haltom, L.; Mossazghi, N.; et al. Cortex-wide neural interfacing via transparent polymer skulls. Nat. Commun. 2019, 10, 1500. [Google Scholar] [CrossRef] [PubMed]
  232. Bennett, C.; Ouellette, B.; Ramirez, T.K.; Cahoon, A.; Cabasco, H.; Browning, Y.; Lakunina, A.; Lynch, G.F.; McBride, E.G.; Belski, H.; et al. SHIELD: Skull-shaped hemispheric implants enabling large-scale electrophysiology datasets in the mouse brain. Neuron 2024, 112, 2869–2885.e8. [Google Scholar] [CrossRef] [PubMed]
  233. Yang, N.; Liu, F.; Zhang, X.; Chen, C.; Xia, Z.; Fu, S.; Wang, J.; Xu, J.; Cui, S.; Zhang, Y.; et al. A hybrid titanium-softmaterial, high-strength, transparent cranial window for transcranial injection and neuroimaging. Biosensors 2022, 12, 129. [Google Scholar] [CrossRef]
  234. Turcotte, R.; Schmidt, C.C.; Emptage, N.J.; Booth, M.J. Focusing light in biological tissue through a multimode optical fiber: Refractive index matching. Opt. Lett. 2019, 44, 2386–2389. [Google Scholar] [CrossRef] [PubMed]
  235. Costantini, I.; Cicchi, R.; Silvestri, L.; Vanzi, F.; Pavone, F.S. In-vivo and ex-vivo optical clearing methods for biological tissues: Review. Biomed. Opt. Express 2019, 10, 5251–5267. [Google Scholar] [CrossRef]
  236. Jaafar, B.; Neasham, J.; Degenaar, P. What ultrasound can and cannot do in implantable medical device communications. IEEE Rev. Biomed. Eng. 2023, 16, 357–370. [Google Scholar] [CrossRef]
  237. Meng, M.; Kiani, M. Design and optimization of ultrasonic wireless power transmission links for millimeter-sized biomedical implants. IEEE Trans. Biomed. Circuits Syst. 2017, 11, 98–107. [Google Scholar] [CrossRef]
  238. Singer, A.; Oelze, M.; Podkowa, A. Mbps experimental acoustic through-tissue communications: MEAT-COMMS. In Proceedings of the 2016 IEEE 17th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Edinburgh, UK, 3–6 July 2016; pp. 1–4. [Google Scholar]
  239. van Neer, P.L.M.J.; Peters, L.C.J.M.; Verbeek, R.G.F.A.; Peeters, B.; de Haas, G.; Hörchens, L.; Fillinger, L.; Schrama, T.; Merks-Swolfs, E.J.W.; Gijsbertse, K.; et al. Flexible large-area ultrasound arrays for medical applications made using embossed polymer structures. Nat. Commun. 2024, 15, 2802. [Google Scholar] [CrossRef]
  240. Kou, Z.; Miller, R.J.; Singer, A.C.; Oelze, M.L. High data rate communications in vivo using ultrasound. IEEE Trans. Biomed. Eng. 2021, 68, 3308–3316. [Google Scholar] [CrossRef]
  241. Jin, P.; Fu, J.; Wang, F.; Zhang, Y.; Wang, P.; Liu, X.; Jiao, Y.; Li, H.; Chen, Y.; Ma, Y.; et al. A flexible, stretchable system for simultaneous acoustic energy transfer and communication. Sci. Adv. 2021, 7, eabg2507. [Google Scholar] [CrossRef]
  242. Nair, V.; Dalrymple, A.N.; Yu, Z.; Balakrishnan, G.; Bettinger, C.J.; Weber, D.J.; Yang, K.; Robinson, J.T. Miniature battery-free bioelectronics. Science 2023, 382, eabn4732. [Google Scholar] [CrossRef]
  243. Zebda, A.; Cosnier, S.; Alcaraz, J.P.; Holzinger, M.; Le Goff, A.; Gondran, C.; Boucher, F.; Giroud, F.; Gorgy, K.; Lamraoui, H.; et al. Single glucose biofuel cells implanted in rats power electronic devices. Sci. Rep. 2013, 3, 1516. [Google Scholar] [CrossRef]
  244. Li, N.; Yi, Z.; Ma, Y.; Xie, F.; Huang, Y.; Tian, Y.; Dong, X.; Liu, Y.; Shao, X.; Li, Y.; et al. Direct powering a real cardiac pacemaker by natural energy of a heartbeat. ACS Nano 2019, 13, 2822–2830. [Google Scholar] [CrossRef]
245. Ryu, H.; Park, H.M.; Kim, M.K.; Kim, B.; Myoung, H.S.; Kim, T.Y.; Yoon, H.J.; Kwak, S.S.; Kim, J.; Hwang, T.H.; et al. Self-rechargeable cardiac pacemaker system with triboelectric nanogenerators. Nat. Commun. 2021, 12, 4374. [Google Scholar] [CrossRef]
  246. Che, Z.; O’Donovan, S.; Xiao, X.; Wan, X.; Chen, G.; Zhao, X.; Zhou, Y.; Yin, J.; Chen, J. Implantable triboelectric nanogenerators for self-powered cardiovascular healthcare. Small 2023, 19, 2207600. [Google Scholar] [CrossRef]
  247. Ren, W.; Sun, Y.; Zhao, D.; Aili, A.; Zhang, S.; Shi, C.; Zhang, J.; Geng, H.; Zhang, J.; Zhang, L.; et al. High-performance wearable thermoelectric generator with self-healing, recycling, and Lego-like reconfiguring capabilities. Sci. Adv. 2021, 7, eabe0586. [Google Scholar] [CrossRef]
  248. Wang, H.; Peng, Y.; Peng, H.; Zhang, J. Fluidic phase-change materials with continuous latent heat from theoretically tunable ternary metals for efficient thermal management. Proc. Natl. Acad. Sci. USA 2022, 119, e2200223119. [Google Scholar] [CrossRef]
  249. Jung, Y.; Kim, M.; Kim, T.; Ahn, J.; Lee, J.; Ko, S.H. Functional materials and innovative strategies for wearable thermal management applications. Nano-Micro Lett. 2023, 15, 160. [Google Scholar] [CrossRef]
  250. Reeder, J.T.; Xie, Z.; Yang, Q.; Seo, M.H.; Yan, Y.; Deng, Y.; Jinkins, K.R.; Krishnan, S.R.; Liu, C.; McKay, S.; et al. Soft, bioresorbable coolers for reversible conduction block of peripheral nerves. Science 2022, 377, 109–115. [Google Scholar] [CrossRef]
  251. Cooke, D.F.; Goldring, A.B.; Yamayoshi, I.; Tsourkas, P.; Recanzone, G.H.; Tiriac, A.; Pan, T.; Simon, S.I.; Krubitzer, L. Fabrication of an inexpensive, implantable cooling device for reversible brain deactivation in animals ranging from rodents to primates. J. Neurophysiol. 2012, 107, 3543–3558. [Google Scholar] [CrossRef]
  252. Zhang, E.N.; Clément, J.P.; Alameri, A.; Ng, A.; Kennedy, T.E.; Juncker, D. Mechanically matched silicone brain implants reduce brain foreign body response. Adv. Mater. Technol. 2021, 6, 2000909. [Google Scholar] [CrossRef]
  253. Nguyen, J.K.; Park, D.J.; Skousen, J.L.; Hess-Dunning, A.E.; Tyler, D.J.; Rowan, S.J.; Weder, C.; Capadona, J.R. Mechanically-compliant intracortical implants reduce the neuroinflammatory response. J. Neural Eng. 2014, 11, 056014. [Google Scholar] [CrossRef]
  254. Yu, K.; He, T. Silver-nanowire-based elastic conductors: Preparation processes and substrate adhesion. Polymers 2023, 15. [Google Scholar] [CrossRef]
  255. Deng, Y.; Bu, F.; Wang, Y.; Chee, P.S.; Liu, X.; Guan, C. Stretchable liquid metal based biomedical devices. npj Flex. Electron. 2024, 8, 12. [Google Scholar] [CrossRef]
  256. Chung, M.; Nirmale, V.S.; Reddy, V.S.; Koutsos, V.; Ramakrishna, S.; Radacsi, N. Enhancing the Performance of Wearable Flexible Sensors via Electrospinning. ACS Appl. Mater. Interfaces 2025, 17, 39747–39771. [Google Scholar] [CrossRef]
  257. Zhou, T.; Hong, G.; Fu, T.M.; Yang, X.; Schuhmann, T.G.; Viveros, R.D.; Lieber, C.M. Syringe-injectable mesh electronics integrate seamlessly with minimal chronic immune response in the brain. Proc. Natl. Acad. Sci. USA 2017, 114, 5894–5899. [Google Scholar]
  258. Place, E.S.; George, J.H.; Williams, C.K.; Stevens, M.M. Synthetic polymer scaffolds for tissue engineering. Chem. Soc. Rev. 2009, 38, 1139–1151. [Google Scholar] [CrossRef]
  259. Wang, Z.; Song, J.; Peng, Y. New insights and perspectives into biodegradable metals in cardiovascular stents: A mini review. J. Alloys Compd. 2024, 1002, 175313. [Google Scholar] [CrossRef]
  260. Fanelli, A.; Ghezzi, D. Transient electronics: New opportunities for implantable neurotechnology. Curr. Opin. Biotechnol. 2021, 72, 22–28. [Google Scholar] [CrossRef]
  261. Kang, S.K.; Murphy, R.K.J.; Hwang, S.W.; Lee, S.M.; Harburg, D.V.; Krueger, N.A.; Shin, J.; Gamble, P.; Cheng, H.; Yu, S.; et al. Bioresorbable silicon electronic sensors for the brain. Nature 2016, 530, 71–76. [Google Scholar] [CrossRef]
  262. Shin, J.; Liu, Z.; Bai, W.; Liu, Y.; Yan, Y.; Xue, Y.; Kandela, I.; Pezhouh, M.; MacEwan, M.R.; Huang, Y.; et al. Bioresorbable optical sensor systems for monitoring of intracranial pressure and temperature. Sci. Adv. 2019, 5, eaaw1899. [Google Scholar] [CrossRef]
  263. Choi, Y.S.; Yin, R.T.; Pfenniger, A.; Koo, J.; Avila, R.; Benjamin Lee, K.; Chen, S.W.; Lee, G.; Li, G.; Qiao, Y.; et al. Fully implantable and bioresorbable cardiac pacemakers without leads or batteries. Nat. Biotechnol. 2021, 39, 1228–1238. [Google Scholar] [CrossRef]
  264. Mekki, Y.M.; Luijten, G.; Hagert, E.; Belkhair, S.; Varghese, C.; Qadir, J.; Solaiman, B.; Bilal, M.; Dhanda, J.; Egger, J.; et al. Digital twins for the era of personalized surgery. npj Digit. Med. 2025, 8, 283. [Google Scholar] [CrossRef]
  265. Koopsen, T.; Gerrits, W.; van Osta, N.; van Loon, T.; Wouters, P.; Prinzen, F.W.; Vernooy, K.; Delhaas, T.; Teske, A.J.; Meine, M.; et al. Virtual pacing of a patient’s digital twin to predict left ventricular reverse remodelling after cardiac resynchronization therapy. EP Eur. 2024, 26, euae009. [Google Scholar] [CrossRef]
266. Drew, L. The Ethics of Brain–Computer Interfaces. Nature 2019, 571, S19–S21. Available online: https://www.nature.com/articles/d41586-019-02214-2 (accessed on 22 November 2025).
  267. Gordon, E.C.; Seth, A.K. Ethical considerations for the use of brain–computer interfaces for cognitive enhancement. PLoS Biol. 2024, 22, e3002899. [Google Scholar] [CrossRef]
268. Han, F.; Chen, H. Does brain-computer interface-based mind reading threaten mental privacy? Ethical reflections from interviews with Chinese experts. BMC Med. Ethics 2025, 26, 134. [Google Scholar] [CrossRef]
Figure 1. Schematic of a multidimensional enhancement framework for a “fusion of lifeform.” A semi-transparent human silhouette at the center represents the biological host. Four color-coded quadrants around the body depict four classes of enhancement systems: the upper-left Class I Restoration illustrates cochlear implants, visual prostheses, and artificial limbs with tactile feedback for reconstructing impaired sensory and motor functions; the lower-left Class II Endogenous Sensing shows implantable sensors that continuously monitor cardiovascular and other physiological/pathological states to support homeostatic regulation; the lower-right Class III Beyond-natural Sensing uses devices such as geomagnetic or infrared skin patches to convert non-natural environmental cues into signals exploitable by the nervous system; the upper-right Class IV Cognitive Enhancement features an implantable brain–computer interface that interacts with memory and cognitive networks to enhance learning and memory. Within each class, green solid arrows indicate closed-loop information flow from external or internal stimuli through sensing front ends and signal encoding/processing modules to neural or effector interfaces, ultimately generating behavioral or physiological responses. Icon sets along the outer ring represent four fundamental axes of fusion—Structure, Energy, Information, and Cognition—while red icons and red dashed arrows pointing back to the loops highlight key system constraints, including bandwidth/latency, power and thermal load, biocompatibility, and safety/ethical considerations.
Figure 2. Principles of cochlear implants and components of the Argus II visual prosthesis system. (A) Working principles of cochlear implants based on electrical versus optical stimulation. Conventional electrical stimulation uses a multichannel electrode array to depolarize spiral ganglion neurons, but current spread often causes channel crosstalk, limiting spectral resolution. In contrast, emerging optical stimulation with micro-LED arrays enables spatially confined activation, which supports more independent channels and improved sound encoding. (B) External components of the Argus II visual prosthesis system. The setup (Second Sight Medical Products, Inc., Sylmar, CA, USA) includes camera-mounted glasses, a video processing unit (VPU) with battery, and an external radio-frequency (RF) coil for wireless communication. (C) Implanted components of the Argus II system. The intraocular implant comprises a 6 × 10 epiretinal electrode array, an electronics case, and a subconjunctival RF coil. Stimulation data and power are transmitted via the RF link between the external and internal coils. (B,C) Adapted from [45], licensed under CC BY 4.0.
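The channel-crosstalk argument in panel (A) can be made concrete with a toy spatial model. The sketch below treats each stimulation channel as an exponentially decaying activation field along the cochlea and compares how much activation leaks onto the nearest neighboring contact for a broad electrical spread versus a tightly confined optical spot. The contact pitch and decay lengths are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

def neighbor_crosstalk(pitch_mm: float, decay_mm: float) -> float:
    """Relative activation leaking onto the adjacent contact when each
    channel's field decays exponentially with distance."""
    return float(np.exp(-pitch_mm / decay_mm))

PITCH_MM = 0.85  # hypothetical inter-contact spacing along the cochlea

# Broad electrical current spread vs. tightly confined optical spot
# (decay lengths are illustrative assumptions).
print(f"electrical: {neighbor_crosstalk(PITCH_MM, decay_mm=2.0):.2f}")
print(f"optical:    {neighbor_crosstalk(PITCH_MM, decay_mm=0.3):.2f}")
```

With these assumed numbers, roughly 65% of an electrical channel's activation lands on the adjacent contact, versus about 6% for the confined optical case, which is the intuition behind "more independent channels" in the caption.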
Figure 4. Schematic of the prosthetic sensory feedback system. (A) Flow of tactile feedback in a prosthetic system. During object grasping, tactile information is converted by the prosthetic controller into neuromorphic signals. These signals are delivered via transcutaneous electrical stimulation to the user’s peripheral nerves, evoking sensory perceptions such as touch and pain. (B) Phantom hand sensations elicited by transcutaneous electrical nerve stimulation. Electrical stimulation of the median nerve in the residual limb elicits sensations localized to the phantom thumb and index finger. (C) Selective stimulation using an implanted cuff electrode. A cuff electrode is implanted around the sciatic, tibial, and peroneal nerves and connected to an external stimulator. Selective stimulation of specific contacts evokes percepts (e.g., at missing toes or the heel, as reported in LL01 and LL02) that correspond to changes in plantar pressure. Adapted from: (A,B) [25]; (C) [103].
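As a rough illustration of the encoding step in panel (A), the sketch below maps fingertip pressure onto the rate of a stimulation pulse train using a saturating linear rate code. The mapping, pressure range, and rate limits are hypothetical placeholders for whichever neuromorphic model a given controller implements; they are not the encoder used in [25].

```python
import numpy as np

def encode_pressure(p_kpa: float, p_max: float = 300.0,
                    f_min: float = 5.0, f_max: float = 100.0) -> float:
    """Map fingertip pressure (kPa) to a stimulation pulse rate (Hz)
    with a saturating linear rate code (all parameters hypothetical)."""
    x = min(max(p_kpa / p_max, 0.0), 1.0)
    return f_min + (f_max - f_min) * x

def pulse_train(p_kpa: float, duration_s: float = 1.0) -> np.ndarray:
    """Timestamps of a regular pulse train at the encoded rate."""
    return np.arange(0.0, duration_s, 1.0 / encode_pressure(p_kpa))

for p in (10, 100, 300):  # light touch -> firm grasp, in kPa
    print(f"{p:3d} kPa -> {encode_pressure(p):6.1f} Hz "
          f"({pulse_train(p).size} pulses/s)")
```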
Figure 5. Electrode placement and system components of a brain–computer interface designed for a patient with ALS. (A) Electrode locations based on fusion of postoperative CT and preoperative MRI. Electrode contacts over the sensorimotor and dorsolateral prefrontal cortex are marked as white dots; electrodes e2 and e3 were selected for BCI feedback. (B) Postoperative chest radiograph showing the subcutaneously implanted Activa PC+S transmitter and the leads connected to the electrodes; two of the leads are shown plugged into the device. (C) Postoperative CT image illustrating the spatial distribution of the four electrode strips; the dot-like structures indicate the electrode connectors. (D) Schematic of the BCI system, including the implanted transmitter, external receiving antenna, signal receiver, and control tablet. Adapted from [124].
Figure 6. Flexible wireless sensing system for asthma management. (A) Structural design of the flexible respiratory sound patch. (B) Schematic of on-body use, with the patch attached to the chest wall to capture breathing sounds. (C) System block diagram for real-time analysis of airway status. (D) Wrist-worn flexible blood-pressure sensor for coordinated monitoring. (A–C) Adapted from [132]; (D) adapted from [133].
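Panel (C) summarizes the airway-status analysis only at block-diagram level. A minimal stand-in for such a pipeline is sketched below: each respiratory-sound window is scored by the fraction of spectral power falling in a wheeze-typical band. The band limits and the synthetic test signal are assumptions for illustration, not the algorithm of [132].

```python
import numpy as np
from scipy.signal import welch

def wheeze_band_ratio(audio: np.ndarray, fs: float,
                      band=(100.0, 1000.0)) -> float:
    """Fraction of respiratory-sound power inside a wheeze-typical band
    (band limits are illustrative assumptions)."""
    f, psd = welch(audio, fs=fs, nperseg=2048)
    in_band = (f >= band[0]) & (f <= band[1])
    return float(psd[in_band].sum() / psd.sum())

# Synthetic test: broadband breath noise, then the same noise plus a
# 400 Hz wheeze-like tone, sampled at 4 kHz for 5 s.
fs = 4000
t = np.arange(0, 5.0, 1.0 / fs)
rng = np.random.default_rng(0)
breath = rng.normal(scale=0.5, size=t.size)
wheeze = 0.8 * np.sin(2 * np.pi * 400.0 * t)

print("breath only:", round(wheeze_band_ratio(breath, fs), 2))
print("with wheeze:", round(wheeze_band_ratio(breath + wheeze, fs), 2))
```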
Figure 7. Implantable neural sensing interfaces for high-fidelity brain-state monitoring. (A) Schematic of a high-signal-to-noise-ratio conductive hydrogel suitable for acute and chronic intracortical and epidural neural recordings, illustrating its interaction with neurons in the mouse brain. (B) Schematic and enlarged view of a wireless miniature brain–computer interface system implanted in a Labrador retriever. (C) Conceptual illustration of the implantation site of the wireless miniature brain–computer interface system in the human brain. (A) Adapted from [141]; (B,C) adapted from [144].
Figure 8. A wearable hybrid skin patch for continuous physiological monitoring. The figure presents (A) an exploded-view schematic of the patch assembly, (B) an optical image of the final device, and detailed views of its key components, including (C) the glucose and (D) pH sensors, culminating in (E) a magnified perspective of the integrated sensing platform. Adapted from [153].
Figure 9. Closed-loop framework for adaptive direct electrical brain stimulation. The left panel illustrates potential scenarios for triggering responsive stimulation across three progressively more distributed and externalized brain–machine interface configurations. The right panel presents an extended example of a data-processing loop, in which neural signals flow from the implantable device to external computers and the cloud for collaborative analysis. By performing real-time or delayed biomarker analysis on continuously recorded intracranial electroencephalography (iEEG) signals (e.g., using machine-learning methods to detect epileptic activity), the system determines the timing and parameters of stimulation, thereby enabling state-dependent optimization of therapeutic and cognitive functions. Adapted from [177].
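The sense–analyze–stimulate loop on the right of Figure 9 can be outlined in a few lines. The sketch below uses a line-length feature (one common iEEG epileptiform-activity proxy) and a fixed threshold to decide when, and with what parameters, to stimulate; the feature choice, threshold, and stimulation parameters are illustrative assumptions rather than the detectors deployed in the systems reviewed in [177].

```python
import numpy as np

def line_length(window: np.ndarray) -> float:
    """Line-length feature, a common proxy for epileptiform iEEG activity."""
    return float(np.abs(np.diff(window)).sum())

def closed_loop(ieeg: np.ndarray, fs: int, win_s: float = 1.0,
                threshold: float = 50.0):
    """Slide a window over the trace; emit a (time, parameters) stimulation
    decision whenever the biomarker crosses threshold. Threshold and
    stimulation parameters are illustrative assumptions."""
    n = int(win_s * fs)
    events = []
    for start in range(0, len(ieeg) - n + 1, n):
        if line_length(ieeg[start:start + n]) > threshold:
            events.append((start / fs, {"amp_mA": 1.0, "freq_Hz": 130}))
    return events

# Synthetic trace: low-amplitude background with a 2 s burst of fast
# 20 Hz activity standing in for an "ictal" biomarker.
fs = 256
rng = np.random.default_rng(1)
x = rng.normal(scale=0.1, size=10 * fs)
t = np.arange(2 * fs) / fs
x[4 * fs:6 * fs] += np.sin(2 * np.pi * 20.0 * t)

print(closed_loop(x, fs))  # stimulation decisions at ~4 s and ~5 s
```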
Figure 10. Overall experimental architecture of the wireless BBI system. On the left, an EEG-based BCI decodes the human user’s left- and right-hand motor imagery and eye-blink commands, which are mapped onto three navigation commands: left turn, right turn, and forward. On the right, a wireless microstimulation module delivers these commands in real time to electrodes implanted in the rat brain, with stimulation sites in the SIBF region (turning behavior) and the medial forebrain bundle (MFB, forward locomotion/virtual reward), enabling human intention to directly drive continuous walking and navigation in the rat. Adapted from [182].
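The intention-to-command routing described in the caption amounts to a small lookup from decoded EEG classes to stimulation targets. Below is a minimal sketch that follows the caption's mapping (motor imagery → SIBF turning cues, eye blink → MFB forward/virtual reward); the side assignments and burst parameters are hypothetical placeholders, not the values used in [182].

```python
from dataclasses import dataclass

@dataclass
class StimCommand:
    site: str        # implanted microstimulation target
    pulses: int      # pulses per command burst
    freq_hz: float   # within-burst pulse rate

# Decoded EEG class -> rat-brain stimulation command, following the
# routing in the caption. Side assignment and burst parameters are
# hypothetical placeholders.
COMMAND_MAP = {
    "imagery_left":  StimCommand("SIBF_left", 10, 100.0),   # left turn
    "imagery_right": StimCommand("SIBF_right", 10, 100.0),  # right turn
    "eye_blink":     StimCommand("MFB", 20, 100.0),         # forward / reward
}

def route(decoded_class: str) -> StimCommand:
    """Translate one decoded BCI output into a stimulation command."""
    if decoded_class not in COMMAND_MAP:
        raise ValueError(f"unknown BCI class: {decoded_class!r}")
    return COMMAND_MAP[decoded_class]

print(route("imagery_left"))
```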
Figure 11. System-level challenges and future directions for fusion of lifeform interfaces. The left panel summarizes five key bottlenecks in building multi-implant, closed-loop fusion interface systems: (1) multi-source heterogeneous sensing fusion across modalities with mismatched sampling rates, delays, and semantic incompatibilities; (2) in-body communication bottlenecks due to limited channels/bandwidth and latency accumulation; (3) stringent power-supply and thermal constraints arising from limited implant volume, wireless power-transfer efficiency, and tissue thermal safety; (4) the biocompatibility–reliability trade-off driven by foreign-body response, micromotion, and long-term encapsulation degradation; and (5) safety and reliability risks in networked systems, including interference, cascading failures, cybersecurity threats, and the lack of standardized benchmarks. The right panel highlights four corresponding directions to address these challenges: layered heterogeneous in-body networks (short-range NIR optical clusters coupled to an ultrasound backbone), sustainable and physiology-aware power/thermal management using multi-source energy and adaptive cooling, advanced biointerfaces enabling long-term stability (soft electronics, bioresorbable devices, and smart coatings), and system-level safety frameworks incorporating redundancy, anomaly detection, fail-safe operation, cybersecurity, standardization, and digital-twin–assisted validation.
Table 1. Systematic comparison of key studies across Class I–II perception systems.
| Ref (Year) | Class/Function | Interface and Site | Key Outputs/Outcomes | Main Limitation/Bottleneck | Key Sensor Metrics |
|---|---|---|---|---|---|
| [20] (2024) | I/Auditory restoration | Fully implantable CI | In vivo guinea pig validation: frequency-selective stimulation; eABR evoked for ∼45–100 dB SPL | Weak off-resonance/band-edge sensitivity; packaging needed; power–range trade-off | 8-ch MEMS; ∼300 mVpp @ 100 dB; <600 μW |
| [21] (2024) | I/Auditory restoration | Noise-reduction technology for CI | Significant improvement in multi-talker speech-in-noise perception | High computational cost; not yet fully implantable in real time | 20% intelligibility (+5 dB SNR); 22 channels; high stability; low-latency RNN |
| [22] (2016) | I/Visual restoration | Epiretinal electrode array (retina) | Stable perception of light and spatial location over 5 years | Extremely low spatial resolution; reliance on external camera | 60 electrodes (20/1260 acuity); chronic 5+ years; wireless inductive, low mW |
| [23] (2015) | I/Visual restoration | Subretinal photodiode array (retina) | Higher visual acuity than epiretinal systems in preclinical studies | Limited field of view; requires external IR projector | 70 μm pixels; preclinical acute; photovoltaic IR, low power |
| [24] (2023) | I/Speech restoration | Intracortical microelectrodes (motor cortex) | Real-time speech synthesis up to ∼62 words/min in paralyzed patients | Invasive interface; limited vocabulary size and long-term stability | 62 wpm speech; 23.8% WER; 80 ms latency; 128 electrodes |
| [25] (2018) | I/Tactile restoration | E-skin sensors with peripheral nerve interface | Restored pain sensations and enabled discrimination of object curvature and sharpness via the prosthesis | Single-subject demonstration; coarse, non-natural sensations | 0–300 kPa range; graded touch–pain; 3 taxels/fingertip; multilayer design for higher epidermal sensitivity |
| [26] (2019) | I/Tactile restoration | Peripheral nerve electrodes (upper limb) | Biomimetic sensory feedback via nerve stimulation improved dexterous bionic hand control and embodiment | Study limited to a single participant | Contact force/torque sensors: 0–25.6 N range, 0.1 N/bit, 30 Hz |
| [27] (2025) | I/Olfactory restoration | Olfactory bulb interface | Induced smell perception via olfactory bulb electrical stimulation | Small sample (n = 5); subjective reports without objective confirmation | Induced smell (3/5 subjects); 1–20 mA, 3.17 Hz; subjective perception |
| [28] (2019) | II/Cardiac sensing | Implantable TENG pressure sensor (ventricle) | Self-powered ultrasensitive sensor enables real-time endocardial pressure monitoring | Limited to animal testing; durability and chronic stability unclear | Self-powered; 1.195 mV/mmHg sensitivity; R² = 0.997; 0–350 mmHg; 10⁸ cycles |
| [29] (2021) | II/Cardiac sensing | Gapless TENG sensor (myocardium) | No-spacer TENG enables precise cardiac monitoring | Preclinical testing in animal model | Self-powered; 3.67 V V_oc, 51.7 nA I_sc; 99.7% HR; 10⁶-cycle stability |
| [30] (2022) | II/Cardiac regulation | TENG-based stimulation interface (myocardium) | Self-triggered pacing improves cardiac function in animal models | Insufficient output energy for large-scale or human application | Self-powered TENG; 0.4–20 V; 20–80 V/cm EF; 100–400 µm depth |
| [31] (2018) | II/Metabolic regulation | TENG sensor with vagus nerve interface | Closed-loop appetite suppression and weight reduction in rats | Unknown long-term biocompatibility; invasive implantation | Battery-free; 0.05–0.12 V pulses; 12-week stability; 40 µW |
| [32] (2018) | II/Urinary control | TENG sensor with SMA actuator (bladder wall) | Autonomous on-demand bladder voiding in underactive models | Early feasibility stage; limited lifespan of actuators | Output 35.6–114 mV for 0–6.86 N; saturates at 0.67 mL |
| [33] (2019) | II/Orthopedic therapy | TENG electrodes at fracture site | Enhanced osteogenesis and bone healing in osteoporotic rats | Low stimulation power; preclinical validation only | TENG 100 V, 1.6 μA; EF 150 V/cm, 250 μm |
Table 2. Systematic comparison of key studies across Class III–IV perception systems.
| Ref (Year) | Class/Function | Interface and Site | Key Outputs/Outcomes | Main Limitation/Bottleneck | Key Sensor Metrics |
|---|---|---|---|---|---|
| [156] (2012) | III/Geomagnetic sense | Vibrotactile belt (waist skin) | Users developed a “sense of north”; improved navigation/orientation tasks | Requires long training; limited information bandwidth (direction only) | Pointing error 41° → 23°, 163° → 84°; walk deviation 10° |
| [157] (2013) | III/Infrared sense | Intracortical microstimulation (S1 cortex, rat) | Rats learned to detect IR signals; new IR perception coexisted with normal touch | Invasive animal implant; simple stimulus representation (single-pixel IR) | IR prosthesis: ICMS 0–400 Hz; 93% correct; 1.3 s |
| [158] (2021) | III/Infrared sense | Retinal prosthesis input fusion (Argus II) | Improved night navigation and human detection for prosthetic vision users | Additional external hardware; low-resolution thermal overlay | Thermal camera: 60 electrodes; 11 × 19 FOV (∼22° diagonal); 200 μm diameter |
| [159] (2015) | III/Echolocation | Head-mounted ultrasonic sensor + stereo audio | Users learned to judge object distance/direction via sound after training | Training-dependent; limited spatial resolution and throughput vs. natural vision | 25–50 kHz bandwidth; 160° microphone field of view; echoes to 5 m; 75–86% correct |
| [160] (2025) | IV/BCI skill learning | EEG headset (scalp) | Improved motor-imagery BCI accuracy via co-adaptive neurofeedback training | Non-invasive signals limit resolution; gains can be task-specific | 62 electrodes; 512–1000 Hz sampling; 0.5–40 Hz bandwidth |
| [161] (2025) | IV/Cognitive therapy | Transcranial ultrasound (head) | Reported cognitive-score improvements and increased brain network activity vs. sham | Mechanism unclear; small cohort and transient effects | 0.20 mJ/mm² flux; 5 Hz frequency; 3 μs duration |
| [162] (2018) | IV/Memory enhancement | Cortical electrodes (temporal lobe) | Improved word recall with adaptive, timed stimulation in epilepsy patients | Invasive; variable benefit across individuals and tasks | 3–180 Hz bandwidth; 0.61 AUC; OR 1.18 (recall) |
| [163] (2017) | IV/Memory enhancement | Depth electrodes (hippocampus, primate) | Improved memory-task performance using closed-loop hippocampal pattern stimulation | Highly invasive; demonstrated only in animal models with external computing | 10–50 μA current; 1.0 ms pulses; ≤20 Hz frequency; 70–75% accuracy |
| [164] (2025) | IV/Memory enhancement | Depth electrodes (hippocampus, human) | Enhanced hippocampal network connectivity associated with memory-related function | Invasive; cognitive benefits not yet fully quantified | 30 kHz sampling; 5–10 mm spacing; 0.1–1 kHz bandwidth; 500 Hz rate |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
