1. Introduction
Driven by stringent emissions regulations and consumer demand, electric vehicles are developing rapidly. The battery has become the most prominent energy storage device, and the demand for high-performance battery management systems (BMSs) is therefore increasing. One of the main functions of the BMS is to ensure the safety of the battery and protect it from operating under conditions that are harmful to both the battery and the users [1]. As the core system for monitoring and protecting high-voltage traction batteries, the BMS is critical for ensuring safety, maximizing driving distance, prolonging battery life, and optimizing performance. High-precision battery parameter monitoring is the foundation of the BMS, as it requires the system to accurately monitor the voltage, current, and temperature of the battery in real time.
Delta–Sigma (DS) analog-to-digital converters (ADCs) are widely used in BMSs due to their high precision, high linearity, and interference immunity, which effectively improve the performance and reliability of the BMS. A high-precision, low-power ADC ensures real-time, accurate collection of battery data, allowing the BMS to extend battery life and improve system safety. Continuous-Time (CT) DS modulators (DSMs) offer inherent anti-aliasing but are sensitive to clock jitter and require precise RC time constants. Although they improve more slowly with CMOS scaling, Discrete-Time (DT) DSMs are robust to process variations thanks to their switched-capacitor (SC) implementation and are easier to design than CT DSMs [2,3,4]. The Cascade of Integrators with FeedForward (CIFF) topology [5] offers a distinct architectural advantage by inherently constraining the dynamic range required at the outputs of its integrators. The feedforward branches in the CIFF topology route the input signal around the integrators, delivering it directly to the summation point where quantization occurs. This leads to a substantial reduction in the integrators' output voltage swings. This characteristic not only enhances overall linearity by minimizing the demands on the integrators' operational amplifiers but also substantially relaxes their slew-rate and output-swing requirements. The relaxed specifications allow the use of more power-efficient amplifier designs, making the CIFF structure particularly advantageous for high-resolution, low-power applications.
Noise reduction is critical for BMS applications, as precise measurement of low-frequency signals is essential. The first op-amp is a major source of both offset voltage and 1/f noise, which are critical limitations to achieving high resolution in ADCs. Although large input devices are employed in the first op-amp to achieve low flicker noise and improved matching [6,7], they also introduce large parasitic capacitance at the op-amp's virtual ground. This capacitance, in turn, reduces the unity-gain bandwidth (UGB) and phase margin, degrading performance.
Chopping is a technique that employs a modulation–demodulation process to suppress low-frequency noise and DC offset errors in ADCs [8,9]. Typically, using an SC circuit, the input signal is periodically modulated from baseband to a higher frequency around the chopping frequency, fC. The modulated signal is then processed by the ADC. During this process, the low-frequency noise and offset remain in the baseband. During demodulation, the signal is switched back to baseband, while the noise and offset are shifted to higher frequencies, where they are removed by a low-pass filter. Shifting the noise away from the signal band enhances the signal-to-noise ratio (SNR), which is critical for high-resolution applications. Temperature-induced drifts, which typically occur at low frequencies, are also minimized without requiring software calibration.
Most architectures use a single chopping frequency to suppress the circuit's local noise. This paper presents a second-order CIFF DT-DS ADC using a hybrid chopping technique that combines system-level and inner chopping to improve DC precision and linearity. Two-stage op-amps are used in the second-order modulator. Inner chopping is applied at the input and output of the first gain stage within the first-stage op-amp. This arrangement effectively mitigates the flicker noise and offset of the critical first stage while avoiding the large charge-injection errors associated with chopping at the final output of the integrator. Furthermore, system-level chopping is employed by modulating the input signal and demodulating the digital output after the quantizer. This multi-level chopping approach can be flexibly extended to other key nodes in the signal chain, offering a scalable methodology to suppress residual offsets and non-idealities across the different stages of the ADC. The architecture of the second-order DT-DS ADC is presented in Section 2. Section 3 describes the circuit implementation details. Section 4 presents the measurement results. Finally, conclusions are given in Section 5.
2. Architecture
Figure 1 illustrates the overall system architecture of the proposed ADC. In practical applications, this ADC can be configured flexibly depending on the target measurement type. For voltage and temperature sensing, a multiplexer (MUX) module is placed before the ADC to sequentially select each channel. This ADC supports the measurement of individual differential cell voltages in a series configuration. Each cell voltage measurement is a differential measurement of the voltage between two adjacent cell input pins. This approach avoids duplicating the ADC for each sensor, thereby saving area and power while maintaining measurement integrity across all cells. For current measurement, the ADC can be directly connected without a MUX, allowing continuous and synchronous sampling of the current signal. This arrangement optimizes both system integration and resource efficiency, adapting to the distinct signal characteristics and sampling requirements of voltage, temperature, and current in BMS applications.
Figure 2 illustrates the block diagram of the implemented DT DSM. The modulator is based on a second-order CIFF architecture, chosen for its improved stability and reduced sensitivity to circuit non-idealities compared to higher-order single-loop structures. It operates at a sampling frequency of 256 kHz, enabling an oversampling ratio suitable for achieving high resolution within the target bandwidth. The coefficients of the loop filter were systematically optimized through iterative simulations in MATLAB/Simulink (v24.1.0.2537033). This process balanced multiple design constraints, including dynamic range, in-band noise shaping, and modulator stability, to meet the required effective resolution. The final coefficient set not only satisfies the target signal-to-noise ratio but also ensures robust operation under practical circuit imperfections such as capacitor mismatch and finite amplifier gain. The use of a CIFF topology also helps to relax the output swing requirements of the integrators, contributing to lower power consumption and better linearity. These are key advantages for precision measurement applications such as battery monitoring.
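The CIFF signal flow can be sketched with a short behavioral model. The coefficients below give the textbook second-order noise transfer function NTF = (1 - z^-1)^2, not the chip's optimized (unpublished) set; the point is only the structure: the quantizer input sums the direct input path with the scaled integrator outputs, and only the first integrator sees the feedback error.

```python
import numpy as np

def ciff2(u, a=(1.0, 1.0), ff=(2.0, 1.0)):
    """Behavioral second-order CIFF delta-sigma modulator with a
    single-bit quantizer. a: integrator gains; ff: feedforward gains."""
    x1 = x2 = 0.0
    v = np.empty(len(u))
    for n, un in enumerate(u):
        y = un + ff[0] * x1 + ff[1] * x2   # feedforward summing node
        v[n] = 1.0 if y >= 0.0 else -1.0   # single-bit quantizer
        x2 += a[1] * x1                    # second integrator (old x1)
        x1 += a[0] * (un - v[n])           # first integrator sees the error
    return v

# A DC input, as in cell-voltage sensing, is recovered in the bitstream mean.
bits = ciff2(np.full(65536, 0.3))
print(bits.mean())   # ≈ 0.3
```

Because the input feeds the summing node directly, the integrator states carry only quantization error, which is what keeps their output swings small in the CIFF topology.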
The modulator architecture depicted in Figure 3 is a single-loop, single-bit, second-order SC DS ADC. It employs a feedforward architecture based on two integrators and an SC summing network. The first integrator utilizes a two-stage operational amplifier. An inner set of choppers, together with switches SΦ1 and SΦ2, implements a cross-coupled sampling scheme that mitigates the effects of offset and 1/f noise of the first OTA. Furthermore, offset reduction is achieved through an outer set of choppers (system choppers), which, together with a digital chopper at the output of the modulator, implements a system-level chopping scheme. During phase Φ1, the input signal (Vin) is sampled on the 4.5 pF input capacitors, CS. During the subsequent phase Φ2, the switches SΦ1 and SΦ2 reverse the input connections and thus transfer a charge packet proportional to 2·CS·Vin to the integration capacitors, CINT. This cross-coupled sampling scheme [10] ensures that only the capacitors CS are exposed to the input Common-Mode (CM) voltage.
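Plugging in the reported values gives a quick back-of-the-envelope estimate of the charge packet (a sanity check on the sampling network, not a simulation):

```python
C_S = 4.5e-12     # input sampling capacitor (F), from the text
V_IN = 0.2        # full-scale input amplitude (V), from the ±200 mV range

# Cross-coupled sampling doubles the effective sampled charge:
Q = 2 * C_S * V_IN
print(Q)          # ≈ 1.8 pC transferred to C_INT per cycle at full scale
```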
The chopper switches S3 and S3B are directly connected to the integration capacitor CINT. This connection effectively switches the positive–negative (PN) terminals of the integrator between its input and output. As a low-pass network, the integrator suppresses the high-frequency noise components generated by the chopping technique. This technique modulates intrinsic low-frequency impairments, such as 1/f noise and DC offset, to these higher frequencies, thereby effectively removing them from the baseband signal.
To optimally balance noise suppression with power consumption and potential signal degradation, a multi-rate chopping clock scheme is employed. The chopping frequency for the internal op-amp modules, fop_chop, is fs/48, which is directly derived from the master sampling clock. The global system-level chopping clock fsys_chop operates at a lower frequency of fs/192.
The chopping frequency is usually set to an integer sub-multiple of half the sampling frequency. Reducing the chopping frequency also helps to increase the input impedance [11,12]. The internal op-amp chopper mitigates the critical 1/f noise and offset of the input transistors. Its frequency of fs/48 ensures that the first harmonic of the chopped noise is pushed to a high enough frequency that the NTF provides sufficient attenuation, preventing it from folding back into the baseband. This frequency is also chosen as a compromise to minimize the dynamic power dissipation associated with switching activity in the analog circuits. In contrast, the system-level chopping, which modulates the input signal and demodulates the digital output, operates at the slower rate of fs/192. The slower frequency reduces the risk of introducing switching-related effects, such as charge injection and clock feedthrough, at in-band frequencies. Furthermore, a slower clock simplifies the design and power requirements of the digital demodulator. The faster fop_chop aggressively suppresses noise at the critical front end, while the slower fsys_chop provides a power-efficient and robust solution for correcting residual offsets across the entire system with minimal introduction of nonlinearities. The chosen fsys_chop is a sub-harmonic of fop_chop, ensuring a coherent clocking structure that avoids complex frequency relationships.
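The sub-harmonic relationship can be checked with a small divider-chain model (a sketch; the on-chip divider implementation is not described here). Each chopping clock toggles when its master-clock counter wraps, and every system-chopping edge then coincides with an op-amp-chopping edge:

```python
fs = 256_000                     # master sampling clock (Hz)
n = 192 * 4                      # four system-chopping periods of 192 cycles

# Divided clocks: toggle every half-period of master-clock cycles.
op_clk  = [(k // 24) % 2 for k in range(n)]   # fs/48: toggles every 24 cycles
sys_clk = [(k // 96) % 2 for k in range(n)]   # fs/192: toggles every 96 cycles

op_edges  = {k for k in range(1, n) if op_clk[k]  != op_clk[k - 1]}
sys_edges = {k for k in range(1, n) if sys_clk[k] != sys_clk[k - 1]}

# Every system-chopping edge lines up with an op-amp chopping edge,
# so the two chopping operations remain phase-coherent.
print(sys_edges <= op_edges)     # True
```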
3. Circuit Implementation
The core first integrator, critical for the overall noise performance, is implemented with the two-stage op-amp shown in Figure 4. It achieves a 96 dB DC gain while drawing 18 μA. The chopping switches are strategically located at the input and the output of the first gain stage. This configuration effectively suppresses the flicker noise and offset of the critical input transistors, which dominate the overall low-frequency noise. It also avoids the nonlinear distortion that would arise from chopping the large-signal output of the second gain stage.
Figure 5 presents the simulated performance of the amplifier across varying process, voltage, and temperature (PVT) conditions. The evaluation covers key process corners (SS, TT, FF, SF, FS), a supply voltage fluctuation of ±10%, and an operating temperature span from −25 °C to 125 °C. The data confirm that the amplifier exhibits consistent and reliable operation under all examined PVT variations.
The effectiveness of the chopping technique is comprehensively demonstrated in Figure 6 and Figure 7. As shown in Figure 6, the choppers significantly reduce the output noise power density of the op-amp. Figure 7 presents the Monte Carlo simulation results (500 runs), comparing the input-referred offset voltage of the amplifier with and without the chopping technique enabled. The distribution clearly demonstrates that the chopping mechanism effectively suppresses the offset, as evidenced by a significantly tighter and reduced offset spread when chopping is active. This confirms the technique's efficacy in mitigating the device mismatches and low-frequency noise that contribute to the DC offset. The second integrator is scaled down for improved power efficiency, achieving a DC gain of 98 dB while drawing only 10 μA.
The architecture of the digital chopper is presented in the block diagram of Figure 8. The demodulation clock is derived synchronously from the main sampling clock (fs) through division. The synchronous digital control ensures precise phase alignment between the modulation and demodulation processes, maximizing the rejection of undesired interference. The output of the comparator (COM_OUT) is fed into one input of an XOR gate, which functions as a digital multiplier. This operation synchronously demodulates the signal, shifting the baseband signal back to its original frequency spectrum.
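The XOR-as-multiplier equivalence is easy to verify: mapping bits {0, 1} to levels {+1, -1}, XOR with the chopping clock is exactly multiplication by a ±1 square wave. A minimal sketch with a hypothetical 8-bit stream:

```python
def demodulate(com_out, chop_clk):
    """XOR demodulation of the comparator bitstream by the chopping clock."""
    return [b ^ c for b, c in zip(com_out, chop_clk)]

com_out  = [0, 1, 1, 0, 1, 0, 0, 1]   # hypothetical modulated bitstream
chop_clk = [0, 0, 0, 0, 1, 1, 1, 1]   # chopping clock (one period)
print(demodulate(com_out, chop_clk))  # [0, 1, 1, 0, 0, 1, 1, 0]

# Equivalence: 1 - 2*(b ^ c) == (1 - 2*b) * (1 - 2*c) for every bit pair,
# i.e., XOR in the bit domain is multiplication in the ±1 level domain.
```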
4. Measurement Results
Figure 9 shows the chip micrograph of the proposed ADC, which features a multi-rate chopping clock scheme. The chip was fabricated in a 180 nm BCD process. It has a core area of 0.494 mm² and draws 35 μA from a 1.8 V supply. The input signal range is ±200 mV, the sampling frequency is 256 kHz, and the CINT of the first integrator is 4.8 pF.
The prototype ADC was assembled onto a custom-designed printed circuit board (PCB) for performance evaluation.
Figure 10 presents the measured output power spectral density (PSD) of the proposed ADC, obtained with an input signal of 58.6 Hz at −0.92 dBFS. To ensure high spectral fidelity, a data record of 2^16 (65,536) points was acquired and processed using a Hann window, which effectively reduces spectral leakage and improves frequency-domain accuracy.
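The Hann-windowed PSD processing can be sketched as follows. A synthetic tone at the measurement's test frequency and amplitude stands in for the captured record (the real record is the modulator bitstream, which is not reproduced here); the windowed FFT places the tone in the expected bin:

```python
import numpy as np

fs, N = 256_000, 2**16
t = np.arange(N) / fs
# Stand-in for the captured record: 58.6 Hz at -0.92 dBFS
x = 10 ** (-0.92 / 20) * np.sin(2 * np.pi * 58.6 * t)

win = np.hanning(N)
X = np.fft.rfft(x * win)
psd = np.abs(X) ** 2 / (fs * np.sum(win ** 2))  # one-sided PSD (V^2/Hz)
psd[1:-1] *= 2                                  # fold negative frequencies

f = np.fft.rfftfreq(N, d=1 / fs)
print(f[np.argmax(psd)])   # FFT bin nearest 58.6 Hz (bin width fs/N ≈ 3.9 Hz)
```

The PSD is normalized by the window power so that the noise-floor level is independent of the window choice.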
The measured performance of the ADC is summarized in Figure 11, which plots the SNR and signal-to-noise-and-distortion ratio (SNDR) against input amplitude. The peak SNR and SNDR are 91.2 dB and 90.6 dB, respectively. Figure 12 depicts a gradual decline in both SNR and SNDR with increasing input frequency. A comprehensive performance comparison with state-of-the-art works is provided in Table 1. The proposed ADC achieves a good figure-of-merit (FoM), highlighting its efficiency in processing low-frequency signals. This capability is a key requirement for BMS applications. Furthermore, the design exhibits effective noise suppression, as evidenced by the close matching of SNR and SNDR near full scale. Robustness under practical operating conditions is demonstrated in Figure 13. Across a temperature range of −40 °C to +125 °C and a supply voltage variation of ±10%, the SNDR variation remains within 0.11 dB. This result validates the effectiveness of the implemented PVT-stabilization techniques, ensuring reliable performance in automotive environments.
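As a sanity check on the reported numbers, a Schreier-type FoM can be computed directly (assuming the common convention FoM_S = SNDR + 10·log10(BW/P); Table 1 may use a different definition):

```python
import math

sndr_peak = 90.6     # dB, measured peak SNDR
bw = 600.0           # Hz, signal bandwidth
power = 63e-6        # W, total dissipation (1.8 V supply, 35 uA)

fom_s = sndr_peak + 10 * math.log10(bw / power)   # Schreier FoM (dB)
enob = (sndr_peak - 1.76) / 6.02                  # effective number of bits
print(round(fom_s, 1), round(enob, 1))            # ≈ 160.4 dB, ≈ 14.8 bits
```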
5. Conclusions
This article describes the design and implementation of a DT-DS ADC that achieves low noise, high linearity, and high resolution using a hybrid chopping technique. In the target application of this research, the BMS primarily deals with DC signals, whose bandwidth is inherently low. SAR ADCs are indeed the dominant architecture in commercial BMS chips, primarily due to their multiplexing capability across many channels. However, their resolution is often limited, and achieving higher resolution with SAR architectures usually necessitates higher sampling rates and more complex calibration, which increases power consumption. In contrast, the CIFF architecture used in this work operates with reduced integrator output swings and provides a greater stability margin. As a result, this architecture is better suited to the BMS applications addressed in this paper.
This work demonstrates a dual-frequency chopping strategy implemented across distinct nodes in the signal path. The inner chopping effectively suppresses the noise of the first, most critical op-amp, while the system-level chopping further reduces the overall noise and residual channel offset. Fabricated in a 180 nm BCD process, the prototype ADC achieves a peak SNDR of 90.6 dB and a peak SNR of 91.2 dB within a 600 Hz bandwidth. The total power dissipation is 63 µW from a 1.8 V supply including the ADC modulator, the reference module, and the clock network. By mitigating low-frequency noise and distortion, the proposed architecture provides an efficient solution for high-performance DT-DS ADC designs. It is particularly suited for processing the slow-varying, low-frequency signals in BMS applications, such as cell voltage and temperature monitoring.