1. Introduction
Safety and brand recognition are two of the leading factors driving greater adoption of LEDs in automotive front-lighting applications. The market is growing year by year, and thanks to LEDs’ performance, reliability and versatility, they have become fundamental parts of distinctive brand identities [1]. Cost reduction and road safety are stringent requirements to be fulfilled by car makers (Original Equipment Manufacturers, OEMs) [2]. For this purpose, OEMs are approaching the front-light headlamp with different designs.
However, LED lighting designers are constantly challenged by increasing power dissipation, thermal constraints and design complexity. Currently, front-lighting systems with direction and intensity management for high/low beams (HB/LB) and adaptive headlamp systems are provided in premium vehicles, but, with significant cost reduction, these systems will also be integrated into mainstream vehicles in the coming years [3].
Since front-light applications require at least 40 W [3,4], a DC-DC converter is the standard market solution to fulfill the overall system efficiency requirements. Figure 1a shows a glare-free front headlamp application realized with a combination of a power stage and a matrix manager controlled by a dedicated microcontroller. The matrix manager device is a series of bypass switches with an adjustable slew rate [5] used to control the average current flowing in each LED of the string. Every switch incorporates a dedicated PWM engine that adjusts the average current from zero to the maximum current available from the DC-DC power stage, which relies on a current control loop to regulate the current flowing through the LED string [6]. The intensity and the shape of the light beam are then managed by the commands sent by the microcontroller to the matrix manager device. In typical use cases, the system is connected to a 12 V battery, and the output voltage required to supply the LED string can vary from 2.5 V to more than 70 V [1,3], depending on the number of LEDs the matrix manager is shorting. Given this input/output voltage scenario, a buck–boost topology is necessary.
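As a quick illustration of why a buck–boost stage is needed, the short sketch below compares the required string voltage with the automotive battery range. The per-LED forward voltage and the battery extremes are the values quoted in this paper; the LED counts are merely examples.

```python
# Minimal sketch (illustrative LED counts): required converter output voltage
# versus the automotive battery range, showing why a buck-boost stage is needed.
V_FWD = 3.0                          # nominal forward voltage per white LED [V]
V_BATT_MIN, V_BATT_MAX = 4.0, 26.0   # cold-crank / load-dump battery extremes [V]

for n_active in (1, 3, 8, 12, 24):
    v_out = n_active * V_FWD         # string voltage with n_active LEDs not bypassed
    if v_out > V_BATT_MAX:
        mode = "boost only"
    elif v_out < V_BATT_MIN:
        mode = "buck only"
    else:
        mode = "buck or boost, depending on the instantaneous battery voltage"
    print(f"{n_active:2d} LEDs -> V_OUT = {v_out:4.1f} V : {mode}")
# Because V_OUT sweeps across the whole battery range as LEDs are bypassed,
# a single converter must be able to both step up and step down.
```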
Most LED drivers for HB (with three to five white LEDs) and LB (with five to eight LEDs) use a buck–boost DC-DC topology, which ensures that the desired output current is generated even during battery fluctuations (such as below 4 V during cold cranking or above 26 V during load dump) [4].
While these converters are effective in regulating power with static loads (high and/or low beams), they may struggle to maintain a constant current when the load changes abruptly. In a DC-DC system, the matrix manager can dynamically short LEDs in the string. This operation instantaneously reduces the forward voltage of the entire load, causing the output capacitor (connected in parallel to the load) to discharge. The result is a current spike through the LED string that can damage the remaining LEDs [4].
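A first-order sketch of this mechanism is given below. It assumes an instantaneous bypass and a constant per-LED dynamic resistance (a hypothetical value, not from the paper), so it deliberately over-estimates the spike seen with a slew-rate-limited bypass switch.

```python
# Worst-case, first-order estimate of the current spike when LEDs are shorted
# without any discharge path. Assumes an instantaneous bypass, a constant
# per-LED dynamic resistance r_d, and neglects converter, wiring and switch
# resistances; r_d is a hypothetical value, not taken from the paper.
V_FWD = 3.0    # nominal LED forward voltage [V]
R_D   = 0.5    # assumed per-LED dynamic resistance [ohm]
I_NOM = 1.0    # regulated string current [A]

def spike_estimate(n_total: int, n_remaining: int) -> float:
    """Peak LED current right after (n_total - n_remaining) LEDs are shorted."""
    v_cap  = n_total * V_FWD        # output capacitor still holds the old string voltage
    v_load = n_remaining * V_FWD    # nominal drop of the remaining LEDs
    return I_NOM + (v_cap - v_load) / (n_remaining * R_D)

print(f"12 -> 6 LEDs: up to ~{spike_estimate(12, 6):.1f} A")
print(f"12 -> 3 LEDs: up to ~{spike_estimate(12, 3):.1f} A")
```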
To overcome the issues mentioned above, various solutions have been adopted in the market, depending on the specific application. For matrix lighting, the dual-stage boost + buck architecture is generally preferred [7,8,9]. In this solution, a high, fixed-voltage boost stage supplies several higher-bandwidth buck stages. The buck stages have small output capacitors to minimize current overshoots on the LED string during load variations. However, as shown in Figure 2, this architecture involves two DC-DC converters and two DC-DC controllers, which results in higher system cost and lower overall efficiency.
For standard light functions such as HB and LB (Figure 1b), a single buck–boost DC-DC converter is used to supply both LED strings. To save the cost of an additional DC-DC stage, a bypass between the two light functions can be implemented with a switch in parallel to the portion of the LED string to be shorted. Current overshoots can be avoided by driving the turn-on phase of the switch as slowly as needed to match the limited bandwidth of the buck–boost regulator [10]. However, driving the function bypass switch slowly puts severe constraints on the duty cycle of the single light function (duty-cycle variation is used to adjust the LED light flux without introducing a color shift in the generated light).
A solution that fits both applications and is based on a single buck–boost regulator was adopted in [11]. That system overcomes the duty-cycle limitation of the single DC-DC stage while achieving high efficiency at low cost. However, microcontroller supervision is needed to program the voltage steps in advance and to perform real-time calculations for timing optimization.
In this work, a new control system for powering dynamic adaptive lighting applications, implemented in a single device, is presented.
2. System: Overview and Implementation
To enhance system efficiency, a single buck–boost regulator has been evaluated as an alternative to the dual-stage boost + buck architecture. Figure 3 illustrates the proposed system [12], designed to address the previously identified limitations. The system consists of (1) a dedicated device able to perform both the functions of a classical DC-DC controller and the activation/deactivation of the discharging path, (2) a SEPIC power converter to achieve buck–boost regulation, (3) an NMOS switch with series limiting resistances to create a parallel path to ground and (4) the LED load. The power converter manages the current supplied to the LEDs, while the parallel NMOS switch dissipates the current overshoots generated during load variations. During normal operation, this parallel adaptive output discharge (AOD) path remains deactivated.
Figure 4a,b shows the system behavior without and with the discharging path. When no AOD path is present and one or more LEDs are dynamically bypassed/shorted, the sudden decrease in the series resistance and in the forward voltage of the remaining LEDs in the string causes a steep increase in the output current (Figure 4a), driven by the discharge current coming from the output capacitance. If the current overshoot exceeds the maximum rating, the LEDs can suffer permanent damage or a reduced lifetime. With the AOD feature, during the dynamic load transition, if the output current exceeds a predefined rising threshold—set at 120% of the target regulated current in the principle sketch of Figure 4b—the additional NMOS is activated by the controller, creating a parallel path to ground that dissipates the excess charge stored in the power converter output capacitor. In this way, the current flowing through the LEDs is kept limited. The discharging path remains active until the output current drops below a predefined falling threshold—set at 110% of the target regulated current in the figure—at which point it is deactivated. The output voltage is dynamically adjusted to properly supply the LED string. This operation can occur once or several times for each load bypass until the required voltage is reached. The number of activations depends on the bypass switch speed and on the maximum discharge current allowed through the dumper NMOS, which determines the output capacitor discharge rate. To ensure system robustness and maintain LED reliability over time, i.e., to effectively limit the current spike, the discharge rate must, as a minimum requirement, exceed the speed of the bypass switches.
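To make this requirement concrete, the sketch below compares the time a current-limited AOD path needs to remove the excess charge from the output capacitor with the bypass-switch transition time. COUT and the bypass time are illustrative assumptions, not values from the paper; the 15.6 A limit is the peak discharge current used later in the text.

```python
# Rough sketch (hypothetical component values) comparing the AOD discharge
# rate with the bypass-switch transition time, as required above.
C_OUT    = 10e-6    # output capacitance [F] (assumed)
I_AOD    = 15.6     # current-limited AOD discharge current [A] (peak used later in the text)
T_BYPASS = 20e-6    # slew-rate-limited bypass-switch closure time [s] (assumed)
V_FWD    = 3.0      # nominal LED forward voltage [V]

def discharge_time(n_before: int, n_after: int) -> float:
    """Time the AOD path needs to remove the excess charge from C_OUT."""
    delta_v = (n_before - n_after) * V_FWD   # output-voltage step to absorb
    return C_OUT * delta_v / I_AOD           # dQ = C * dV, removed at ~I_AOD

for n_after in (6, 3):
    t_dis = discharge_time(12, n_after)
    verdict = "faster" if t_dis < T_BYPASS else "slower"
    print(f"12 -> {n_after} LEDs: discharge in ~{t_dis*1e6:.1f} us, "
          f"{verdict} than the {T_BYPASS*1e6:.0f} us bypass transition")
```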
The proposed solution was implemented with a test chip fabricated in a 130 nm BCD technology by Infineon Technologies. A die micrograph is shown in Figure 5, where the main blocks have been highlighted. The key elements of the discharging path are as follows:
The NMOS dumping element in parallel to the output capacitance (COUT);
The current sense resistor (RFB) used to sense the output current;
The NMOS current-limiting resistors RAOD1 and RAOD2.
In the system (Figure 3), the LED bypassing activity is detected by sensing the output current through the resistor RFB (also used for the regulation loop). The resistor is connected to the two test chip feedback pins FBH and FBL. The voltage signal across RFB is proportional to the flowing current (ILED). This signal is amplified by the internal current sense amplifier (CSA), whose output is connected to
The error amplifier, to close the control loop (blue parts in Figure 3);
A comparator, to manage the overcurrent events (orange parts in Figure 3).
Regarding the overcurrent detection feature, the output voltage of the CSA is compared with the internal reference VAOD. When the CSA output exceeds VAOD, the comparator output toggles HIGH. This transition triggers the NMOS to turn on, enabling the dumping path. Once the dumping element has been activated, the new path discharges COUT, preventing the extra current from flowing through the LEDs. To ensure the high reliability of the NMOS, the current in the dumping path is limited by RAOD1,2; at the same time, the discharging path must be fast enough to discharge COUT within the bypass switch closure time. Furthermore, the reliability and thermal impedance of the MOSFET are important aspects to be taken into consideration, and they are analyzed in depth in the following sections.
The blue-highlighted subcircuits are dedicated to the generation of the output current, although the current sense circuit can be considered part of both functionalities. The LED current (ILED) is detected by the CSA and compared with the voltage reference (VREF) of the current regulation loop. The error amplifier (EA) then produces a signal proportional to the difference between VREF and the output of the current sense circuit. The control loop minimizes the difference at the error amplifier inputs by adjusting the switching activity at the switching output (SWO) via the pulse width modulation (PWM) engine and the gate driver. The regulation ensures that the target current is delivered to the load through the SEPIC converter. The output of the error amplifier is connected to the compensation pin (COMP), which is connected to an RC network to ensure proper system stability. When a switch in the matrix manager is closed, the DC-DC controller detects a sudden increase in the output current. In response, the error amplifier discharges the COMP node to compensate for the change [13]. Additionally, the device incorporates a peak current control mode to enhance stability performance [14].
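For intuition, a minimal averaged (behavioral) model of this regulation loop is sketched below. All numerical values (sense resistance, CSA gain, error-amplifier transconductance, compensation capacitance, power-stage time constant) are illustrative assumptions rather than parameters of the test chip, and the series resistor of the COMP network (the loop zero) is omitted for brevity.

```python
# Minimal averaged behavioral sketch of the current-regulation loop described
# above (CSA -> error amplifier -> COMP -> power stage). All values are
# illustrative assumptions, not taken from the test chip or the paper.
R_FB    = 0.1     # current-sense resistor [ohm] (assumed)
G_CSA   = 10.0    # current-sense amplifier gain [V/V] (assumed)
V_REF   = 1.0     # regulation reference [V] -> target ILED = V_REF/(G_CSA*R_FB) = 1 A
GM_EA   = 1e-4    # error-amplifier transconductance [A/V] (assumed)
C_COMP  = 10e-9   # compensation capacitance on the COMP pin [F] (assumed)
K_PWM   = 2.0     # COMP voltage to average output current gain [A/V] (assumed)
TAU_PWR = 50e-6   # first-order power-stage current response [s] (assumed)

dt, t_end = 1e-6, 5e-3
i_led, v_comp = 0.0, 0.0
for _ in range(int(t_end / dt)):
    v_csa = G_CSA * R_FB * i_led                        # CSA output, proportional to ILED
    v_comp += GM_EA * (V_REF - v_csa) / C_COMP * dt     # EA charges/discharges COMP
    i_led += (K_PWM * v_comp - i_led) / TAU_PWR * dt    # averaged power-stage response
print(f"settled LED current ~ {i_led:.2f} A (target {V_REF / (G_CSA * R_FB):.2f} A)")
```

The integrating behavior of the COMP node drives the sensed current towards VREF, and its limited bandwidth during fast load steps is exactly what motivates the additional AOD path.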
The internal VAOD signal is used to define the current threshold that triggers the activation of the dumping element. This signal is derived as a function of the internal reference used for generating the output LED current (VREF). This approach reduces the process-variation spread of the VAOD threshold with respect to VREF. To ensure efficient capacitor discharge and to prevent continuous toggling of the comparator output, hysteresis has been added to the VAOD signal. When the load current reaches 120% of the target current (VAOD = 1.2 × VREF), the NMOS is turned on by the device. Once the NMOS is activated, the discharge of the output capacitance begins, and the internal VAOD is adjusted to lower the deactivation threshold to 110% of the target LED current (VAOD = 1.1 × VREF).
In Figure 6, the AOD activation and deactivation thresholds are represented as functions of VREF. The VREF signal sets the regulated voltage VFBH–VFBL. The voltage drop across RFB (Figure 3) is proportional to the flowing current. An increase (decrease) in VREF corresponds to an increase (decrease) in VFBH–VFBL and consequently to an increase (decrease) in the output current, acting as analog dimming.
The adaptive output discharge activation threshold is modulated by VREF down to a minimum reference voltage VAOD_ON(MIN) in order to reduce noise effects at low currents, while VAOD_HYS, the difference between the activation and deactivation thresholds, corresponds to 10% of the reference voltage.
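The threshold generation and the hysteretic decision can be summarized with the short sketch below. The 120%/110% ratios and the VAOD_ON(MIN) clamp follow the description above; the sense resistance, CSA gain and the numerical value of VAOD_ON(MIN) are illustrative assumptions.

```python
# Sketch of the AOD threshold generation and hysteretic comparison described
# above. The 120%/110% ratios and the VAOD_ON(MIN) clamp follow the text;
# R_FB, the CSA gain and VAOD_ON_MIN are illustrative assumptions.
R_FB        = 0.1     # sense resistor [ohm] (assumed)
G_CSA       = 10.0    # CSA gain [V/V] (assumed)
VAOD_ON_MIN = 0.3     # minimum activation reference [V] (assumed)

def aod_thresholds(v_ref: float) -> tuple[float, float]:
    """Return (activation, deactivation) references derived from VREF."""
    v_on = max(1.2 * v_ref, VAOD_ON_MIN)   # 120% of the target, clamped at low currents
    v_off = v_on - 0.1 * v_ref             # VAOD_HYS = 10% of the reference
    return v_on, v_off

def aod_state(i_led: float, v_ref: float, active: bool) -> bool:
    """Hysteretic decision: keep the dumping NMOS on until ILED falls below the lower threshold."""
    v_csa = G_CSA * R_FB * i_led
    v_on, v_off = aod_thresholds(v_ref)
    return v_csa > v_off if active else v_csa > v_on

# Example: 1 A target (VREF = 1 V with the assumed gains)
state = False
for i_led in (1.0, 1.15, 1.25, 1.18, 1.12, 1.05):
    state = aod_state(i_led, 1.0, state)
    print(f"ILED = {i_led:.2f} A -> AOD {'ON' if state else 'OFF'}")
```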
3. Experimental Results
The system was tested on a real test bench designed to emulate a glare-free high-beam scenario, where the LED string was controlled by a matrix manager (Figure 7a). The matrix manager includes an adjustable slew rate, which can be configured through specific commands via the Serial Peripheral Interface (SPI) [5]. Typically, the slowest slew-rate configuration is chosen to minimize electromagnetic emissions.
In typical use cases, the DC-DC converter and the light source, including the matrix manager, are not on the same PCB [9]. Figure 7b shows the measurement test bench used in this work. The system is composed of the following:
The LED load board;
The DC-DC with an AOD feature;
A board emulating the car body control unit, including the microcontroller.
In Figure 7c, the schematic with the main parts is represented. The DC-DC converter used in this work is a classical SEPIC topology, because the focus of this work is not the study of an innovative DC-DC converter but rather the use of the additional AOD feature in common application use cases; for this reason, a widely used converter has been considered. A SEPIC (Single-Ended Primary Inductor Converter) is a power converter capable of stepping up (boosting) or stepping down (bucking) an input voltage to produce a regulated output voltage [15]. The SEPIC topology is widely used in applications where the input voltage can vary widely, such as battery-powered systems, automotive electronics, and renewable energy systems like solar energy harvesting [15,16,17,18]. It is especially useful in scenarios where the output voltage needs to remain constant regardless of whether the input voltage is higher or lower than the desired output. The key components are as follows:
Input inductor (L1A): handles the input current and stores energy during each switching cycle.
Output inductor (L1B): works together with the input inductor to manage current and energy transfer.
Coupling capacitor (CSEPIC): transfers energy between the input side and the output side while also isolating the two sides electrically.
Switch (M1): provides the switching action to convert DC to a pulsed waveform.
Diode: allows current to flow to the output during the switch-off period, preventing reverse current flow.
Output capacitor (COUT): smooths the output voltage to provide a stable DC output.
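For reference, the ideal steady-state duty-cycle relation of a SEPIC, D = VOUT/(VIN + VOUT) in continuous conduction with losses and the diode drop neglected, can be evaluated over the voltage range quoted in this work; this is a textbook relation, not a design equation taken from the paper.

```python
# Sketch of the ideal SEPIC steady-state duty cycle, D = VOUT/(VIN + VOUT)
# (continuous conduction, lossless, diode drop neglected). The voltages are
# taken from the operating range quoted in the text.
def sepic_duty(v_in: float, v_out: float) -> float:
    """Ideal CCM duty cycle of a SEPIC converter."""
    return v_out / (v_in + v_out)

V_OUT = 36.0                      # ~12 white LEDs at ~3 V each
for v_in in (4.0, 12.0, 26.0):    # cold crank, nominal battery, load dump
    d = sepic_duty(v_in, V_OUT)
    mode = "step-up" if V_OUT > v_in else "step-down"
    print(f"VIN = {v_in:4.1f} V -> D ~ {d:.2f} ({mode})")
```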
Figure 7c also shows the switching-node current sense resistor RSWCS, used to detect the peak current through the power switch M1. The behavior of this function is outside the scope of this work.
In Table 1, the values of the components used in the experimental setup described in Figure 7 are reported.
In this setup, the DC-DC converter regulates the LED current to 1.0 A, with each LED having a nominal forward voltage of 3.0 V at a junction temperature of 25 °C. This forward voltage decreases as the junction temperature rises [19]. During testing, the LED string operates at approximately 36 V. A 200 Hz PWM pattern is applied to the matrix manager, consisting of two scenarios: (1) continuous bypass and reconnection of nine LEDs in addition to three LEDs that remain always ON, and (2) continuous bypass and reconnection of six LEDs alongside six LEDs that are always ON. When LEDs are bypassed, a lower output voltage must be provided by the DC-DC converter. During this transition, however, a current spike in the LED string (ILED) is generated as the output capacitor discharges. If not properly managed, this current spike can damage the LEDs.
To evaluate the performance of the proposed solution, experimental measurements have been performed both with and without the adaptive output discharge circuit enabled. If the adaptive output discharge circuit is disabled, the energy stored in the output capacitor and the primary inductor is discharged entirely into the LED string, even though the matrix manager employs a controlled slew rate for switching. This uncontrolled discharge highlights the risks to LED reliability in the absence of the adaptive discharge mechanism.
In this scenario, a peak current of 5 A is reached when a transition from 12 to 3 LEDs is performed, as shown in the yellow trace of Figure 8a. The figure also depicts the output voltage step (magenta curve) and the variation of the COMP voltage (compensation node) during the transition (blue curve). With all the LEDs active, the output voltage is given by the sum of the 12 LEDs’ forward voltages (VFWD ~3 V) and is approximately 35–36 V. The exact voltage depends on the diode temperature, since VFWD has a strong temperature dependence [19]. After the switching, only a portion of the LEDs remains active, and the output voltage decreases according to the number of remaining LEDs. In Figure 8b, the same waveforms as in Figure 8a are presented, with the addition of the AOD (adaptive output discharge) signal shown in green. With the AOD feature enabled, during the transition from the higher to the lower output voltage, the test chip effectively controls the output current, activating the discharging path every time ILED rises above the activation threshold and deactivating it when the current falls below the deactivation threshold. In the specific case reported in Figure 8b, the AOD NMOS is activated 12 times to fully discharge the output capacitor, with the discharge current limited by the RAOD resistors. This current limitation ensures that the LED string is not overstressed over its lifetime. The system has undergone extensive testing across various use cases to validate its consistent performance and behavior.
Figure 8c,d illustrate measurements for a transition from 12 LEDs to 6 LEDs.
In Table 2, the performance of the proposed system is summarized with respect to state-of-the-art solutions. Four main aspects have been considered: (1) efficiency, referring to the system power-conversion capability; (2) bandwidth, i.e., the system speed; (3) complexity, which takes into account the effort to properly set and size all the system parts (including software); and (4) cost. As can be seen, the most relevant advantage of the proposed solution is its good performance in all these aspects combined with a low system cost.
The complexity of the AOD solution is mainly due to the sizing of the discharging path. For this reason, reliability aspects must be considered for the discharging NMOS and limiting resistors. This aspect is extensively analyzed in the following section.
4. AOD Discharging Path
The dimensioning of the AOD discharging path is a key aspect to manage in order to obtain an efficient and reliable power dissipation during the current spikes. A critical aspect of this process is ensuring that the chosen NMOS operates within its Safe Operating Area (SOA) to avoid thermal runaway or device failure. The SOA chart serves as a guide to define the safe operating conditions for a MOSFET, thus preventing possible damage. For example, Figure 9a illustrates the SOA for a typical MOSFET such as the IAUC26N10S5L245 [20]. The chart is divided into several distinct regions, each representing a specific operational limit: (1) RDS(ON) limitation: this region represents the current limit imposed by the MOSFET’s drain–source on-resistance (RDS(ON)), as per Ohm’s law, with the relationship given by ID = VDS/RDS(ON); (2) maximum current limitation: this limit defines the absolute maximum drain current (ID) the MOSFET can handle, beyond which damage may occur; (3) maximum power limitation: this boundary marks the highest power the MOSFET can dissipate while maintaining junction temperatures within safe limits; (4) thermal instability limitation: representing the onset of thermal instability, these lines indicate regions where localized overheating could lead to device failure, also known as secondary breakdown; and (5) BVDSS breakdown limitation: this limit specifies the maximum voltage the MOSFET can withstand without entering breakdown, ensuring safe operation below this threshold.
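These regions can be combined into a single-pulse current limit as a function of VDS, as in the sketch below; the numerical limits are illustrative placeholders rather than datasheet values of the IAUC26N10S5L245, and the thermal-instability region is omitted for simplicity.

```python
# Sketch constructing a single-pulse SOA current limit from the regions listed
# above: I_D(V_DS) = min(V_DS/RDS(on), I_D,max, P_max/V_DS) up to BVDSS.
# The numeric limits are hypothetical placeholders, not datasheet values.
RDS_ON = 2.5e-3    # on-resistance [ohm] (hypothetical)
ID_MAX = 100.0     # absolute maximum drain current [A] (hypothetical)
P_MAX  = 550.0     # single-pulse power limit [W] (matches the 550 W point used in the text)
BVDSS  = 100.0     # breakdown voltage [V] (hypothetical)

def soa_current_limit(v_ds: float) -> float:
    """Single-pulse drain-current limit at a given V_DS, 0 A beyond breakdown."""
    if v_ds <= 0 or v_ds > BVDSS:
        return 0.0
    return min(v_ds / RDS_ON,   # RDS(on) line
               ID_MAX,          # maximum-current plateau
               P_MAX / v_ds)    # constant-power boundary

for v in (0.1, 1.0, 5.0, 11.0, 50.0):
    print(f"V_DS = {v:5.1f} V -> I_D limit ~ {soa_current_limit(v):7.1f} A")
```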
The standard SOA characteristics are typically provided for an ambient temperature of 25 °C and under single-pulse applications. However, these conditions represent ideal scenarios that are not always practical in applications with higher ambient temperatures or pulsed power applied. In these cases, the SOA needs to be derated to ensure the MOSFET’s reliability and lifespan, thus compensating for the reduced power dissipation capability and the increased transient thermal resistance arising when operating conditions are not ideal. To properly assess derating, two additional graphs are needed: (1) the power dissipation chart, which illustrates the device heat dissipation capacity under steady-state conditions across various ambient temperatures (Figure 9b), and (2) the transient thermal resistance chart, which represents the thermal impedance of the package during dynamic conditions, such as varying pulse durations and repetition rates (Figure 9c). The transient thermal resistance chart highlights how pulse duration and repetition rate influence the heat dissipation performance. For instance, in an ADB system, the discharge phase of the output capacitance, consisting of a few (3–4) pulses each lasting 3–4 µs with a 10 ms repetition rate, can be approximated as a single 20 µs pulse. In this scenario, the transient thermal impedance (ZthJC) may degrade by a factor of 1.6 (increasing from 0.15 K/W to 0.25 K/W) when compared to the 10 µs single-pulse curve indicated on the SOA chart.
The power dissipation chart (Figure 9b) illustrates the impact of the ambient temperature on the dissipation capability of the considered MOSFET. In automotive front-lighting systems, temperature can reach up to 105 °C and, in these conditions, the power dissipation capability is reduced by half, dropping from 40 W at 25 °C to 20 W at 105 °C. When combining the effects of higher ambient temperature with longer pulse duration, the effective power dissipation capability is further reduced by a derating factor (DF) of 3.2—a factor of 2 due to limited power dissipation and a factor of 1.6 due to changes in transient thermal impedance (ZthJC).
To create an updated SOA chart, a reference point from the original power dissipation curve must be taken. For example, at a junction temperature of 25 °C, the device can handle a single-pulse current of 50 A with a drain-to-source voltage drop of 11 V, resulting in a peak power of 550 W. When factoring in longer pulse durations and the higher ambient temperature, this peak power is recalculated as follows:

Ppeak,derated = Ppeak/DF = 550 W/3.2 ≈ 172 W

This new power limit at VDS = 11 V is achieved with a current equal to

ID = Ppeak,derated/VDS = 172 W/11 V ≈ 15.6 A

The updated current limit, adjusted to reflect the new peak power constraint, is derived by shifting the original limit downward to cross the new boundary at the recalculated values. The modified SOA region is highlighted in orange in Figure 9d.
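The derating arithmetic above can be reproduced directly from the figures quoted in the text (a 2× steady-state power derating and a 1.6× transient-thermal-impedance derating applied to the 550 W, 11 V reference point):

```python
# Numerical sketch of the SOA derating described above, using the factors
# quoted in the text: a 2x steady-state power derating (40 W -> 20 W between
# 25 C and 105 C) and a 1.6x transient-thermal-impedance derating
# (ZthJC ~0.15 K/W -> ~0.25 K/W for the equivalent 20 us pulse).
DF_TEMP = 2.0          # ambient-temperature derating factor
DF_ZTH  = 1.6          # pulse-duration (ZthJC) derating factor
P_PEAK  = 550.0        # original single-pulse reference point [W] (50 A at 11 V)
V_DS    = 11.0         # drain-source voltage at the reference point [V]

df_total  = DF_TEMP * DF_ZTH            # combined derating factor (3.2)
p_derated = P_PEAK / df_total           # ~172 W
i_derated = p_derated / V_DS            # ~15.6 A, the peak AOD current used below
print(f"DF = {df_total:.1f}, derated power ~ {p_derated:.0f} W, "
      f"current limit at {V_DS:.0f} V ~ {i_derated:.1f} A")
```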
Considering the specific case depicted in Figure 7c, the limiting resistance RAOD2 plays a crucial role in establishing an upper limit for the current flowing through the MOSFET (M2) during the discharge process. At the same time, RAOD1 is sized to reduce the thermal stress on M2, sharing the power dissipation between the resistance and the MOSFET. The peak of the discharging current in the AOD path is defined by the MOSFET thermal capability and the duration of the current pulse. In this work, a peak current of ID = 15.6 A has been considered.
To determine the suitable RAOD2 value, the following key parameters must be considered: (1) AOD gate driver output voltage: this is the voltage provided by the test chip to the MOSFET gate, determining its switching between ON and OFF states; and (2) MOSFET threshold voltage (Vth): this represents the minimum gate-to-source voltage required to transition the MOSFET from its OFF state to its ON state. Using these two parameters, the value of RAOD2 can be calculated as follows:
RAOD1 is intended to maintain the MOSFET within the SOA during operation. For example, with ID = 15.6 A, the considered MOSFET [20] can handle a VDS of up to 11 V according to the revised SOA. Considering a maximum output voltage (VOUT,max) equal to 50 V (depending on the application), RAOD1 can be calculated using the following relationship:

RAOD1 = (VOUT,max − VDS)/ID = (50 V − 11 V)/15.6 A ≈ 2.5 Ω
This calculation ensures that the MOSFET operates within its derated SOA, considering the elevated temperature and the pulsing scenario.
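A short numerical cross-check of this sizing is sketched below. The RAOD1 relation and the numbers (ID = 15.6 A, VDS ≤ 11 V, VOUT,max = 50 V) follow the text; the RAOD2 relation assumes the resistor acts as a source-degeneration current limit (an interpretation not stated explicitly in the paper), and the gate-drive and threshold voltages are hypothetical values.

```python
# Sketch of the AOD resistor sizing. The RAOD1 relation and the numbers
# (ID = 15.6 A, VDS <= 11 V, VOUT,max = 50 V) follow the text; the RAOD2
# relation assumes a source-degeneration current limit, ID ~ (VGATE - VTH)/RAOD2,
# and VGATE/VTH below are hypothetical values, not from the paper or a datasheet.
I_D       = 15.6   # target peak discharge current [A]
V_DS_MAX  = 11.0   # allowed drain-source voltage from the derated SOA [V]
V_OUT_MAX = 50.0   # maximum application output voltage [V]
V_GATE    = 5.0    # AOD gate-driver output voltage [V] (assumed)
V_TH      = 2.0    # MOSFET threshold voltage [V] (assumed)

r_aod2 = (V_GATE - V_TH) / I_D            # sets the current limit of the discharge path
r_aod1 = (V_OUT_MAX - V_DS_MAX) / I_D     # drops the remaining voltage, keeping M2 inside the SOA
p_mos  = V_DS_MAX * I_D                   # instantaneous power in the MOSFET at the peak
p_r1   = r_aod1 * I_D**2                  # instantaneous power offloaded to RAOD1

print(f"RAOD2 ~ {r_aod2:.2f} ohm, RAOD1 ~ {r_aod1:.2f} ohm")
print(f"peak pulse power: MOSFET ~ {p_mos:.0f} W, RAOD1 ~ {p_r1:.0f} W")
```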
5. Conclusions
In this work, a cost-effective solution for driving a dynamic LED string using a single DC-DC converter/controller has been presented. The proposed solution builds on existing features of a standard DC-DC converter for LED applications—such as the current sense circuit and the driving stage—and adds a few external components to create a discharge path. The LED current is monitored and compared with a defined threshold, and the activation of a dumping element prevents current spikes that could reduce the LEDs’ lifespan. Proper power management of the discharge path is crucial, since the output voltage of the DC-DC converter decreases proportionally to the number of bypassed LEDs; consequently, larger LED transitions require the discharge path to dissipate more power. Experimental data confirm that the system reacts to fast changes in the LED load, limiting the current spikes.
Although the proposed solution exhibits good performance in terms of efficiency, speed and system cost, some drawbacks need to be considered. First, the peak discharge current is a key parameter, defining the power handling and dissipation within the system. The correct sizing of the discharging path is crucial for system reliability: the two limiting resistors play a fundamental role in fixing the peak current and in offloading the power to be dissipated by the NMOS, and the operational integrity of the MOSFET within its Safe Operating Area must be carefully evaluated. Furthermore, the device has been designed to manage fast discharges, but no strategy has been implemented for a fast charging phase; in these phases, the system reacts with the limited bandwidth of the DC-DC converter. To this end, an intelligent management of the COMP node can be considered to improve the performance in terms of speed.