Article

GEM3k: Architecture and Design of a Novel 3rd Generation High Channel Density Soft X-Ray Diagnostic System Towards Commercial Fusion Power Plants

by Andrzej Wojeński 1,*, Grzegorz Kasprowicz 1 and Maryna Chernyshova 2
1 Institute of Electronic Systems, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
2 National Centre for Nuclear Research, Andrzeja Sołtana 7, 05-400 Otwock, Poland
* Author to whom correspondence should be addressed.
Energies 2026, 19(4), 918; https://doi.org/10.3390/en19040918
Submission received: 29 November 2025 / Revised: 28 January 2026 / Accepted: 5 February 2026 / Published: 10 February 2026

Abstract

Achieving reliable, grid-scale electricity generation from nuclear fusion, as envisioned by the DEMOnstration Fusion Power Plant (DEMO) and future commercial reactors, requires unprecedented plasma stability and long-term control. This operational goal is fundamentally challenged by, among other factors, the dynamic nature of the high-temperature plasma and the need to monitor high-Z impurities, such as tungsten, which can severely compromise energy confinement, resulting in discharge disruption and damage to internal reactor walls. Real-time Soft X-ray (SXR) diagnostic systems are therefore an integral and critical component of fusion power plant infrastructure, providing essential temporal and spatial resolution data on these fast-evolving phenomena. To address the severe demands imposed by the extreme operating environment of future fusion reactors, such as DEMO (including intense neutron and gamma fluxes), this work details the current stage in the long-term development of an advanced and robust diagnostic system engineered specifically for technological preparation and future application in these high-fluence environments. This paper presents the third generation of the SXR measurement system, GEM3k, based on Gas Electron Multiplier (GEM) technology. This novel diagnostic utilizes a Field Programmable Gate Array (FPGA)-based architecture, specifically designed for the high-rate acquisition of energy- and spatially resolved plasma radiation distributions. The GEM3k design exploits the inherent radiation hardness of GEM detectors, positioning them as robust sensor units for monitoring plasma dynamics and impurity emissions in future fusion environments. The system readout comprises approximately 34,000 individual pixels mapped to nearly 3000 measurement channels in an XYUV coordinate configuration. This layout enables sub-millimeter spatial resolution simultaneously with a time resolution better than 10 ms.
Addressing the engineering challenges of such a complex high-density readout, this work details the comprehensive design of the GEM3k system, focusing on its architecture, electronics, performance estimations, and data distribution strategies. By enabling precise tracking of impurities and fast plasma behavior, the GEM3k system contributes to the stable, high-gain operation required for future fusion reactors. This directly supports the development of sustainable fusion energy and its eventual integration into modern electricity grids. Furthermore, the planned enhancement to a real-time operating mode could pave the way for a next-generation system directly integrated into reactor control loops. Currently in the prototype phase with initial hardware tests completed, the GEM3k design leverages our extensive experience with diagnostics developed for the JET and WEST tokamaks.

1. Introduction

With global energy consumption projected to rise by 340% to nearly 85,000 TWh by the end of the century [1,2], fusion energy represents a vital path toward high-output, carbon-free power generation. The transition from experimental plasma devices to commercial fusion power plants, such as the planned DEMOnstration Fusion Power Plant (DEMO) [3], necessitates robust and continuous control of plasma performance over long pulse durations. To achieve high net electrical output (e.g., >300 MW) and ensure machine safety, highly reliable real-time diagnostic systems are indispensable. Among these, Soft X-ray (SXR) diagnostics play a critical role in monitoring plasma performance.
In tokamaks utilizing tungsten (W) plasma-facing components (e.g., WEST [4,5,6], JT-60SA [7,8], ITER [9,10]), monitoring the spatial distribution of high-Z impurities is a central objective. Preventing W accumulation is critical to avoid performance degradation and precipitate disruptions [11,12,13,14,15,16,17,18,19,20,21,22]. Consequently, precise spatially and temporally resolved SXR measurements are essential for both fundamental studies and active plasma control [11,13,23,24,25].
Furthermore, SXR diagnostics are vital for machine safety, particularly regarding runaway electron (RE) dynamics [26]. RE beams pose a significant threat to internal components, making early detection crucial. In low-density scenarios or disruptions, SXR emission (1–10 keV) becomes locally dominated by RE bremsstrahlung interacting with impurities [27,28]. A toroidal SXR pinhole camera can capture these features as localized enhancements, providing sensitive data on beam formation and MHD interaction that complements hard X-ray diagnostics. Recent multi-energy SXR systems [29] have successfully demonstrated early imaging of RE birth, confirming the potential of toroidal cameras for characterizing these events in next-step devices.
However, the harsh environment of ITER’s deuterium–tritium phase poses a severe challenge. High neutron and gamma fluxes render conventional semiconductor detectors ineffective. Historical data from TFTR D-T operations [30,31] demonstrate that even shielded silicon detectors degrade rapidly due to neutron-induced lattice displacement. Consequently, standard solid-state sensors would require frequent, impractical replacement in a burning plasma environment [32].
This challenge has prompted the exploration of resilient SXR technology alternatives. Conventional (silicon) detectors are expected to be replaced with radiation-tolerant photon-counting arrays during or beyond the D–D phase [33,34,35]. One of the candidates is the Gas Electron Multiplier (GEM) detector [36,37,38,39,40]. Due to their high tolerance for intense neutron radiation, gas detectors are well suited for future power plants or fusion reactors, like ITER, where the low voltage ionization chamber is currently considered for the radial X-ray camera application [35,41,42,43]. The feasibility of this technology has been demonstrated at the WEST tokamak, a testing bed for the ITER divertor technology, where GEM detector-based diagnostics successfully delivered calibrated, spatially, and spectrally resolved SXR measurements [14,40].
GEM operational principles are detailed in [36,44,45,46]. Detector performance depends heavily on the optimization of two design stages: the construction of the GEM foil itself (geometry, operating voltages, gas mixture) and the layout of the readout structure. These parameters collectively define the detector’s gain, time response, saturation limits, and data processing complexity.
In this work, we present a novel GEM detector design featuring a high-density readout structure capable of processing a massive number of channels simultaneously. This architecture enables high-resolution 2D imaging which, when integrated into a toroidal viewing geometry, complements conventional poloidal tomography to allow for the reconstruction of three-dimensional emission structures. This multidimensional diagnostic approach is particularly significant for resolving complex spatial phenomena, such as the coupling between impurity transport and Magneto Hydro Dynamic (MHD) instabilities. The high spatial resolution, inherent to GEM detectors, paired with sufficient temporal resolution, allows for the tracking of fast-evolving, localized emission features in both the plasma core and edge.
The system’s modular design ensures adaptability to various fusion devices. As an initial step, the system is scheduled for installation on the TJ-II stellarator to perform first measurements in a real plasma environment [72,74,75].
The paper is organized as follows: Section 2 outlines the physical requirements and studies related to the GEM detector construction and XYUV readout board design; Section 3 includes literature studies related to existing technologies of diagnostic and acquisition system for high channel detectors; Section 4 focuses on the system architecture concept and model, including discussion about real-time applications; Section 5 describes the hardware modules, data distribution, and Field Programmable Gate Array (FPGA) signal processing; Section 6 presents the estimations of system performance and Section 7 reports on preliminary laboratory tests of the manufactured hardware and characteristics.

2. GEM Detector Design

The design stage of the GEM3k system was preceded by proper characterization, modeling, and simulation of the GEM detector construction and the corresponding readout board scheme. This Section therefore elaborates on the physics requirements, followed by the design of the complex XYUV readout board, the analyses performed, and the development of supporting software.

2.1. Physics Requirements

As established in the Introduction, resolving complex 3D phenomena, such as tungsten redistribution in neoclassical tearing modes and internal kink modes [76], or the anisotropic emissivity of runaway electron beams, requires diagnostic capabilities that transcend standard 1D lines of sight. A toroidal SXR pinhole camera, equipped with appropriate collimation, spectral filtering, and temporal resolution, can capture these features as localized enhancements and profile distortions along selected lines of sight. While tomography is the principal method for determining local emissivity, the inverse problem based on standard line-integrated data is mathematically ill-posed and often underdetermined [77,78,79,80,81]. To stabilize 3D reconstruction and resolve sub-millisecond transient events (e.g., magnetic reconnection, edge-localized modes), the diagnostic system must provide high data density with specific spatiotemporal precision. In this context, the high spatial resolution and 2D imaging capability of the proposed GEM detectors are essential. They could provide the necessary data density to stabilize the inversion, enabling precise 3D reconstruction of plasma structures that standard diagnostic sets cannot fully resolve.
For the development of this tangential-view camera, physics goals defined the critical design parameters, such as (i) spatial resolution: to resolve plasma structures and anisotropic radiation patterns, a sub-millimeter pixel pitch is required; (ii) time resolution: to capture slow MHD events, the system must support sampling rates sufficient for at least 1–10 ms temporal resolution; (iii) dynamic range: the system must handle the broad dynamic range between standard thermal emission and intense bursts associated with, e.g., REs.
Tangential SXR imaging has been explored previously. More recently, a triple-GEM-based tangential pinhole camera was successfully operated on KSTAR [82,83], featuring a 10 × 10 cm² active area with a 12 × 12 pixel readout. While successful in imaging sawteeth and poloidal perturbations, upgrading to a higher-density readout is necessary for precise 2D tomography and runaway beam localization. Unlike hybrid “GEMpix” solutions that rely on time-over-threshold modes, which can suffer from pile-up at high fluxes [84,85], the proposed GEM3k system utilizes a patterned anode readout optimized for superior spatial and energy-resolved measurements under high-flux tokamak conditions.
The use of toroidal imaging opens new possibilities for studying azimuthal asymmetries in tokamak plasmas, still a largely unexplored area with potentially significant implications for confinement and impurity accumulation. These asymmetries may be particularly relevant in advanced scenarios or perturbed magnetic geometries and are expected to play an even more prominent role in non-axisymmetric systems such as stellarators.

2.2. GEM Detector XYUV Readout Construction

To meet the requirements for plasma intensity and signal-to-noise ratio (SNR), the detector is designed to operate with a reduced gas amplification factor (~10³) [61,71,86]. This places stringent demands on the front-end electronics but ensures operation below the saturation limit. The key system parameters defined for this readout design are the flux capability, optimized to handle up to 2 MHz of events per channel, and the pixel density, comprising 34,816 individual pixels on the readout board.
The design process was executed in two research stages. First, physics model simulations were performed to optimize the GEM foil (CERN, Switzerland) construction (geometry, voltage, gas mixture) and determine the electron cloud distribution, which defined the appropriate pixel size. The design targeted sub-millimeter spatial resolution to unambiguously reconstruct plasma radiation maps and perform energy discrimination [72,73]. These simulations dictated the pixel size required to accurately resolve photon positions. Simultaneously, expected analog signal waveforms were estimated using simulation tools [87,88,89,90,91] to define the electronic specifications. The primary outcomes of this stage established the physical layout:
  • Pixel geometry: Hexagonal pixels with a side length of 0.307 mm (approx. area 0.245 mm²), separated by a 75 µm clearance gap.
  • Channel mapping: A unique XYUV coordinate system where pixels are routed in series to form 3072 combined readout channels.
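As a quick cross-check of the pixel geometry listed above, the area of a regular hexagon with side s is A = (3√3/2)·s², which for s = 0.307 mm reproduces the quoted ≈0.245 mm² (a minimal arithmetic sketch, using only the values from the list):

```python
import math

# Regular hexagon area: A = (3*sqrt(3)/2) * s^2
side_mm = 0.307  # hexagon side length from the readout design
area_mm2 = (3 * math.sqrt(3) / 2) * side_mm ** 2
print(f"pixel area ≈ {area_mm2:.3f} mm^2")  # ≈ 0.245 mm^2, matching the text
```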
Translating these parameters into a physical Printed Circuit Board (PCB) presented significant challenges. The primary constraint was limiting the detector window to 10 cm × 10 cm while accommodating the dense pixel array, requiring a multi-layer PCB design.
A critical design consideration was channel capacitance. Since pixels connected in series form long signal paths on the backplane, parasitic capacitance accumulates. These channels connect directly to operational amplifiers (OPA858) in a Trans-Impedance Amplifier (TIA) configuration. It is essential to properly select feedback resistance, RF, and compensation capacitance, CF. To maintain high bandwidth (f−3 dB ≈ 10 MHz) and low noise, the datasheet indicates that stray parasitic capacitance must be minimized. Specifically, the maximum input capacitance per channel must be ≤10 pF.
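The impact of the capacitance budget can be illustrated with the standard first-order TIA approximation f₋₃dB ≈ √(GBW/(2π·RF·Cin)). The 5.5 GHz gain-bandwidth product is the OPA858 datasheet value, but RF = 20 kΩ is purely an illustrative number, not the design's actual feedback resistance:

```python
import math

def tia_bw_hz(gbw_hz: float, rf_ohm: float, c_in_f: float) -> float:
    """First-order closed-loop bandwidth estimate of a transimpedance amplifier."""
    return math.sqrt(gbw_hz / (2 * math.pi * rf_ohm * c_in_f))

GBW = 5.5e9  # OPA858 gain-bandwidth product (datasheet value)
RF = 20e3    # illustrative feedback resistance, ohms (not the final design choice)

for c_pf in (1.0, 5.883, 10.0):  # typical, worst simulated, design limit
    f = tia_bw_hz(GBW, RF, c_pf * 1e-12)
    print(f"C_in = {c_pf:6.3f} pF -> f_-3dB ≈ {f / 1e6:6.1f} MHz")
```

Even at the 10 pF limit the estimate stays well above the ~10 MHz target, which is why keeping each channel under that budget matters.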
Because the signal paths are densely packed across multiple layers, standard PCB design software could not accurately extract capacitance values. To address this, we employed advanced electromagnetic simulation tools, Ansys SIwave and Ansys Electronics Desktop (Q3D), version 2025R1, combined with the Altium Designer CAD-model exporter. Figure 1 presents the 3D models imported into the Ansys environment. Due to the design’s complexity, simulations were performed on one-quarter of the backplane board per run to verify the capacitance distribution for each individual channel.
The Ansys Q3D 2025 R1 (ANSYS Inc., Canonsburg, PA 15317, USA) software generated a matrix of parasitic capacitances for each channel (series of pixels) with respect to ground (GND). This process yielded two datasets: the backplane capacitance—that is, the capacitance inherent to each pixel-channel line on the detector board—and the Multichannel Analog-Front-End (M-AFE) input capacitance, the capacitance for each of the 128 input channels on the front-end board, covering the trace from the connector to the amplifier.
To determine the total parasitic capacitance affecting the TIA bandwidth, these values must be summed, including the connector capacitance (approximated at 0.2 pF per pin). However, the complex routing of the backplane, which utilizes 24 separate connectors, requires a precise mapping between each specific pixel-channel line and its corresponding M-AFE channel.
To facilitate this, a dedicated software tool, pixelMapper v1.0 (WUT, Warsaw, Poland), was developed (described later in this Section). This tool generates a JSON structure representing the complete topology of the backplane, linking channels to their specific connectors and AFE inputs. By combining the Ansys Q3D 2025 R1 capacitance data (exported in CSV format) with the JSON mapping files, the complete signal path characteristics were reconstructed for every channel in the system.
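The per-channel roll-up described above amounts to merging the Q3D exports with the topology mapping. The sketch below assumes both datasets are already loaded as plain dictionaries; the field names (`channel`, `afe_input`) are hypothetical placeholders, not the actual pixelMapper schema:

```python
CONNECTOR_PF = 0.2  # per-pin connector capacitance, as approximated in the text

def total_capacitance_pf(backplane_pf: dict, afe_pf: dict, topology: list) -> dict:
    """Sum backplane, M-AFE trace and connector contributions per channel.

    Data structures are illustrative: the real tool combines Q3D CSV exports
    with the pixelMapper JSON topology.
    """
    totals = {}
    for entry in topology:  # one record per pixel-channel line
        ch = entry["channel"]
        totals[ch] = (backplane_pf.get(ch, 0.0)
                      + afe_pf.get(entry["afe_input"], 0.0)
                      + CONNECTOR_PF)
    return totals

# Tiny synthetic example: one channel stays inside, one exceeds the 10 pF budget.
topology = [{"channel": "X001", "afe_input": "A01"},
            {"channel": "Y002", "afe_input": "A02"}]
totals = total_capacitance_pf({"X001": 0.9, "Y002": 5.883},
                              {"A01": 4.0, "A02": 6.5}, topology)
over = [ch for ch, c in totals.items() if c > 10.0]
print(totals, over)  # Y002: 5.883 + 6.5 + 0.2 = 12.583 pF -> flagged
```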
Figure 2 presents the resulting distribution of capacitance values for each pixel-channel line on the backplane board. The analysis of 889 independent signal lines indicates a highly favorable outcome for the TIA design. The majority of parasitic capacitance values with respect to GND are found to be below 1 pF, which significantly benefits the closed-loop bandwidth by shifting the dominant pole to higher frequencies. The maximum backplane capacitance observed for any channel was 5.883 pF, which remains well within the ≤10 pF design limit established for the OPA858 amplifier.
Figure 3 provides a comprehensive breakdown of the parasitic capacitance for the analyzed quadrant. Each stacked bar represents the total capacitance per channel, summing three distinct contributions: the pixel-channel line on the backplane (green), the M-AFE trace (blue), and the connector capacitance (included in the M-AFE value).
Statistical summaries of these results, grouped by connector (AFE board), are presented in Table 1. This analysis confirms that the majority of combined channels remain below the critical 10 pF limit. The pixel contributions themselves are minimal. The average total capacitance typically ranges between 5 and 7 pF, with a standard deviation of 2 pF. In Figure 3, certain blank areas appear where data is missing. This results from the cropping technique applied during Ansys SIwave simulations to limit the computational domain; channels located outside the selected region were excluded. In the dataset presented, the highest capacitance values are observed on AFE board 24. Since the software maps these values to specific physical channels, these high-capacitance outliers can be individually fine-tuned during hardware testing by adjusting the feedback resistance (RF) and compensation capacitance (CF).
The finalized PCB is a highly complex 10-layer design. The front side is dedicated exclusively to the hexagonal pixel matrix, surrounded by a continuous ground plane. The reverse side hosts 22 high-density 170-pin connectors that interface with the M-AFE boards. To achieve the required density, the manufacturing process supported 0.350 mm vias placed directly on pixel pads and maintained trace clearances of 75 µm.
For power distribution, a +12VDC voltage was selected to support the high power consumption of the active electronics. The rail is regulated independently by power management sections on each M-AFE board. To maximize compatibility with standard power units, the board supports multiple connector formats, including Advanced Technology Extended (ATX), PCIe, and Corsair Type 4. The manufactured board is shown in Figure 4.
Although the full detector assembly is currently being prepared for High Voltage (HV) tests, the standalone readout board serves as a critical verification platform for the signal acquisition electronics described later.
Given the scale of the design (over 30,000 pixels) and the complexity of the signal routing, the risk of layout errors was non-negligible. To mitigate this, a custom Python3-based automation tool, pixelMapper v1.0, was developed. This software verifies the physical PCB layout against the reference pixel map generated by the physics simulations (CSV format). It parses the Altium Designer Electronic Design Interchange Format (EDIF) and netlist files to validate signal routing at both the schematic and PCB levels. It automatically identifies discrepancies, such as unconnected polygons or routing errors, providing precise localization data. The tool was further expanded to include a Graphical User Interface (GUI) for visualization (Figure 5). This allows users to browse the channel-to-pixel mapping on selected AFE boards and identify pixel clusters.
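The core of such a layout check reduces to comparing two pixel-to-channel maps. A minimal sketch under simplifying assumptions (the real tool parses EDIF and netlist exports; here both maps are assumed to be already loaded as dictionaries, and all identifiers are hypothetical):

```python
def find_layout_errors(reference: dict, extracted: dict):
    """Compare the simulation-derived pixel map against the netlist-extracted one.

    reference / extracted: {pixel_id: channel_name}. Returns pixels with no
    extracted connection and pixels routed to the wrong channel.
    """
    unconnected = sorted(p for p in reference if p not in extracted)
    misrouted = {p: (reference[p], extracted[p])
                 for p in reference
                 if p in extracted and extracted[p] != reference[p]}
    return unconnected, misrouted

# Synthetic example: pixel P3 is unconnected, P2 is routed to the wrong channel.
ref = {"P1": "X001", "P2": "Y017", "P3": "U042"}
net = {"P1": "X001", "P2": "Y018"}
print(find_layout_errors(ref, net))  # (['P3'], {'P2': ('Y017', 'Y018')})
```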
This visualization capability is essential for initial laboratory tests. It enables the precise identification of signal lines required for cluster reconstruction and photon positioning, allowing for selective data acquisition and significantly reducing the data volume during the prototyping phase.
At present, the readout board has verified electrical connectivity and has undergone preliminary tests with prototype electronic boards (detailed in Section 7). While the full operational description of the detector system is beyond the scope of this paper, it is noted that the complete assembly requires dedicated gas handling and HV supply systems, as described in [92].

3. Overview of Existing Technologies

The development of the system architecture begins with an analysis of the GEM detector construction and its associated physics requirements. As described in the previous Section, the GEM readout board consists of 3072 individual readout channels. Based on physics simulations and aligned with data from tokamak experiments, it has been determined that each channel may experience radiation intensities up to 2 Mevents/s. This value represents the upper operational limit and serves as the primary performance criterion guiding the system design.
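These two figures set the worst-case aggregate load the architecture must absorb. A back-of-envelope estimate (the 2-byte-per-event payload is an assumption for illustration only, not a system specification):

```python
CHANNELS = 3072   # readout channels on the XYUV board
MAX_RATE = 2e6    # events/s per channel, upper operational limit

aggregate_events = CHANNELS * MAX_RATE
print(f"worst case: {aggregate_events / 1e9:.3f} Gevents/s")  # 6.144 Gevents/s

# Assumed 2 bytes of amplitude data per event (illustrative only):
BYTES_PER_EVENT = 2
print(f"raw payload: {aggregate_events * BYTES_PER_EVENT / 1e9:.1f} GB/s")
```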
Having defined the configuration and operational characteristics of the GEM detector, a review of currently available systems and technologies was necessary. Given the large number of channels, minimizing system size emerged as a key design objective. The best solution would therefore be to apply a custom Application-Specific Integrated Circuit (ASIC) dedicated and designed to work with GEM detectors. However, there are few published reports of such ASICs being used in existing installations.
For that reason, several approaches were evaluated to address this challenge. This Section outlines the main concepts considered, along with corresponding conclusions and rationale behind the final system architecture.

3.1. Complete Systems with Dedicated GEM-ASICs

Based on our research, several ASICs have been designed specifically as front-end units for GEM detectors, primarily for use in large high-energy physics (HEP) experiments. Notable examples of such custom GEM-tailored ASICs include:
The first ASIC, the APV25, is an analog front-end chip specifically designed for use with GEM detectors. It simultaneously processes signals from 128 input channels of the detector readout board. Connected directly to the readout board, it consists of a low-noise preamplifier, a shaper circuit, an analog shape processor, and a sample-and-hold unit. The parameters of the analog input stage, such as the matched detector capacitance and preamplifier gain, are fixed and embedded within the ASIC. The signal is fed into a 192-cell analog pipeline that terminates in a multiplexed, registered analog output, which time-multiplexes the input channels in a 128:1 ratio. The chip supports up to 128 channels, which is sufficient for the project; it combines the analog stage of the signal processing into a compact package and provides an output suitable for further digital processing. The chip operates at a maximum trigger rate of approximately 5 kHz, with a sampling frequency of about 40 MHz. Due to its multiplexed output and global trigger mode, it provides sequential samples from each channel in a pipelined mode. However, the APV25 lacks built-in zero suppression, so all data occupy the readout bandwidth, resulting in a global snapshot of the detector signals. This mode of operation was implemented in the second generation of the SXR-GEM system, currently operating at the WEST tokamak [97].
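The quoted figures already indicate why the APV25 readout path becomes the bottleneck. A back-of-envelope timing check (ignoring pipeline latency, frame headers, and digitization overhead):

```python
CHANNELS = 128   # inputs multiplexed 128:1 onto one analog output
F_MUX = 40e6     # multiplexed output clock, Hz
F_TRIG = 5e3     # maximum global trigger rate, Hz

frame_us = CHANNELS / F_MUX * 1e6   # time to shift out one snapshot
trigger_period_us = 1e6 / F_TRIG    # minimum spacing between snapshots
print(f"one frame: {frame_us:.1f} us, trigger period: {trigger_period_us:.0f} us")
# Shifting out a frame takes ~3.2 us, but global triggers arrive at most
# every 200 us (5 kHz) — far below the 2 MHz per-channel rate targeted here.
```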
The chip does not feature integrated clustering capabilities, so the clustering algorithms must be implemented externally. A key element to consider is the requirement for an external Analog-to-Digital Converter (ADC) chip interfaced with an FPGA. This setup requires the development of additional hardware, including interface boards, high-speed ADCs for signal digitization, an FPGA for data processing, and supplementary electronics to meet the ASIC’s operational requirements (e.g., clock generation, synchronization, power supply). Despite this, such a design remains relatively compact. With modern solutions such as the XADC integrated in FPGAs or RF-SoC devices, the system can be further optimized and miniaturized. Since a complete project based on the APV25 has already been developed, it includes all the necessary infrastructure for integration with the GEM detector. However, adapting the system to the current XYUV coordinate scheme may require a redesign of the architecture, introducing additional complexity. It is important to note that most components in this setup are custom-made, which may impact maintainability and scalability.
The second available option involves the use of VMM series ASICs, with the most recent version, VMM3a, currently used in the CERN ATLAS experiment [98]. These chips represent a significant advancement over the APV25, mostly due to their operation in the digital domain. Each VMM3a ASIC supports 64 input channels and integrates several advanced signal processing features, including zero suppression, independent channel triggering, and adjustable preamplifier gain. The chip is capable of handling trigger rates of up to 1 MHz per channel and includes an embedded ADC for monitoring readouts. The input capacitance supported by the chip is approximately 2 nF, which is higher than that of the previous design. The VMM-based hybrid board is composed of two ASICs and a dedicated Xilinx Spartan-6 FPGA. The full hardware setup consists of a Digital VMM adapter card (DVMM), a Front-End Concentrator (FEC), an SRS Power Crate 2k, and a PC working as a Data Acquisition (DAQ) computer. The system is equipped with dedicated software for configuration and data acquisition, supporting both charge and timing information readouts. Additionally, the VMM3a chip incorporates Single-Event Upset (SEU)-tolerant logic. The ASIC offers a broad range of built-in features, as indicated in [27,95,96], and is a complete solution for HEP applications of GEM detectors. The supporting infrastructure is based on custom-designed electronics and specialized hardware components. A comparative overview of the key features of both ASICs is presented in Table 2.
The comparative analysis of the discussed ASICs can be summarized as follows. The APV25, while being a GEM-dedicated ASIC with an integrated analog signal processing path, is not suitable for the requirements of this development. Its relatively slow readout path limits its applicability, particularly in scenarios requiring high trigger rates and precise event reconstruction, both of which are essential for our intended application involving particle position tracking. On the other hand, the VMM3a, a third-generation ASIC, represents a more promising candidate for the design. However, there are a few drawbacks. First of all, reliance on a custom ASIC introduces constraints on the system: the architecture becomes tightly coupled to the chip, and the entire design is bound to the dedicated chip and, potentially, its reference hardware setup. This dependency increases vulnerability to supply chain issues, such as limited availability of replacement parts, and introduces challenges related to production tapeouts and budget provisioning for long-term maintenance or upgrades. Furthermore, the supplementary electronic subsystems are also dedicated, custom solutions. While redesigning the supporting electronics to accommodate the VMM3a could be considered, such an effort may not be optimal in terms of resource efficiency or system flexibility.
Another option considered was the development of a custom-designed ASIC tailored specifically to the project’s functional requirements. This approach aimed to expand the team’s expertise in ASIC technology and establish new collaborations within the microelectronics domain. The proposed ASIC would be designed and optimized for a narrow set of functionalities, mostly implementing an analog stage and a dedicated ADC stage with a fast output. A Serializer/Deserializer (SERDES)-compatible output interface is envisaged to ease integration with modern FPGA platforms and to enable a standardized modular hardware architecture based on Commercial Off-The-Shelf (COTS) elements. However, this approach would require sufficient funding, primarily to cover the ASIC design itself. The design timeline would also be significant, involving extensive verification, simulation, and iterative testing cycles. Any redesign needed during this process could significantly delay the development and risk exceeding the intended development timeframe. Given these limitations, particularly with regard to time and budget constraints, the custom ASIC solution, while promising in theory, was ultimately deemed infeasible and set aside in favor of more practical alternatives.

3.2. Present Generations of GEM-FPGA Based SXR Systems

Another approach we considered was to build on our previous designs. The team has developed two distinct generations of data acquisition systems for use with GEM detectors, both of which were successfully deployed on the CCFE JET and CEA WEST tokamaks [97,99,100,101,102,103,104,105,106,107,108]. A comparative summary of their construction and key features is presented in Table 3.
One particularly noteworthy feature of the first-generation system is its capability to perform the real-time charge computations directly within the FPGAs. This functionality is especially relevant for the current project, given the need for efficient, on-board data processing. The system is also compact and could, in principle, be scaled to meet the required number of channels. However, several drawbacks limit its suitability. First, the technology is now outdated, and achieving full detector coverage would require a large number of FPGAs. Additionally, the original system was optimized for linear 1D and XY (2D) detector layouts, integrating all channels within a single DAQ module. Adapting this design to the current, more complex topology would be challenging. The system structure and its interconnections would also require a considerable effort to modernize and repurpose.
The second-generation system, currently in operation at the CEA WEST tokamak, offers a significantly more modern architecture than its predecessor. It relies on high-speed Peripheral Component Interconnect Express (PCIe) links to stream data to an embedded server in the DAQ unit through a custom PCIe switch supporting up to 8 FPGAs. The system is capable of transmitting raw analog signals captured directly at the GEM backplane within each time window consisting of 40 samples, a functionality rarely reported in the literature. This architecture allows simultaneous real-time histogramming (for spectra output) and offline data processing, including full raw data storage for validation or algorithm development. The time resolution of the spectra is configurable by the user. In contrast to the first-generation system, most of the processing in this version is implemented in high-performance C++ or Matlab routines on the DAQ server. Despite its strengths, the system presents some areas for improvement. Specifically, there is a need to transition from custom components to more standardized, commercially available ones. Additionally, given the anticipated number of readout channels, more than 3000, denser front-end boards (both analog and digital) are needed, making the form factor an important consideration. Due to the potentially very high data throughput from all readout channels, the DAQ section on the server side has to be re-engineered. Nevertheless, the core algorithmic framework, particularly that implemented in the FPGAs, has been extensively validated in a tokamak environment and can be effectively reused for the new-generation design.
Based on our extensive experience and a literature review, we have decided to develop a new third-generation system that uses the latest technologies to meet all requirements.
This new approach is primarily based on discrete electronics and intensive digital data processing, focusing on high-bandwidth streaming. As with previous generations, the analog path has been deliberately simplified, and the system architecture has increasingly shifted towards digital signal processing.

4. System Architecture Concept and Model

Based on the analysis presented, we have decided to advance our electronics beyond previous generations by constructing a completely new system with a redesigned architecture. This design takes advantage of recent technological progress, particularly in the field of high-density integrated circuits, which now offer significantly enhanced functionality compared to a few years ago. The key design principles and parameters that guided the development of the system include:
  • Using COTS components wherever feasible;
  • ADCs that offer the highest possible number of integrated input channels per package, while maintaining an appropriate sampling frequency for the application;
  • Minimization of the analog path—functions such as filtering, amplification, etc., are either reduced to the bare minimum or embedded within the Integrated Circuits (ICs);
  • Maximization of channel processing capacity per FPGA unit;
  • Strong emphasis on standardization of the design—minimal use of custom mechanical or communication solutions;
  • Cost-effective hardware design due to the number of channels;
  • Simplified data acquisition architecture, both in terms of component count and mechanical complexity;
  • No real-time requirement.
Based on the presented requirements and design philosophy, the conceptual architecture of the GEM3k system is presented in Figure 6. The proposed architecture meets all the key criteria outlined above, and incorporates the necessary elements of dedicated electronics design, taking into account our prior experience. Section 5 and Section 6 provide a detailed discussion of the hardware design and implementation of this layout.
The readout board, directly connected to the GEM detector, collects current signals resulting from the photon-electron conversion occurring in the detector gas medium [36]. These current signals need to be converted into voltage signals over time, a process typically accomplished using op-amps in a Trans-Impedance Amplifier configuration. This functionality is implemented in Multichannel Analog Front-End boards (M-AFE). Due to the high number of channels in the detector, this analog stage must be designed with high component density. Additionally, this stage incorporates essential analog functionalities, such as offset configuration and a calibration pulse path.
Next, the signal is transferred to Multichannel Analog-Digital Boards (M-ADB), primarily to digitize the data for further processing. The ADCs must have a sampling frequency appropriate for the signal requirements. Furthermore, these ADC chips must support as many input channels as possible to ensure the cost-effectiveness of the system, while also reducing the occupied space and increasing the channel capacity per FPGA.
The subsequent stage involves signal processing and data streaming performed within FPGA boards (FPGA-PSBs). These boards are responsible for signal acquisition using appropriate triggering algorithms, thus significantly reducing the volume of data generated by the system. Precise time-synchronization across the boards is critical, as it enables accurate cluster reconstruction and identification of the photon energy and position. Additionally, the FPGAs are capable of executing initial data preprocessing to limit the data streams further. Modern models offer an increased number of high-speed Gigabit Transceivers (GBTs) alongside an independent PCIe interface. Multiple streaming points are essential, since the assumed volume of transmitted data is vast. For the newest generation, we focus on interfaces other than PCIe for cost optimization and ease of implementation; PCIe IP cores are far more constrained in an FPGA than GBT links. With access to standardized Ethernet-like multigigabit connections, we enable a more distributed platform that optimizes performance across all system components and prevents data loss. FPGA boards are available as COTS modules, along with extension boards such as streaming units. Data output from the FPGA is streamed to the Data Processing Units (DPUs) for final processing. The algorithms implemented on these servers are designed to manage the distributed nature of the pixel data and perform accurate cluster reconstruction.
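As a rough illustration of how FPGA-side triggering reduces the data volume, the following Python sketch models threshold-based event extraction from a raw sample stream. The threshold, window length and pre-trigger depth are illustrative assumptions, not the actual firmware parameters.

```python
def extract_events(samples, threshold, window=32, pre=4):
    """Toy model of FPGA-side triggering: emit fixed-length windows
    around threshold crossings instead of streaming every raw sample.
    Threshold, window length and pre-trigger depth are illustrative."""
    events = []
    i = max(pre, 1)
    while i < len(samples) - window:
        if samples[i] >= threshold and samples[i - 1] < threshold:
            start = i - pre
            events.append((start, samples[start:start + window]))
            i = start + window            # skip past this event's window
        else:
            i += 1
    return events

# 1000 raw samples containing a single Gaussian-like pulse
trace = [0.0] * 1000
trace[500:510] = [10, 40, 120, 300, 450, 300, 120, 40, 10, 5]
evts = extract_events(trace, threshold=100)
print(f"{len(evts)} event(s); kept {len(evts) * 32} of {len(trace)} samples")
```

Even in this toy case, only 32 of 1000 samples reach the output stream; in the real system the reduction depends on the per-channel event rate.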
Having multiple FPGA boards with acquisition and streaming modules necessitates a suitable carrier infrastructure. For this purpose, we use standard COTS elements that are available on the market. Among them, the Microtelecommunications Computing Architecture (µTCA) standard is particularly notable as a versatile, high-performance platform. It is widely adopted for high-bandwidth communication in modern equipment, including its specialized revision for HEP experiments. It provides an integrated, all-in-one environment, which is particularly well suited to hosting FPGAs in the Advanced Mezzanine Card (AMC) format. The main advantages of using the µTCA format are:
  • Availability of COTS boards with FPGAs
    Reduces board costs thanks to continuous commercial production (as opposed to custom designs). Hardware functionality is tested by the manufacturer at the production stage.
  • Supports AMC boards with FPGAs
    Highly standardized design, including the high-speed connections required by this design. Various available AMC models with FPGA Mezzanine Card (FMC) sockets can handle the large number of input signals foreseen for the designed detector.
  • High-speed backplane supporting various data streaming configurations
    Within the GEM3k system implementation it provides connections for output data streaming, including Rear-Transition Modules (RTMs), for external data distribution at high bandwidths.
  • Integrated management links
    For this kind of large-scale design, the management links defined within the standard make for a more uniform handling of the complex GEM3k configuration. This also includes a CPU platform supported by µTCA for management, as well as lines for distributing trigger and clock signals.
  • Integrated various signal distribution links
    In GEM3k it is necessary to keep the clock synchronized among the measurement boards (with minimal jitter, for timestamping purposes). This feature is already provided by µTCA standard backplanes and compatible AMC boards, eliminating the need for additional design work.
  • Point-to-point communication
    The system can be further extended for complex real-time data computation (clustering, etc.) thanks to the in-standard high-speed point-to-point communication between AMC boards.
  • Automatic system maintenance: power management, cooling control, failure detection, platform supervision
    There is no need to design additional protections for the FPGA boards, since all hardware parameter monitoring is handled by the µTCA platform based on the Intelligent Platform Management Interface (IPMI) standard.
  • A unified mechanical standard
    There is no need to define a custom chassis, supports or other infrastructure, as the mechanical design is clearly defined by the µTCA standard. The mechanical design of custom boards (for example, FMC measurement cards) is likewise strictly defined by the µTCA, AMC and FMC standards.
The Data Streaming Backplane (DSB) serves as a data distribution platform between the FPGAs and the final computation modules. The transmission depends on the selection of the final transmission link, which is described later. This can be a PCIe root-complex switch, a high-performance Ethernet switch, or a point-to-point gigabit connection. Although the primary function of the backplane is to route data, it can also be used to provide a quality-of-service structure and balance data loads. This layer does not modify the data and can generally be omitted.
The final stage consists of the DPUs. In the proposed architecture, high-performance servers are used, and costs are optimized. Building on the success of previous system generations, raw data analysis is again implemented on the server platform. However, due to the significantly increased number of channels, approximately 300 times more than in prior setups, preliminary data preprocessing is now implemented directly in the FPGAs to reduce the data streams. This strategy of moving from raw to much more processed data reduces the overall data volume. Initially, raw data are essential for system verification and signal quality assessment. The servers are equipped with the communication interfaces required to receive the data streams in the same standard as the FPGAs, via the DSB. More details are provided in the following Sections.
The architectural approach of the system is directly aligned with the primary design objective of reducing costs without compromising performance. Starting from the top, manufacturing of the GEM readout board constitutes a relatively minor contribution to the overall budget. In contrast, the signal acquisition boards incorporate high-performance components in significant quantities and therefore represent a major cost factor due to their scale. The final number of channels influences the type of connectors, which must be high-density and potentially low-resistance. The use of FPGAs makes it possible to select COTS boards that have already been designed, reducing total costs. However, this also necessitates the development of custom M-ADBs in a standardized format, most commonly using HPC-FMCs. The data computation layer (DPU) is cost-effective, since it uses common servers that are available in multiple configurations and at different prices. Taking all system parameters into account, the architecture is composed of carefully selected, cost-optimized components.
The next Section provides a detailed analysis of the hardware architecture, which is treated separately from the data processing path because it calls for a different design analysis. This includes descriptions of the M-AFE and M-ADB boards in connection with the FPGA-PSBs, which interface with the GEM readout board discussed in Section 2.

System Adaptation for Real-Time Applications

The presented triple-GEM SXR imaging system was initially developed for offline technology validation, focusing on high-fidelity characterization of plasma emissivity, impurity transport, and RE beam dynamics rather than immediate integration into existing tokamak plasma control systems. However, real-time SXR diagnostics are essential for next-generation fusion devices, particularly DEMO, where 10 ms latency control cycles demand rapid detection of critical events with 10–100 ms of time resolution [109]. As mentioned, SXR imaging provides invaluable data for detecting impurity accumulation (W transport, core peaking), localizing RE beam (birth, growth, wall impact), monitoring transient events (edge localized modes, sawteeth, MHD precursors), and identifying disruption precursors (plasma electron temperature cooling, q-profile changes). In plasma devices, real-time feedback enables protective actuation via gas injection, impurity seeding, or magnetic reconfiguration, preventing damage during disruptions. DEMO baseline documentation identifies SXR tomography as mandatory for its safe steady-state and transient operation.
As it is difficult to describe how the system can be adapted for real-time use without a specific device description or knowledge of the physics requirements, we elaborate on the topic to provide a general overview of how this could be achieved in an optimal way.
Although the current prototype is designed for high-resolution physics studies, the system architecture can be adapted for real-time control loops with a latency of less than 10 ms. The primary challenge in the current configuration stems from the design of the GEM readout board itself, which features an extremely high resolution XYUV pixel structure. This sophisticated layout introduces three complexities:
  • Complex routing: Non-linear mapping where physical pixel neighbors are often routed to different electronic channels (e.g., GEM channel #1 is not necessarily adjacent to channel #2 on the ADC).
  • Cluster ambiguity: A single photon often activates multiple pixels (electron cloud spread), requiring computationally intensive clustering algorithms to determine the precise interaction point.
  • Data volume: The high pixel count generates a massive data stream.
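The cluster-ambiguity point can be illustrated with a minimal charge-weighted centroid computation. This sketch treats pixels as carrying direct (x, y) coordinates, deliberately glossing over the XYUV strip reconstruction; all positions and charge values are illustrative.

```python
def cluster_centroid(hits):
    """Charge-weighted centroid of one photon's pixel cluster.
    `hits` is a list of (x, y, charge) triples; positions and charges
    are in illustrative units."""
    qtot = sum(q for _, _, q in hits)
    x = sum(px * q for px, _, q in hits) / qtot
    y = sum(py * q for _, py, q in hits) / qtot
    return x, y, qtot

# Example: a 3-pixel cluster, as in the low-energy case described above
hits = [(10, 5, 120.0), (11, 5, 300.0), (12, 5, 90.0)]
x, y, q = cluster_centroid(hits)
print(f"photon at ({x:.2f}, {y:.2f}), total charge {q:.0f}")
```

The centroid and total charge together yield the photon position and a measure of its energy; the real algorithms must additionally resolve which hits belong to which photon under pile-up.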
In order to achieve real-time operation, it is largely sufficient to optimize the GEM detector backplane board and/or the GEM-foil stack-up in one of the following ways:
  • Channel distribution optimization: Linear arrangement of channels to the processing stage. The corresponding pixels on the readout backplane should be connected incrementally to the system measurement channels.
  • Clustering simplification: Individual pixels or interconnected pixels can be redesigned—direct influence on the cluster identification algorithm due to ambiguity of photon position.
  • Readout coordinates: Selection of the optimal coordinate system of the pixels, such as X, XY, XYU, XYUV, etc.—more dimensions require more interconnections between FPGAs.
  • Electron cloud size optimization: Adjusting the GEM parameters by simulation to obtain an optimal size of the amplified electron cloud in relation to the readout pixel geometry—reducing the number of pixels activated by a single photon absorption.
  • Channels optimization: Potentially reducing the number of channels—depending on plasma physics requirements; this reduces the required bandwidth and resources, and improves the clustering algorithms as well as the exchange of data between FPGAs.
Due to the system’s modular and versatile design and optimized readout backplane structure, the rest of the system can be introduced into real-time operation mode with minimal effort and at low cost.
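To illustrate why the channel distribution optimization matters, the toy example below contrasts a linear and a non-linear channel-to-pixel mapping; both tables are hypothetical, as the real routing is defined by the readout-board netlist.

```python
# Hypothetical channel -> pixel tables; the real routing is defined by
# the readout-board netlist.
nonlinear_map = [7, 2, 5, 0, 6, 1, 4, 3]
linear_map = list(range(8))

def are_neighbors(ch_a, ch_b, mapping):
    """Two channels carry physically adjacent pixels iff their mapped
    pixel indices differ by one."""
    return abs(mapping[ch_a] - mapping[ch_b]) == 1

# With linear routing, a streaming clusterer only has to compare
# consecutive channels; with non-linear routing it needs a full lookup.
print(are_neighbors(0, 1, linear_map))       # adjacent channels, adjacent pixels
print(are_neighbors(0, 1, nonlinear_map))    # adjacent channels, distant pixels
```

With an incremental (linear) mapping, neighbor checks reduce to comparing consecutive channel indices, which is far cheaper to implement in FPGA logic than a full lookup table spanning several boards.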
The system can benefit more from the chosen µTCA standard, since its backplane is designed to incorporate a few extra connections [110]:
  • FatPipe interconnections between MCH and AMC boards—fast external manager connected to MCH to compute edge cases for clusters (fabrics D, E, F, G).
  • Ports 12–15 Direct Connection (extended connector)—interconnections between neighboring FPGAs in AMC slots; it enables direct in-FPGA computation of edge cases for clusters.
  • RTM connector—add-on board, dynamic distribution of data using gigabit, low-latency serial interfaces, either in point-to-point streaming (i.e., Aurora interface [111]), or including low-latency Ethernet switches modes.
All of the aforementioned options typically introduce sub-microsecond latency to the data path, which is perfectly acceptable for spectra computations requiring millisecond time integrals.
The end-point connection typically varies between tokamaks; however, one can consider two different scenarios for end-consumer data distribution:
  • Custom data streaming based on Ethernet-compatible high bandwidth links (infrastructure specific).
  • Integration with real-time data distribution networks, like Dolphin RT network, PCIe Reflective Memory [112].
The first option covers data distribution over a standard Ethernet network tuned for low latency, which can be very efficient for real-time data distribution. The 10 Gbps Media Access Control (MAC) streaming IP core (xGbE AXI4S-MAC; using Layer 2 Ethernet frames) designed at Warsaw University of Technology achieves a point-to-point latency of 328 ns, which makes it suitable for real-time data distribution. This approach does not require any specific hardware, only compatible FPGAs and SFP transceivers (available within the GEM3k design).
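As a back-of-envelope check that such links stay well inside the real-time budget, the sketch below combines the reported 328 ns MAC latency with the Layer-2 frame serialization time at 10 Gbps; the payload sizes are illustrative, and switch hops and receive-side handling are ignored.

```python
MAC_LATENCY_NS = 328      # reported point-to-point latency of the MAC core
LINK_GBPS = 10.0          # 1 bit takes 0.1 ns on the wire

def serialization_ns(payload_bytes, overhead_bytes=38):
    """Wire time of one Layer-2 frame at 10 Gbps.  Overhead:
    preamble+SFD (8) + Ethernet header (14) + FCS (4) + inter-frame gap (12)."""
    return (payload_bytes + overhead_bytes) * 8 / LINK_GBPS

for size in (256, 1500):                       # illustrative payload sizes
    total = MAC_LATENCY_NS + serialization_ns(size)
    print(f"{size:5d} B payload -> ~{total:.0f} ns one-way")
```

Even a full-size 1500-byte frame stays below 2 µs end to end, roughly four orders of magnitude under the 10 ms control-cycle budget.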
The second approach, which uses PCIe cards, would require additional low-cost infrastructure to provide the PCIe root-complex feature. This is normally found in servers; however, COTS AMC boards supporting hybrid CPU-FPGA designs are also available, such as AFCZ [113]. These cards include a Xilinx Zynq UltraScale+ MPSoC FPGA, which is capable of incorporating Linux, a PCIe root complex and FPGA logic on a single chip. Additionally, the use of an MCH and PCIe over Fabric makes it easier to arrange these elements mechanically.
It is worth mentioning that, in the present system, the FPGA firmware implements a real-time algorithm for computing the charge of each signal event independently on every measurement channel. The resulting triplet of charge, time and position is then transferred to the servers for cluster identification. Data transmission is expected to rely on the ultra-low-latency xGbE AXI4S-MAC distribution in real-time mode.
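A possible fixed-size encoding of such a triplet is sketched below; the 16-byte layout and field widths are assumptions for illustration, not the actual firmware format.

```python
import struct

# Assumed record layout (little-endian, 16 bytes total):
#   charge    : uint32  (pulse charge integral, ADC units)
#   timestamp : uint64  (sample-clock ticks)
#   channel   : uint16  (position, i.e., measurement-channel index)
#   padding   : 2 bytes (alignment)
FMT = "<IQHxx"

def pack_event(charge, timestamp, channel):
    return struct.pack(FMT, charge, timestamp, channel)

def unpack_event(rec):
    return struct.unpack(FMT, rec)

rec = pack_event(charge=51234, timestamp=123456789, channel=2071)
assert len(rec) == 16
assert unpack_event(rec) == (51234, 123456789, 2071)
```

A 16-byte triplet replaces the 64-byte raw window (32 samples of 16 bits), a fourfold reduction per event before any clustering is applied.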
Details about the final FPGA firmware design and its performance under experimental conditions are planned for the next paper.

5. Hardware Architecture and Specification

The described hardware is structured into three primary segments: analog boards, digital boards, and the FPGA processing and streaming part. This Section outlines the design motivations, key parameters, and technical decisions that guided the development of the hardware elements.

5.1. Multichannel Analog-Front-End Boards (M-AFE)

Given the significant number of GEM detector channels and the limited space available on readout boards, the analog electronic system must be as compact and simplified as possible. To meet this requirement, the core amplification stage uses a TIA configuration based on the high-speed OPA858 operational amplifier. This component was selected to provide a usable output voltage swing of 1 V. Based on an analysis of available space and component dimensions, each M-AFE board was designed to host 128 independent measurement channels.
To verify channel operation and facilitate initial gain calibration, a dedicated charge injection circuit is incorporated into each of the measurement paths. An SPI-controlled logic circuit generates the calibration signal (CAL_IN). Calibration and testing signals are distributed via daisy-chained HC595 registers, enabling the selective, individual generation of pulses on any of the 128 channels.
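A sketch of how the pattern for the daisy-chained HC595 registers might be generated to select a single calibration channel is shown below; the bit and byte ordering are assumptions, as the real mapping is fixed by the board netlist.

```python
def cal_select_bytes(channel, n_channels=128):
    """Shift-register pattern that enables a calibration pulse on exactly
    one of 128 channels via 16 daisy-chained 8-bit HC595 registers.
    Bit and byte ordering here are assumptions; the real mapping is
    fixed by the board netlist."""
    assert 0 <= channel < n_channels
    pattern = bytearray(n_channels // 8)       # 16 registers of 8 bits
    pattern[channel // 8] = 1 << (channel % 8)
    return bytes(pattern)                      # shift out over SPI, then latch

sel = cal_select_bytes(37)
assert sum(sel) == 1 << (37 % 8)               # exactly one bit set overall
assert sel[37 // 8] == 1 << 5                  # channel 37 -> register 4, bit 5
```

Shifting the 16-byte pattern out over SPI and latching it asserts CAL_IN on exactly one channel, which is what allows per-channel gain calibration.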
To compensate for dispersion in the parameters of the components and to set the optimal operating point for the subsequent ADC stage, each input channel features individual offset control via Digital-to-Analog Converters (DACs). The AD5674 (16-channel, SPI interface) was selected for this purpose. Consequently, eight DAC chips are required per 128-channel M-AFE card. To manage the high density of control lines effectively, an HC138 demultiplexer is used to reduce the number of required SPI chip-select lines significantly. Prior to the final board design, the OPA858 amplifier performance, particularly its pulse response function, was validated using a simplified measurement setup and a prototype GEM detector.
Power distribution presented a significant challenge. With a total of 24 M-AFE boards required for the full system, the cumulative current demand exceeds 60 A, so distributing this current at low voltage would be problematic. Therefore, a local Point-of-Load (PoL) converter strategy, powered by a higher voltage bus (approximately 12 V) was adopted. This layout is compatible with the ATX standard and allows using standard commercial cabling that is rated for high currents. The system uses high-efficiency converters with a built-in coil to minimize electromagnetic interference (EMI) and facilitate return current control on the PCB. To further suppress noise, a two-stage LC filter has been implemented at the output.
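The motivation for the 12 V distribution bus can be shown with simple arithmetic based on the stated 60 A cumulative demand; the low-rail voltage and converter efficiency below are assumed values for illustration.

```python
TOTAL_LOW_RAIL_A = 60.0   # cumulative demand stated for all 24 M-AFE boards
LOW_RAIL_V = 3.3          # assumed amplifier supply rail (illustrative)
BUS_V = 12.0              # ATX-compatible distribution bus
POL_EFFICIENCY = 0.9      # assumed point-of-load converter efficiency

power_w = TOTAL_LOW_RAIL_A * LOW_RAIL_V          # load power on the low rail
bus_a = power_w / (BUS_V * POL_EFFICIENCY)       # current drawn from the 12 V bus

print(f"{power_w:.0f} W of load -> {bus_a:.1f} A on the {BUS_V:.0f} V bus")
```

Under these assumptions the 12 V bus carries roughly a third of the low-rail current, which is what makes standard commercial high-current cabling viable.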
The final design challenge involved the transmission of high-density analog signals to the M-ADBs. AMC connectors were selected to address this. These connectors significantly facilitate the mechanical construction of the GEM backplane and enable the M-AFE module to maintain a standard width of 80 mm, as dictated by the mechanical constraints of the detector chassis. To ensure high signal integrity and isolation between channels during transmission, Samtec QSH/HQCD series micro-coaxial cable assemblies were selected, providing up to 192 micro-coaxial wires. The designed M-AFE board model is presented in Figure 7.

5.2. Multichannel Analog-Digital Boards (M-ADB)

In order to accommodate the full system requirement of over 3000 measurement channels, the M-ADB must interface these high-density signals with the FPGA processing units. Consequently, the board standard must support high-density connectivity and be compatible with COTS FPGA carriers. For this purpose, the FMC standard in the High Pin Count (HPC) configuration was chosen.
Using the FMC standard inherently constrains the mechanical form factor to the single-width dimensions of 69 mm × 76.5 mm. This physical limitation necessitates a highly efficient design strategy to maximize channel integration. To achieve the required density, the AFE5832 integrated circuit series from Texas Instruments was selected. Although these ICs were originally designed for ultrasound applications, they are fully capable of operating in a standard ADC mode, which is suitable for this diagnostic application. Each IC integrates 32 channels, featuring not only the ADCs but also anti-aliasing filters and programmable low-noise Variable Gain Amplifiers (VGAs).
The M-AFE boards are interfaced using high-density micro-coaxial cables from the Samtec QSH/HQCD series. The 192-pin version was chosen to ensure high isolation between channels while maintaining the necessary connection density.
A critical design consideration is the input signal conditioning. Using the AFE circuits in DC-coupling mode requires the input common-mode offset to be set to 0.9 V, with the primary offset being generated by the DACs on the M-AFE board. However, voltage drops along the QSH cables can introduce inaccuracies. To compensate for this, additional DACs were implemented directly on the M-ADB, enabling fine-tuning of the offset regulation at the ADC inputs.
Integrating four AFE5832 units on a single board provides a total of 128 input channels. However, routing 128 differential pairs uses almost all of the available pins on the FMC-HPC connector, leaving only a few LVCMOS lines for communication and control. Rigorous signal management was therefore essential, involving the implementation of various I2C-to-SPI converters to minimize the number of control lines required.
Finally, the power supply section faced significant space constraints due to the need to generate multiple voltage rails within a limited area. To address this, quadruple DC/DC modules with integrated inductors were employed, specifically, the MPM54304 ICs. This choice significantly reduces the footprint of the power topology. The complete acquisition board is shown in Figure 8.

5.3. FPGA Multichannel Streaming-Processing Boards FPGA-PSB

With over 3000 channels, a measurement system demands a carefully planned modular design to ensure ease of maintenance and management. It is proposed that the µTCA platform be used in the form of a 12-module cabinet. As previously mentioned, the boards are installed in a standardized form, such as AMC boards, with optional extra extensions provided by RTMs.
Using the µTCA standard ensures that power and thermal management are automatically handled by the crate’s built-in MCH infrastructure. The crate is also equipped with redundant power supplies, coolers, and a complete communication infrastructure. The backplane provides numerous standardized multigigabit interfaces, both for communication with the host and for point-to-point links. There are also internal multidrop LVDS lines that can be used for the timestamping mechanism across the boards.
Another advantage of adopting the µTCA standard is the Central Processing Unit (CPU) and MicroTCA Carrier Hub (MCH) components. The MCH serves as the main system controller and can implement either a high-speed network or PCIe interfaces thanks to its PCIe root complex of various generations, which is entirely sufficient for control or system-preview analysis. The CPU can also act as an internal local controller, managing the entire system independently of the user’s PC. This approach therefore significantly minimizes reliance on external components or modules that would otherwise need to be connected to the main acquisition part of the system.
At this stage of development, our aim is to make use of available COTS components, particularly in view of the µTCA standard revision for physics applications, which has resulted in FPGA AMC boards becoming available on the market. However, due to the design decisions made for the custom M-AFE and M-ADB electronic boards, we need an FPGA-AMC board with two fully routed FMC-HPC slots. This significantly reduces the number of available COTS modules. Based on the M-ADB design, featuring 128 channels per FMC board, 24 such boards are required for the complete system. When it comes to selecting a proper µTCA configuration, there are two options: deploying two separate crates, each housing 12 FMC-AMC modules, which significantly increases the overall system cost, or identifying an AMC module equipped with an FPGA capable of supporting two fully routed HPC-FMC slots.
In our evaluation of budget-friendly COTS FPGA boards, several candidates featuring various FPGA models and equipped with FMC connectors were identified, all at competitive prices. However, most boards offered only a single fully routed HPC-FMC slot, necessitating the use of 24 boards for signal processing. Conversely, boards with two connectors often lacked full signal connectivity to the FPGA, preventing them from utilizing the full functionality of the designed boards. Additional external streaming connectivity was also limited. After extensive analysis, a commercially available solution was identified, which fulfills all the system requirements. The FPGA AMC AFCKU board is equipped with a Xilinx UltraScale Kintex FPGA, featuring two fully populated HPC-FMC connectors and connectivity to RTM via high-speed GBTs. This provides up to 16 high-speed, multigigabit links for direct use with the expansion board and is entirely sufficient to optimally support all the measurement channels in a compact and modular design that is easy to maintain. Additionally, the entire design can be divided into acquisition and processing parts, which is often crucial in applications such as tokamaks, where the physical space around the machine is highly limited.
This configuration also includes versatile standardized RTM boards with various functionalities. The COTS RTM board RTM-4SFP-3QSFP was selected. This board provides 16 links in the SFP+ standard, achieving a throughput of 10 Gbps per lane. Additionally, it is compatible with the AMC AFCKU. This is important because proper signal mapping is necessary to match the FPGA Gigabit Transceiver type H (GTH) quad blocks. The complete setup, consisting of an FPGA-PSB with an attached RTM and one FMC board (M-ADB), is shown in Figure 9.

6. System Performance Estimations and Data Distribution

In order to accurately determine the specifications for data processing equipment and the data distribution infrastructure, it is essential to estimate the overall system performance. For the data acquisition architecture presented here, such an evaluation needs to consider the required bandwidth, cost optimization, ease of implementation, and availability of COTS components. The primary system requirement is the simultaneous processing of 3072 channels with an estimated peak event rate of up to 2 million events per second (2 Mevents/s). This performance target informs subsequent detailed analysis and system architecture decisions.
To properly estimate the performance within the context of relevant physics studies, it is necessary to examine a specific development use case. The initial design phase focused on plasma scenarios for the DTT tokamak [72]. Simulations presented in [72] (specifically Figure 4) map the plasma radiation flux intensity across the detector surface at the individual-pixel level. These results indicate that the central region of the detector is subject to significantly higher irradiation than the edges, with radiation rates reaching up to ~2 Mevents/s per pixel. To accommodate such intense localized radiation, the GEM detector readout board employs a specialized channel distribution strategy based on XYUV coordinates to spread the radiation load equally among the readout channels. This design feature averages the event rate across the system channels, ensuring that the channel load remains approximately uniform even in the worst-case scenario where specific pixel regions are highly active.
It is worth noting that the initial system design aimed to exploit the technological limits of GEM technology, maximizing photon statistics and spatial resolution, while ensuring stable operation. Operating in a high-radiation environment with a high-resolution readout requires the proper definition of GEM operational conditions (gas, HV, geometry, etc.). These parameters govern the size of the corresponding photon-electron cloud for the measured energy range, which directly dictates the extent of pixel-channel activation and the complexity of cluster identification algorithms. For the current development, the pixel spatial distribution is constrained by a minimum inter-pixel spacing of 75 µm due to manufacturing technology limitations. Simulations of cluster size were conducted during the initial development phase, covering the typical SXR energy range of 1.2 to 15 keV. With the trigger threshold set near the noise floor, a spatial cluster distribution of 3–5 pixels for low photon energies and 7–9 pixels for high photon energies was observed. Regarding system performance, there is significant room for optimization: the effective data load can be compressed by rejecting pixels with negligible signal contribution. By increasing the trigger level to discard pulses contributing less than 5% of the total photon energy, the cluster distribution is reduced to 2–3 pixels for low photon energies and 3–6 pixels for high energies. This paper presents the conceptual design of the acquisition and detection system, focusing on the critical architectural elements while referencing prior publications for specific details.
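Taking the midpoints of the quoted cluster-size ranges, the following sketch estimates the compression gained by raising the trigger threshold; the midpoint averaging is an assumption, and the real gain depends on the energy spectrum of the plasma.

```python
# Midpoints of the quoted cluster-size ranges (pixels per photon)
before = {"low": (3 + 5) / 2, "high": (7 + 9) / 2}   # threshold near the noise floor
after  = {"low": (2 + 3) / 2, "high": (3 + 6) / 2}   # pixels < 5% of photon energy rejected

for band in ("low", "high"):
    factor = before[band] / after[band]
    print(f"{band}-energy clusters shrink by ~{factor:.2f}x")
```

Under these assumptions the raised threshold shrinks clusters by roughly a factor of 1.6 to 1.8, directly reducing both the streamed data volume and the clustering workload.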
System performance is also dependent on the plasma duty cycle and instantaneous data rates. In existing fusion devices, plasma discharges typically last from seconds to a few minutes. While previous generations of these diagnostics demonstrated sufficient capabilities for their applications, the significant increase in channel count for the GEM3k system introduces new challenges. Therefore, it is essential to implement data compression at the preprocessing stage in hardware, specifically within the FPGA logic in the GEM3k design.
The acquisition stage must be particularly sensitive, as the typical charge generated by the photon-electron cloud is expected to lie in the range of hundreds of femtocoulombs to a few picocoulombs. Therefore, to verify the integrity of the entire signal acquisition path, it is essential to operate initially with raw data, as successfully demonstrated in the second-generation system deployed at the WEST tokamak [92,104]. However, operating in full raw data acquisition mode can quickly saturate the available bandwidth of communication links. To manage this and enable scalable operation, the acquisition process is progressively refined into more discrete analysis modes. These include:
  • Full raw signal acquisition (as the base implementation);
  • Automatic data quality monitoring [114,115,116];
  • Offset computation mode;
  • Charge Computation Mode (CCM).
It is expected that each of these stages will enhance overall system performance by increasing the effective processed event rate while simultaneously reducing the volume of transferred data and the computational burden on the DPU section. For this reason, the system’s performance boundary is assessed under the most demanding scenario: full raw data acquisition. This approach provides a clear margin for future optimization through algorithm refinement and implementation of other components, ensuring the most conservative estimation.
To ensure simplicity of implementation, COTS availability, modularity, and the use of common standards, 10 Gbps Ethernet links were selected. Typically, FPGA transceivers are organized into quad blocks, with each block containing four transceivers that share a common clocking domain. Therefore, the channels of each FPGA are distributed across four GBT blocks. With 256 acquisition channels per FPGA, this results in 64 channels per link to stream to the DPUs; each FPGA streams over four GBT links. Furthermore, this approach allows for future upgrades, as the infrastructure can also be based on the compatible 25 Gbps standard, depending on the Network Interface Cards (NICs) installed in the servers.
The primary objective is to support a photon event rate of approximately 2 Mevents/s per channel. In the proposed system, an event is defined as the electronic response to a photon, resulting in a Gaussian-shaped pulse produced by the current-to-voltage analog shaping circuit. The 50 MHz ADC sampling frequency was selected based on preliminary hardware performance validation (see Section 7). Using this sampling frequency, a reference time window of 32 samples per event, and 16-bit resolution for each sample, the duration of a single event can be expressed as:
event_ref = 20 ns × 32 samples = 640 ns.  (1)
Under the proposed ADC sampling, the time resolution is 20 ns. Therefore, we can discriminate the events, including those affected by the pile-up effect, which are separated in time by only 1 sample (a 40 ns gap).
The active signal, which contains information, is condensed into eight samples (based on the board design), where the rest is typically the noise floor shifted by the offset level. We call this mode “raw data acquisition”, and it is mostly used to verify the quality of the signals and analog input path (interferences, fluctuations, etc.) under certain system installations.
The value (1) is maintained as a reference baseline, and any refinement in pulse detection or data window optimization is expected to significantly increase throughput. Given this configuration, the theoretical maximum photon event rate per channel, assuming no pulse overlap and full utilization of the sampling window, is:
channel_perf_ref ≈ 1.6 Mevents/s/channel.
This calculation assumes continuous data streaming per channel, resulting in a bandwidth requirement of approximately 800 Mbps per channel. Consequently, streaming 64 channels, representing half of a single M-AFE board, would require a throughput of over 50 Gbps. A minor optimization, reducing the event window to 16 samples per event, doubles the channel performance, channel_perf, yielding 3.1 Mevents/s/channel at the same throughput requirement, which exceeds the required level. Therefore, the system design targets a maximum rate of 2 MHz events per channel, corresponding in the budget-optimized design to ~32 Gbps per 64 channels in raw signal acquisition mode. Scaling this down naively (by simply rejecting events above the limit) to a maximum of 9.5 Gbps (the maximum link output) would reduce the maximum input rate per channel, channel_perf, to ~660 kHz.
This is in fact a good result for a Diagnostic Acquisition Mode (DAM), which would be mostly used to verify the quality of the hardware design and perform validation with the isotopes. It could also be used during installation at tokamak sites to check for interferences and quality of the analog input signal.
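The raw-mode arithmetic above can be checked with a short calculation; all constants are taken from the text (50 MHz sampling, 16-bit samples, 64 channels per link group).

```python
# Raw-acquisition throughput arithmetic (reference and optimized windows).
SAMPLE_NS = 20      # 50 MHz ADC sampling period
BITS      = 16      # bits per sample
CHANNELS  = 64      # channels streamed per link group

def channel_perf(window_samples):
    """Max non-overlapping event rate and continuous-stream bandwidth."""
    rate_hz = 1e9 / (SAMPLE_NS * window_samples)
    bw_bps  = rate_hz * window_samples * BITS
    return rate_hz, bw_bps

rate32, bw32 = channel_perf(32)   # reference 32-sample window
rate16, bw16 = channel_perf(16)   # optimized 16-sample window

print(f"{rate32/1e6:.2f} Mevents/s at {bw32/1e6:.0f} Mbps per channel")
print(f"{rate16/1e6:.2f} Mevents/s at {bw16/1e6:.0f} Mbps per channel")
print(f"64-channel raw stream (32 samples): {bw32*CHANNELS/1e9:.1f} Gbps")
print(f"64 channels at 2 MHz, 16 samples: {2e6*16*BITS*CHANNELS/1e9:.1f} Gbps")
```

The resulting figures (1.56 Mevents/s and 800 Mbps per channel, 51.2 Gbps for 64 raw channels, and ~32.8 Gbps in the 2 MHz budget mode) match the values quoted above to within rounding.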
Turning to real-world scenarios, as mentioned earlier, the final CCM acquisition mode is the most optimized option. This mode produces a triplet of position, charge, and time (PQT), taking at most 64 bits per event (P—8 bits, Q—16 bits, T—48 bits). This mode introduces only per-channel latency, without affecting the bandwidth. In the most optimized version, it is possible to produce a PQT event every clock cycle, including piled-up signals.
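A minimal sketch of such packing is shown below. The field widths here are assumptions for illustration (8-bit position, 16-bit charge, 40-bit sample counter), chosen so that one event fits a single 64-bit word; the exact field split used in the firmware may differ.

```python
# Sketch: packing a position/charge/time (PQT) triplet into one 64-bit word.
# Assumed widths: P = 8 bits, Q = 16 bits, T = 40 bits (illustrative only).

def pack_pqt(pos, charge, t):
    assert pos < (1 << 8) and charge < (1 << 16) and t < (1 << 40)
    return (pos << 56) | (charge << 40) | t

def unpack_pqt(word):
    return word >> 56, (word >> 40) & 0xFFFF, word & ((1 << 40) - 1)

event = pack_pqt(pos=42, charge=1023, t=123_456_789)
assert unpack_pqt(event) == (42, 1023, 123_456_789)
print(f"0x{event:016x}")
```

Packing one event per output word is what lets the serializer emit a complete PQT record on every clock cycle.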
Since FPGAs operate on defined bus widths and clocks, the gigabit output uses a 64-bit bus clocked at 156 MHz. This gives an effective clock conversion ratio of 1:3 (relative to the sampling frequency), improving the real data consumption and distribution throughput.
To convert the large bus of 4096 bits (64 channels × 64 bits) of incoming events from the channels to 64-bit bus in time order, a specialized FPGA data serialization (DS) IPcore was implemented [117]. The performance of the DS unit is difficult to estimate, since it varies based on the occupancy ratio among the input channels during the consecutive clock cycles. Here, we consider the most intense scenario, where all the channels are activated simultaneously. The DS unit works in 64:1 conversion mode. We now introduce a parameter called time_gap, which defines the minimum interval between consecutive event registrations on a given channel. With this configuration, time_gap = 64 clock cycles for each channel with 1:1 clock ratio. In our case, however, the ratio is 3:1, meaning that the time_gap would be reduced to 21 clock cycles.
It can be computed that, using direct streaming, each channel can operate at a level of ~2.5 Mevents/s, which is above the required limit. This also saturates the 10 Gbps links at nearly their maximum level.
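The time_gap and per-channel rate figures follow directly from the clock ratio; a quick check under the stated clocks:

```python
# Worst-case service rate of the 64:1 data serialization (DS) unit,
# with every channel producing an event at every opportunity.
OUTPUT_CLK_HZ = 156e6   # 64-bit gigabit-side bus clock
SAMPLE_CLK_HZ = 50e6    # ADC sampling clock
N_CHANNELS    = 64

service_period_s = N_CHANNELS / OUTPUT_CLK_HZ          # one slot per channel
time_gap_cycles  = service_period_s * SAMPLE_CLK_HZ    # in sampling cycles
max_rate_hz      = OUTPUT_CLK_HZ / N_CHANNELS

print(f"time_gap ≈ {time_gap_cycles:.0f} sampling cycles")
print(f"max rate ≈ {max_rate_hz/1e6:.2f} Mevents/s/channel")
```

This reproduces the ~21-cycle time_gap and a sustained rate of ~2.44 Mevents/s per channel, consistent with the ~2.5 Mevents/s figure above.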
It is worth mentioning that the system could be improved in the future to support even 25 Gbps per link, resulting in a very high event registration rate. Another option is to add extra buffering, providing a small local cache in the form of memory or a First-In First-Out (FIFO) unit to temporarily absorb more intense data rates. However, the planned firmware design and testing will be the subject of another paper. The estimations provided here are based on prior experience, as well as on components that have already been designed and tested.
In the event of extremely high radiation fluxes, either the data-processing component or the GBT links will become saturated. As mentioned, a small cache can be implemented to mitigate this effect to some extent. Another option to prevent the GEM3k from losing data is to apply a pinhole or a shutter in front of the GEM detector window, proportionally reducing the radiation from the source until an acceptable level is achieved. The number of events per channel can also be reduced by optimizing the trigger level. In the worst-case scenario, where the rate for each channel is too high, the system will start rejecting data at some point. This would result in a drop in intensity, either locally or globally, in the final spectra. It is also possible that the cluster reconstruction will fail due to an incomplete list of activated pixels; such events are included in the rejection statistics.
The system should be capable of supporting a sustained throughput of 2.5 Mevents/s/channel, which is slightly better than the preliminary assumptions. Having established the data transmission links on the FPGA side, the next step is to optimize the hardware configuration of the DPU section. The DPU consists of standard COTS servers, selected for their cost-effectiveness. It is assumed that the system should be capable of registering data for between 30 s and 1 min under maximum load. Several critical factors must be considered:
  • Maximum number of required servers;
  • Number of gigabit ports per Network Interface Controller (NIC);
  • Generation of PCIe interface/bandwidth;
  • Number of CPUs and available PCIe slots;
  • Double Data Rate (DDR) memory type and performance;
  • Storage units.
Based on the previous analysis, each FPGA is configured to transmit data from 256 acquisition channels via four 10 Gbps links. Consequently, the NICs must support an equal or higher bandwidth. To this end, we selected the widely adopted Intel X520 network cards, which offer two onboard Small Form-Factor Pluggable (SFP+) ports and use the PCIe Gen3 standard with eight lanes. Two modules are required to support one FPGA. Modern server platforms, including cost-optimized ones, typically provide multiple PCIe slots. This enables additional flexibility in system design, allowing a single server to support multiple FPGAs.
In terms of data throughput, PCIe Gen3 x8 offers a theoretical bandwidth of approximately 62 Gbps. Therefore, a configuration using two ports should utilize only ~30% of the card’s performance. An additional advantage of these NICs is the availability of Software Development Kits (SDKs), which make software implementation more efficient and performant. Furthermore, popular server-grade NICs are often natively supported by advanced open-source High Performance Computing (HPC) frameworks, such as the Data Plane Development Kit (DPDK) or the Storage Performance Development Kit (SPDK), from which we can also benefit.
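The slot-utilization estimate can be reproduced as follows; the ~62–63 Gbps figure depends only on the Gen3 line rate and its 128b/130b encoding.

```python
# Usable PCIe Gen3 x8 bandwidth vs. two 10 Gbps SFP+ ports of one NIC.
LANES       = 8
GT_PER_LANE = 8e9          # Gen3 raw line rate per lane (8 GT/s)
ENCODING    = 128 / 130    # Gen3 128b/130b coding efficiency

usable_bps = LANES * GT_PER_LANE * ENCODING
ports_bps  = 2 * 10e9

print(f"PCIe Gen3 x8: {usable_bps/1e9:.0f} Gbps usable")
print(f"two SFP+ ports occupy {100*ports_bps/usable_bps:.0f}% of the slot")
```

This yields ~63 Gbps and ~32% occupancy, matching the quoted ~62 Gbps and ~30% figures to within rounding and protocol overhead.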
The next design step is to determine the maximum number of cards that can be installed, based on the motherboard and the number of CPUs. Following a market analysis, we selected a platform that provides 48 direct PCIe-CPU lanes, ensuring use of the full NIC bandwidth due to the absence of PCIe bridges or other intermediate components. These lanes should be approximately 20% occupied. It is essential to keep this parameter as low as possible to minimize system complexity and reduce the burden of implementing highly optimized software. Additionally, it is worth noting that integrating high-performance storage units also requires substantial bandwidth in the server configuration.
The next step is to select an appropriate type of RAM to support real-time data streaming. In the proposed configuration, data are transferred from the FPGA via 10 Gbps links to the NICs, which then forward them to RAM for temporary storage, maintaining high performance. The setup supports two operational modes: real-time data processing can be implemented as the next step, or the data can be forwarded to the storage units for offline analysis and computation. A comprehensive analysis was conducted on DDR memory speed classes ranging from PC-2400 to PC-4400, evaluating them based on the ratio of the available bandwidth to that required for a 256-channel data stream. The decision was made to adopt a platform that supports DDR4 since, based on the estimations, a 256-channel stream would utilize approximately 29.5% of the available transfer bandwidth. This leaves free bandwidth for storage implementation in such a configuration. Furthermore, DDR4 memory is both cost-effective and widely available, making it a suitable, budget-optimized choice that still meets the performance requirements.
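One plausible reading of the ~29.5% figure is sketched below, assuming that a single-channel DDR4-2133 interface absorbs one FPGA’s four 10 Gbps links; both the channel count and the speed grade are assumptions for illustration, not the exact platform configuration.

```python
# Assumed illustration: one FPGA stream (4 x 10 Gbps) into a single-channel
# 64-bit DDR4-2133 interface. Both figures are assumptions, not the exact
# platform configuration.
stream_gbs = 4 * 10e9 / 8 / 1e9      # 256-channel FPGA stream in GB/s
ddr_gbs    = 2133e6 * 8 / 1e9        # 64-bit channel at 2133 MT/s, in GB/s

print(f"{100 * stream_gbs / ddr_gbs:.1f}% of DDR4 bandwidth")
```

Under these assumptions the stream occupies ~29% of the memory bandwidth, close to the quoted 29.5%, leaving the remainder for storage traffic and processing.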
The final aspect of the server platform configuration relates to data storage. Traditional storage solutions, such as Hard Disk Drives (HDDs), Serial Attached SCSI (SAS) disks, or even Solid State Drives (SSDs), are insufficient to support real-time data storage due to their limited bandwidth. Although server platforms with Non-Volatile Memory Express (NVMe) backplanes offer much higher performance, they are too expensive, especially in our configuration, where multiple streams must be recorded simultaneously and thus multiple drives are required. However, consumer-level NVMe disks, which offer high storage capacity, are significantly cheaper and now widely available. A modern and cost-effective drive provides 1 TB of disk space, which is sufficient for our case. The interface is typically at least PCIe Gen3 x4, which suits our purposes, especially considering further stream reduction on the FPGA side. We selected the Samsung 980Pro, which also works with a PCIe Gen4 interface and can achieve speeds of up to 7000 MB/s. However, to maintain compatibility with budget-level servers, we will use the backwards-compatible PCIe Gen3 mode. This reduces throughput to around 2000 MB/s (~16 Gbps maximum). Preliminary hardware tests are underway, since NVMe drives are not typically designed to operate in high-streaming (continuous) mode, and the overall bandwidth may also depend on the internal cache construction. According to the previous calculations, supporting one FPGA with a throughput of ~2 MHz events per channel would require the links to stream 40 Gbps of data under peak conditions. This would require either at least four NVMe drives or a transition to a PCIe Gen4 server platform, both of which would significantly increase costs. Given these constraints and in pursuit of cost-efficiency, we have chosen to continue with budget Gen3-based servers. To boost storage bandwidth without significantly increasing costs, we propose using slot bifurcation.
The most popular method is to split PCIe links into two separate interfaces. This allows using cost-efficient NVMe adapters to install an additional drive per FPGA, thus securing the overall bandwidth (2 NVMe/slot per CPU) and doubling the bandwidth to an estimated maximum of approximately 32 Gbps per FPGA. This would represent a highly efficient result for such a cost- and performance-optimized system. However, it is essential to note that estimating the actual sustained bandwidth is inherently challenging. GEM detectors do not generate continuous signals, and the rate at which data are produced is highly dependent on particle type, event intensity, and geometrical distribution within the detector. As such, the calculations presented here are based on edge-case scenarios. Moving forward, we plan to gradually implement real-time data reduction and optimization mechanisms within the FPGAs.
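The storage arithmetic behind the bifurcation choice can be summarized using the ~2000 MB/s Gen3-mode sustained figure from the text:

```python
# Storage bandwidth: one vs. two consumer NVMe drives per FPGA (Gen3 mode).
nvme_gbps = 2000 * 8 / 1000     # ~2000 MB/s per drive -> 16 Gbps
peak_gbps = 40                  # per-FPGA peak stream at ~2 MHz/channel

for drives in (1, 2):
    total = drives * nvme_gbps
    print(f"{drives} drive(s): {total:.0f} Gbps, "
          f"{100*total/peak_gbps:.0f}% of peak demand")
```

Two bifurcated drives thus cover ~32 Gbps of the ~40 Gbps edge-case peak, which is acceptable once the planned in-FPGA data reduction lowers the sustained rate.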
By utilizing the server platform, we can also take advantage of standard Redundant Array of Independent Disks (RAID) functionality, which most servers support by default. Combined with standard HDDs, this makes the platform entirely sufficient for storing a limited number of raw data acquisitions. This allows us to retain several complete measurement sets, which are invaluable for offline algorithm development and system debugging, all at minimal additional cost.
Based on the calculations and system requirements described, Dell PowerEdge R730 servers were selected. These are dual CPU servers working with Intel Xeon E5 processors and supporting DDR4 memory, including 4 × 1 Gbps general-purpose interface, and KVM for remote control (keyboard, video, mouse). Each server node is equipped with a dedicated hardware configuration, as presented in the evaluation. For a single FPGA, the following are added:
  • 2× Intel X520-DA2 NIC with 2 SFP+ connectors each;
  • a dedicated Samsung 980Pro 1 TB NVMe drive installed in PCIe x4 slot adapter.
The selected configuration uses a single Intel Xeon E5 CPU and DDR4-3733 memory. The advantage of the selected platform is that all the mentioned components are directly linked to each CPU socket, eliminating the need for intermediate bridges. Therefore, it is possible to achieve maximum performance and data throughput. Following further tests, the system configuration was extended to include two NVMe drives for each FPGA, as previously outlined. The Dell R730 platform supports PCIe slot bifurcation without compromising the number of available slots. To leverage this capability fully, the configuration includes dedicated riser cards on the server platform, linking all necessary interfaces with the proper CPUs. Additionally, the Dell R730 platform offers RAID support with up to eight slots for HDDs, as well as compatibility with 12 Gbps SAS HDDs, which are primarily used for OS and non-critical data storage. While the SAS drives are not suitable for data streaming, they are well suited for post-acquisition data dumping and storage from the NVMe drives.
The full configuration supporting 2 FPGAs in parallel streaming consists of:
  • 4× Intel X520-DA2 NIC—8 SFP+ connections
  • 2× Samsung 980Pro 1 TB NVMe drives
  • 2× Intel Xeon E5-2620 v3 @ 2.40 GHz
  • 64 GB DDR4-3733 memory (32 GB per CPU)
  • 4 × 1 Gbps general-purpose interface
  • RAID array with an HDD SAS disk for data storage
  • KVM dedicated interface
The configured server platform supports two FPGA units simultaneously streaming data over all links, resulting in data acquisition of 512 measurement channels with an 80 Gbps stream. To achieve the full system configuration, six such servers were installed. As part of the support infrastructure, the setup includes a dedicated rack, a remotely controlled power strip ATEN PE8208G and a temporary general-purpose TP-Link TL-SG1218MPE switch with 18 gigabit links. The switch is planned to be replaced with a more advanced solution in future iterations, following extended system testing.
Therefore, the project budget is optimized by using dual-CPU motherboards, which allow each server to support two FPGA units with separate PCIe interface connections. The system is presented in Figure 10.
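The configuration described above implies the following system totals:

```python
# Full-system tally for the described installation.
SERVERS, FPGAS_PER_SERVER = 6, 2
CH_PER_FPGA, LINKS_PER_FPGA, LINK_GBPS = 256, 4, 10

fpgas    = SERVERS * FPGAS_PER_SERVER
channels = fpgas * CH_PER_FPGA
stream   = fpgas * LINKS_PER_FPGA * LINK_GBPS

print(f"{fpgas} FPGAs, {channels} channels, {stream} Gbps aggregate")
# -> 12 FPGAs, 3072 channels, 480 Gbps
```

The 3072-channel total matches the GEM detector readout board described in Section 7, and the 480 Gbps aggregate is the peak raw-mode demand the six servers must absorb.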
The presented configuration results in a complete environment based on standard COTS components. A key advantage of using Intel NIC boards is their mature, widely supported drivers and SDKs. These network cards are compatible with various Linux distributions and software frameworks such as DPDK and SPDK. This highlights the strategic benefit of selecting widely adopted, commercially available components.

7. Prototype Hardware Laboratory Tests

A key feature of the system is its ability to precisely extract energy information from the raw detector signals, accurately compose clusters, and following proper calibration, determine the photon energies and positions [118,119]. This method is unique for this type of diagnostic system, enhancing the quality of the acquired spectra.
The system is currently under continuous development and undergoing initial testing. At this stage, preliminary tests have focused on developing and validating the main components, including functionality and performance checks. The results of these initial tests are beyond the scope of this paper and are planned to be submitted in the future as a detailed report. Here, we briefly summarize the preliminary hardware testing efforts undertaken:
  • Development of dedicated 10 Gbps Ethernet-compatible IPcore FPGA streams;
  • Development of FPGA firmware at the control level;
  • Development of FPGA firmware, including ADC drivers, with an integrated logic analyzer for test-pattern verification;
  • Testing of hardware stream setup involving FPGAs with multiple 10 GbE links;
  • Server performance evaluations.
A crucial part of the preliminary testing was verifying that the selected ADC chips could accurately capture the signals produced by the GEM detector at the expected intensity levels. To achieve this, tests were conducted using the TI AFE5832 reference development board. The accompanying software enabled easy tuning of the ADC configuration, including an in-chip parametrizable analog stage. During testing, input signals emulating the GEM detector response were also fed into the TI AFE5832 input channels. With the software properly configured, we successfully acquired signals consisting of approximately 8 samples per event, with clear separation between pulses. The injected pulse frequency was set to 3 MHz, exceeding the system’s target requirements. An example of the acquired signal is shown in Figure 11. At a later stage, the 1D GEM detector was tested to verify the correctness of pulse registration using the isotope source 55Fe and the TI AFE5832 reference development board.

7.1. Laboratory Hardware Setup

The next stage involved laboratory testing of the prototype M-AFE and M-ADB boards, which form the core of the system. These boards were installed on the AFCKU board. The ADCs were clocked at 100 MHz to achieve a 50 MHz sampling frequency per channel. This higher clock frequency is necessary because the ADC internal architecture multiplexes two input signals into one ADC core.
Due to the complexity and high component density of the custom-designed M-AFE and M-ADB boards, which are intended for large-scale production, the primary focus was on verifying the first prototype units. The test setup included:
  • A 3072-channel GEM detector readout board;
  • One M-AFE board;
  • One M-ADB-FMC board;
  • One FPGA-AMC PSB (AFCKU) board;
  • Dedicated 2 m SAMTEC cabling.
The prototype setup, shown in Figure 12, includes all the elements mentioned above.
The setup was powered by a Rohde & Schwarz NGL202 laboratory power supply. Most of the hardware measurements were carried out using an oscilloscope and Xilinx Integrated Logic Analyzer (ILA) components. A key focus during testing was to verify the correct design and operation of all components on the boards. To this end, integrated firmware was used to validate the response of each element, as well as its communication and configuration. After validation, the boards underwent initial configuration, including offset adjustments, analog path calibration, and setup. Critical points along the signal path were measured to confirm correct operation. The results of these tests were used to revise the hardware project.

7.2. Analog Path Hardware Measurements

The system was controlled from a Linux computer. Thanks to the implementation of hardware calibration pulses, via capacitors directly installed on input channels, it was possible to verify the entire analog signal path, starting from the board connectors and extending up to the ADC input channels. This included the 2 m SAMTEC high-density cable used for signal transmission between the M-AFE and M-ADB in the final design. Figure 13 presents how the system behaved when the CAL_PULSE signal was triggered. The discharge of the capacitor generates the injected current. The system response, shown in the figure, was recorded directly at the analog input channel corresponding to the connected pixel (green plot). The yellow plot shows the signal measured at the ADC input channel after attenuation through the cable and complete signal path on the M-ADB. The observed voltage drop is approximately 70 mV, which is negligible.
It was crucial to generate and measure the correct signals on the oscilloscope in order to verify the correct reception of input channels at the FPGA. This verification process involved checking the design of the complex serialization interface within the FPGA logic. As the firmware is still in the prototype stage, the ILA components were implemented at the end of the ADC-FPGA interface in order to capture and visualize the data from the M-ADB board. To simplify testing, the CAL_ON signal was issued across all channels, allowing verification of all the channels after a few repetitions. The signals were tested with M-ADBs installed in FMC slots 1 and 2. An example output is shown in Figure 14.

7.3. Preliminary Boards Characterization

In order to evaluate the board parameters and the quality of the analog signal path, noise spectra were measured for both the M-ADB and M-AFE boards. The results are shown in Figure 15. The M-AFE board exhibits its highest peak at approximately −71 dB @ 1 MHz, with an average noise baseline of ≈ −90.5 dB. Similarly, the M-ADB board’s highest noise harmonic reaches about −64 dB @ 1.35 MHz, with an average noise level near −86.5 dB across a broad spectrum. These noise characteristics are important when working with low-amplitude analog signals, as they directly affect signal quality.
The final critical functional test focused on the gigabit links between the FPGA-AMC and RTMs. It was essential to verify the correct routing of signals relative to the FPGA model and the corresponding GTH quads. It was also important to assess the stability of the high-speed links, particularly given that the high-speed signals are bridged through RTM connectors before reaching the SFP connectors. To perform these checks, the FPGA was programmed with the reference Xilinx Integrated Bit Error Ratio Tester (IBERT) component mapped to the described configuration. Two test configurations were employed: local loopback and cable connections between SFP+ modules. Data transmission consisted of pseudorandom streams using a 31-bit Pseudo-Random Binary Sequence (PRBS-31) pattern, whose predictability enables proper error detection and testing of the Transmit-Receive (TX-RX) pairs. The tests monitored RX_PLL and TX_PLL locking, as well as BER values and detected errors over time. Initial tests with the transceivers at their default settings revealed an elevated BER, resulting in transmission errors. Therefore, it was necessary to tune the transceiver parameters, especially TX_DIFF_SWING, TXPOST, and TXPRE, via an automatic sweep in order to select optimal values. After parameter optimization, the system connection was tested continuously across the Quad SFP (QSFP) and SFP+ links (8 × 10 Gbps links per FPGA) for one hour. During the tests, approximately 5 × 10¹³ bits were transmitted with no detected errors, yielding a BER of 1.88 × 10⁻¹⁴, which is well within acceptable limits. We also verified the correctness of the full hardware setup and the developed boards during the tests. Various measurements, including offset levels and Fast Fourier Transform (FFT) spectra, revealed several areas for improvement, which have been incorporated into the new board revision.
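An error-free run bounds rather than measures the BER: with zero errors observed in N transmitted bits, the point estimate is at most 1/N (or ~3/N at 95% confidence, the "rule of three"). Taking the ~5 × 10¹³ bits reported:

```python
# Zero-error BER bound for the one-hour link test.
bits = 5.3e13    # approximate bit count from the one-hour run

print(f"BER bound: {1/bits:.2e} (point estimate)")
print(f"BER bound: {3/bits:.2e} (95% confidence, rule of three)")
```

The point estimate of ~1.9 × 10⁻¹⁴ is consistent with the quoted 1.88 × 10⁻¹⁴ figure for the tuned links.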
Preliminary FPGA firmware was developed primarily for functional hardware testing and component compatibility verification. Final tests will be conducted after the new board revision is completed. The current capabilities of the system are summarized in the next Section. A future paper detailing more comprehensive system tests is planned.

7.4. Laboratory Characterization Results

Based on the laboratory measurements performed, it was possible to validate the performance of the components to be used with the XYUV-structured GEM detector, as well as to characterize the main components important for the final measurements.
The measured noise parameters were obtained directly on the input channels, which, with a calibrated offset voltage of 1.02 V, exhibited an RMS noise of 150 µV and an average noise baseline of around −90.5 dB over a wide bandwidth. Since the calibration pulses reach the maximum input voltage level, it was possible to determine the signal-to-noise ratio (SNR), which is ~70 dB for this design.
The ADC used in the project is 12-bit; effectively, only half of the range, i.e., 2048 bins, is used for signal discrimination. To calculate the energy of a recorded particle, the signal is integrated over time; however, one could assume an ideal signal in the form of a Dirac-delta-shaped pulse with discriminated energy. To determine the energy resolution, two factors need to be included:
  • Observed energy range: The practical measurement range of the detector for the measured features (e.g., tungsten impurities at low energies, etc.) as determined by physicists. Once selected, this depends strictly on the selected high voltages and resulting gain of the detector.
  • Trigger level: Necessary to reject signals at near-noise levels, while narrowing the energy discrimination of low-level pulses.
Based on prior experience, the maximum useful energy range is up to 20 keV, which can be used as a reference for further calculations. Selecting a trigger level a reasonable 5 mV above the offset leaves 2000 bins for effective energy discrimination; thus, the minimum registered energy is 179 eV. Computing the energy spectra in 100 bins results in a 0.2 keV energy resolution across the detection range.
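The binning and SNR arithmetic can be summarized as follows. The ~0.5 V effective signal swing is an assumption inferred from the "half of the range" statement, used only to illustrate how a ~70 dB SNR follows from the 150 µV noise floor.

```python
import math

# Energy-binning and SNR arithmetic from the characterization.
E_MAX_KEV    = 20.0     # maximum useful energy range
SPECTRA_BINS = 100      # user-selected spectrum binning
ADC_BINS     = 2048     # effectively used half of the 12-bit range
NOISE_RMS_V  = 150e-6   # measured input-referred RMS noise
SIGNAL_V     = 0.512    # assumed effective signal swing (illustrative)

spec_res_kev   = E_MAX_KEV / SPECTRA_BINS
raw_ev_per_bin = 1e3 * E_MAX_KEV / ADC_BINS
snr_db         = 20 * math.log10(SIGNAL_V / NOISE_RMS_V)

print(f"spectrum resolution: {spec_res_kev:.1f} keV/bin")    # 0.2 keV
print(f"raw discrimination:  {raw_ev_per_bin:.1f} eV/bin")
print(f"SNR ≈ {snr_db:.0f} dB")                              # ≈ 70-71 dB
```

The raw per-bin granularity (~10 eV) is far finer than the 0.2 keV spectrum binning, so the spectrum resolution is set by the user binning rather than by the ADC.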
The time resolution is limited only by the user setting since, as stated in Section 5, the minimum time gap between two consecutive events needs to be only one sample, which equates to a spacing of 40 ns for the given sampling period, even for piled-up events. Furthermore, the composed spectra (e.g., energy, position, or counts as a function of time) are set by a user parameter. The value needs to be realistic in order to achieve a good statistical distribution. For plasma experiments such as those at the WEST tokamak, a 10 ms spectrum time resolution is provided. As elaborated in Section 4, acquisition is continuous for an established event rate.
The characterization of the analog input path measured during laboratory tests is summarized in Table 4.
The tests performed define the performance of the produced hardware in the most critical region of the system’s acquisition part, i.e., the analog input path. Further system parameters regarding the distribution of acquired data rely on 8 × 10 Gbps links from each FPGA board, each of which handles 256 measurement channels. Since the data path is implemented in a hardware description language, an effective bandwidth of ~90–95% for measurement data is achievable.

8. Conclusions

In this paper, we present a novel third-generation, multichannel, FPGA-based Soft X-ray diagnostic system utilizing GEM detectors. The system is designed to meet the stringent spatial and temporal resolution requirements, as well as the challenging environmental conditions of modern fusion plasma experiments.
This diagnostic system has been developed in response to the physics requirements of advanced fusion experiments, in which capturing fast-evolving, anisotropic radiation patterns is essential. Enabling high-resolution, two-dimensional toroidal imaging allows the system to address the need to study phenomena such as magnetic reconnection, runaway electron dynamics, and azimuthal asymmetries with sub-millisecond precision. The architecture of the GEM-based detector and readout electronics has been shaped by these scientific goals, particularly with regard to spatial resolution, channel count, signal amplification, and noise performance. The resulting platform provides a powerful and novel diagnostic tool for exploring the spatiotemporal behavior of radiation in both tokamak and stellarator plasmas. Integrating a custom low-gain GEM detector with high-density, low-noise readout electronics enables the system to achieve the spatial and temporal resolutions required for modern tokamak and stellarator research, while remaining robust in harsh radiation environments.
As the system design is still ongoing, the focus of this work has been on presenting the proposed architecture and highlighting its complexity and the key engineering challenges involved. One of the most demanding aspects is designing the high-density GEM detector readout board. The extremely large number of individual pixels, above 30,000, makes the whole system layout very complicated. This significantly constrains the layout, routing, and manufacturability of the board. To address these challenges, the entire signal chain and interconnects were carefully planned from the early design stages. In parallel, system cost optimization was pursued by balancing data processing performance against hardware limitations. Building on experience from designing two previous generations of SXR diagnostics developed for the JET and WEST tokamaks, and leveraging technological advances from the past decade, we propose a significantly enhanced architecture tailored to meet the requirements of next-generation plasma diagnostics. There was a strong emphasis on reusing standard COTS components to reduce overall system costs, facilitate long-term maintenance, and ensure ease of scalability.
To meet the demands of high channel density and compact system integration, highly integrated multichannel ADCs, with complex embedded analog paths, were employed. This significantly reduced the footprint of the signal boards. Modern Xilinx Ultrascale FPGAs were selected for their ability to handle a large number of high-speed serial data links, including both SERDES interfaces and multi-gigabit GTH interfaces. While retaining the unique feature of raw data streaming for offline analysis, the architecture also allows preprocessing algorithms to be integrated directly within the FPGA’s logic, enabling hybrid processing.
Preliminary FPGA firmware has been developed to validate the integrity of signal routing and the overall system functionality, with a particular focus on the FPGA and PCB interface design. A range of measurements and verification tests was conducted on prototype hardware, and the resulting feedback informed improvements for the next board revision. Upcoming stages include comprehensive hardware testing with the new acquisition boards and further FPGA firmware development to enable full data processing from the GEM detector.
A summary of the key system parameters, based on the current architecture and test results, is presented in Table 5.
As the system is still under active development, a series of publications is planned to cover the following topics in detail: hardware and performance evaluations; FPGA firmware development; data streaming and processing on the server platform; laboratory tests with SXR sources; and the development of algorithms for spectral analysis.

Author Contributions

Conceptualization, A.W. and G.K.; methodology, A.W. and G.K.; software, A.W.; validation, A.W.; formal analysis, G.K., M.C. and A.W.; investigation, G.K. and A.W.; resources, G.K. and A.W.; data curation, A.W.; writing—original draft preparation, A.W. and M.C.; writing—review and editing, G.K., M.C. and A.W.; visualization, A.W.; funding acquisition, M.C., G.K., and A.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the research project No. 2020/39/B/ST2/02681 financed by the National Science Center and also partially funded by the research project No. 1033/RND/4/2025 financed by the Warsaw University of Technology.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

This work has been partially carried out within the research project No. 2020/39/B/ST2/02681 financed by the National Science Center and also partially carried out within the research project No. 1033/RND/4/2025 financed by the Warsaw University of Technology. The authors would like to thank the Mesco company for providing an associated research license for the Ansys Electronics 2025 R1 software and for the technical support.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. MIT Energy Initiative. The Role of Fusion Energy in a Decarbonized Electricity System; MIT Energy Initiative: Cambridge, MA, USA, 2024. [Google Scholar]
  2. Shehabi, A.; Smith, S.; Sartor, D.; Richard, B.; Herrlin, M.; Koomey, J.; Masanet, E.; Horner, N.; Azevedo, I.; Lintner, W. United States Data Center Energy Usage Report; Lawrence Berkeley National Laboratory: Berkeley, CA, USA, 2016. [Google Scholar]
  3. Biel, W.; Albanese, R.; Ambrosino, R.; Ariola, M.; Berkel, M.V.; Bolshakova, I.; Brunner, K.J.; Cavazzana, R.; Cecconello, M.; Conroy, S.; et al. Diagnostics for Plasma Control—From ITER to DEMO. Fusion Eng. Des. 2019, 146, 465–472. [Google Scholar] [CrossRef]
  4. Saura, N.; Guirlet, R.; Koubiti, M.; Peyrusse, O.; Desgranges, C.; Mazzi, S.; Benkadda, S.; WEST Team. Machine Learning-Based Tungsten Spectroscopy Analysis in the WEST Tokamak. Phys. Plasmas 2025, 32, 083901. [Google Scholar] [CrossRef]
  5. Bucalossi, J.; Achard, J.; Agullo, O.; Alarcon, T.; Allegretti, L.; Ancher, H.; Antar, G.; Antusch, S.; Anzallo, V.; Arnas, C.; et al. Operating a Full Tungsten Actively Cooled Tokamak: Overview of WEST First Phase of Operation. Nucl. Fusion 2022, 62, 042007. [Google Scholar] [CrossRef]
  6. Bucalossi, J.; Ekedahl, A.; Team, W. WEST Full Tungsten Operation with an ITER Grade Divertor. Nucl. Fusion 2024, 64, 112022. [Google Scholar] [CrossRef]
  7. Garitta, S.; Batal, T.; Durif, A.; Firdaouss, M.; Missirlian, M.; Roche, H.; Testoni, P.; Tomarchio, V.; Richou, M. Thermal and Structural Analysis of JT-60SA Actively Cooled Divertor Target Submitted to High Heat Flux. Fusion Eng. Des. 2024, 199, 114133. [Google Scholar] [CrossRef]
  8. Rubino, G.; Calabrò, G.; Wischmeier, M. Assessment of Scrape-Off Layer and Divertor Plasma Conditions in JT-60SA with Tungsten Wall and Nitrogen Injection. Nucl. Mater. Energy 2021, 26, 100895. [Google Scholar] [CrossRef]
  9. Ongena, J.; Koch, R.; Wolf, R.; Zohm, H. Magnetic-Confinement Fusion. Nat. Phys. 2016, 12, 398–410. [Google Scholar] [CrossRef]
  10. Shimomura, Y.; Spears, W. Review of the ITER Project. IEEE Trans. Appl. Supercond. 2004, 14, 1369–1375. [Google Scholar] [CrossRef]
  11. Jardin, A.; Bielecki, J.; Dąbrowski, W.; Drozdowicz, K.; Dworak, D.; Gerenton, V.; Guibert, D.; Kantor, R.; Król, K.; Kulińska, A.; et al. Energy-Resolved x-Ray and Neutron Diagnostics in Tokamaks: Prospect for Plasma Parameters Determination. Phys. Plasmas 2024, 31, 082514. [Google Scholar] [CrossRef]
  12. Bando, T.; Matsunaga, G.; Takechi, M.; Isayama, A.; Oyama, N.; Inoue, S.; Yoshida, M.; Wakatsuki, T. Experimental Observations of an N=1 Helical Core Accompanied by a Saturated m/N=2/1 Tearing Mode with Low Mode Frequencies in JT-60U. Plasma Phys. Control. Fusion 2019, 61, 115014. [Google Scholar] [CrossRef]
  13. Salas-Suárez-Bárcena, J.; Delgado-Aparicio, L.F.; Segado-Fernández, J.; Rodríguez-González, A.; McKay, K.A.; Cruz-Zabala, D.J.; Hidalgo-Salaverri, J.; García-Domínguez, J.; García-Muñoz, M.; Viezzer, E.; et al. Radiated Power and Soft X-Ray Diagnostics in the SMART Tokamak. Rev. Sci. Instrum. 2024, 95, 093523. [Google Scholar] [CrossRef] [PubMed]
  14. Chernyshova, M.; Mazon, D.; Malinowski, K.; Czarski, T.; Ivanova-Stanik, I.; Jabłoński, S.; Wojeński, A.; Kowalska-Strzęciwilk, E.; Poźniak, K.T.; Malard, P.; et al. First Exploitation Results of Recently Developed SXR GEM-Based Diagnostics at the WEST Project. Nucl. Mater. Energy 2020, 25, 100850. [Google Scholar] [CrossRef]
  15. Brandt, C.; Schilling, J.; Thomsen, H.; Broszat, T.; Laube, R.; Schröder, T.; Andreeva, T.; Beurskens, M.N.A.; Bozhenkov, S.A.; Brunner, K.J.; et al. Soft X-Ray Tomography Measurements in the Wendelstein 7-X Stellarator. Plasma Phys. Control. Fusion 2020, 62, 035010. [Google Scholar] [CrossRef]
  16. Donaldson, T.P. Theory of Foil-Absorption Techniques for Plasma X-Ray Continuum Measurements. Plasma Phys. 1978, 20, 1279. [Google Scholar] [CrossRef]
  17. Celora, A.; Caruggi, F.; Putignano, O.; Cancelli, S.; Claps, G.; Cordella, F.; Garzotti, L.; Gorini, G.; Grosso, G.; Guiotto, F. Assessment of a Space and Energy Resolved Diagnostic Based on GEM Technology on MAST-U. Meas. Sci. Technol. 2024, 36, 016019. [Google Scholar] [CrossRef]
  18. Wesson, J. Tokamaks, 3rd ed.; Oxford University Press: Oxford, UK, 2004. [Google Scholar]
  19. Zhang, L.; Morita, S.; Xu, Z.; Zhang, P.F.; Zang, Q.; Duan, Y.M.; Liu, H.Q.; Zhao, H.L.; Ding, F.; Ohishi, T.; et al. Suppression of Tungsten Accumulation during ELMy H-Mode by Lower Hybrid Wave Heating in the EAST Tokamak. Nucl. Mater. Energy 2017, 12, 774–778. [Google Scholar] [CrossRef]
  20. Neu, R.; Brezinsek, S.; Beurskens, M.; Bobkov, V.; de Vries, P.; Giroud, C. Tungsten Experiences in ASDEX Upgrade and JET. In Proceedings of the 2013 IEEE 25th Symposium on Fusion Engineering (SOFE), San Francisco, CA, USA, 10–14 June 2013; pp. 1–8. [Google Scholar]
  21. de Vries, P.C.; Arnoux, G.; Huber, A.; Flanagan, J.; Lehnen, M.; Riccardo, V.; Reux, C.; Jachmich, S.; Lowry, C.; Calabro, G.; et al. The Impact of the ITER-like Wall at JET on Disruptions. Plasma Phys. Control. Fusion 2012, 54, 124032. [Google Scholar] [CrossRef]
  22. Zhu, D.; Guo, Z.; Xuan, C.; Yu, B.; Li, C.; Gao, B.; Ding, R.; Yan, R.; Wang, Y.; He, C.; et al. In Situ Melting Phenomena on W Plasma-Facing Components for Lower Divertor during Long-Pulse Plasma Operations in EAST. Nucl. Fusion 2023, 63, 036022. [Google Scholar] [CrossRef]
  23. Moazzemi-Ghamsari, M.; Torkiha, M.; Sadeghi, Y.; Rasouli, C.; Pourshahab, B. Soft X-Ray Tomography Using the Optimized Regularization Method in Alvand Tokamak. Fusion Eng. Des. 2023, 196, 113993. [Google Scholar] [CrossRef]
  24. Sano, R.; Homma, H.; Takechi, M.; Nakano, T. Soft X-Ray Diagnostics System for Electron Temperature Measurement in the Integrated Commissioning Phase of JT-60SA. Rev. Sci. Instrum. 2024, 95, 073532. [Google Scholar] [CrossRef]
  25. Palomba, S.; Belpane, A.; D’Agostino, V.; Gabellieri, L.; Marinelli, M.; Murari, A.; Peluso, E.; Verona, C.; Verona-Rinati, G.; Bombarda, F. The Conceptual Design of the Soft X-Ray Tomography for the DTT. Fusion Eng. Des. 2025, 219, 115269. [Google Scholar] [CrossRef]
  26. Chen, Z.Y.; Zhang, Y.; Zhang, X.Q.; Luo, Y.H.; Jin, W.; Li, J.C.; Chen, Z.P.; Wang, Z.J.; Yang, Z.J.; Zhuang, G. Note: Measurement of the Runaway Electrons in the J-TEXT Tokamak. Rev. Sci. Instrum. 2012, 83, 056108. [Google Scholar] [CrossRef] [PubMed]
  27. Linder, O. Self-Consistent Modeling of Electron Runaway in Tokamak Disruptions. PhD Thesis, Technische Universität München, Munich, Germany, 2021. [Google Scholar]
  28. Yan, W.; Zou, G.; Chen, Z.; Li, Y.; Fang, J.; Lin, Z.; Jiang, Z.; Wang, N.; Rao, B.; Li, Y. Influence of 3D Helical Magnetic Perturbations on Runaway Electron Generation in J-TEXT Tokamak. Plasma Sci. Technol. 2025, 27, 035104. [Google Scholar] [CrossRef]
  29. Delgado-Aparicio, L.F.; VanMeter, P.; Barbui, T.; Chellai, O.; Wallace, J.; Yamazaki, H.; Kojima, S.; Almagari, A.F.; Hurst, N.C.; Chapman, B.E.; et al. Multi-Energy Reconstructions, Central Electron Temperature Measurements, and Early Detection of the Birth and Growth of Runaway Electrons Using a Versatile Soft x-Ray Pinhole Camera at MST. Rev. Sci. Instrum. 2021, 92, 073502. [Google Scholar] [CrossRef]
  30. Séguin, F.H.; Petrasso, R.D.; Li, C.K. Radiation-Hardened x-Ray Imaging for Burning-Plasma Tokamaks. Rev. Sci. Instrum. 1997, 68, 753–756. [Google Scholar] [CrossRef]
  31. Ramsey, A.T. D-T Radiation Effects on TFTR Diagnostics (Invited). Rev. Sci. Instrum. 1995, 66, 871–876. [Google Scholar] [CrossRef]
  32. Vayakis, G.; Hodgson, E.R.; Voitsenya, V.; Walker, C.I. Chapter 12: Generic Diagnostic Issues for a Burning Plasma Experiment. Fusion Sci. Technol. 2008, 53, 699–750. [Google Scholar] [CrossRef]
  33. Hu, L.; Chen, K.; Chen, Y.; Cao, H.; Li, S.; Yu, H.; Zhan, J.; Shen, J.; Qin, S.; Sheng, X.; et al. Preliminary Design and R&D of ITER Diagnostic-Radial X-Ray Camera. Nucl. Instrum. Methods Phys. Res. Sect. A 2017, 870, 50–54. [Google Scholar] [CrossRef]
  34. Chen, K.; Hu, L.; Yu, H.; Cao, H.; Li, S.; Zhang, J.; Sheng, X.; Zhao, J.; Niu, L.; Zhang, Z.; et al. Manufacture Study of ITER Radial X-Ray Camera. Fusion Eng. Des. 2025, 215, 114895. [Google Scholar] [CrossRef]
  35. Chen, K.; Hu, L.; Yu, H.; Cao, H.; Li, S.; Zhang, J.; Sheng, X.; Zhao, J.; Niu, L.; Li, C.; et al. Progress on Final Design of ITER Radial X-Ray Camera. Fusion Eng. Des. 2021, 165, 112234. [Google Scholar] [CrossRef]
  36. Sauli, F. The Gas Electron Multiplier (GEM): Operating Principles and Applications. Nucl. Instrum. Methods Phys. Res. Sect. A 2016, 805, 2–24. [Google Scholar] [CrossRef]
  37. Chernyshova, M.; Malinowski, K.; Jabłoński, S.; Melikhov, Y.; Wojeński, A.; Kasprowicz, G.; Fornal, T.; Imríšek, M.; Jaulmes, F.; Weinzettl, V. 2D GEM-Based SXR Imaging Diagnostics for Plasma Radiation: Preliminary Design and Simulations. Nucl. Mater. Energy 2022, 33, 101306. [Google Scholar] [CrossRef]
  38. Chernyshova, M.; Dobrut, M.; Jablonski, S.; Malinowski, K.; Fornal, T. Multi-Chamber GEM-Based Concept of Radiated Power/SXR Measurement System for Use in High Radiation Environment of DEMO. J. Instrum. 2022, 17, C05013. [Google Scholar] [CrossRef]
  39. Pacella, D.; Pizzicaroli, G.; Gabellieri, L.; Leigheb, M.; Bellazzini, R.; Brez, A.; Gariano, G.; Latronico, L.; Lumb, N.; Spandre, G.; et al. Ultrafast Soft X-Ray Two-Dimensional Plasma Imaging System Based on Gas Electron Multiplier Detector with Pixel Readout. Rev. Sci. Instrum. 2001, 72, 1372–1378. [Google Scholar] [CrossRef]
  40. Mazon, D.; Chernyshova, M.; Jardin, A.; Peysson, Y.; Król, K.; Malard, P.; Czarski, T.; Wojeński, A.; Malinowski, K.; Colette, D.; et al. First GEM Measurements at WEST and Perspectives for Fast Electrons and Heavy Impurities Transport Studies in Tokamaks. J. Instrum. 2022, 17, C01073. [Google Scholar] [CrossRef]
  41. Gott, Y.V.; Stepanenko, M.M. A Low-Voltage Ionization Chamber for the ITER. Instrum. Exp. Tech. 2009, 52, 260–264. [Google Scholar] [CrossRef]
  42. Colette, D.; Mazon, D.; Barnsley, R.; Sirinelli, A.; Jardin, A.; O’Mullane, M.; Walsh, M. Modeling a Low Voltage Ionization Chamber-Based Tomography System on ITER. Rev. Sci. Instrum. 2020, 91, 073504. [Google Scholar] [CrossRef] [PubMed]
  43. Mazon, D.; Colette, D.; Soudet, E.; Malard, P.; Walsh, M.; Moreau, M.; Jardin, A. Using Low Voltage Ionization Chamber (LVIC) in Current Mode for Energy Spectrum Reconstruction: Experiments and Validation. Rev. Sci. Instrum. 2022, 93, 113544. [Google Scholar] [CrossRef]
  44. Sauli, F. Development and Applications of Gas Electron Multiplier Detectors. Nucl. Instrum. Methods Phys. Res. Sect. A 2003, 505, 195–198. [Google Scholar] [CrossRef]
  45. Murtas, F. Applications of Triple GEM Detectors beyond Particle and Nuclear Physics. J. Instrum. 2014, 9, C01058. [Google Scholar] [CrossRef]
  46. Ketzer, B.; Altunbas, M.C.; Dehmelt, K.; Ehlers, J.; Friedrich, J.; Grube, B. Triple GEM Tracking Detectors for COMPASS. IEEE Trans. Nucl. Sci. 2002, 49, 2403–2410. [Google Scholar] [CrossRef]
  47. Bachmann, S.; Bressan, A.; Ropelewski, L.; Sauli, F.; Mörmann, D. Recent Progress in GEM Manufacturing and Operation. Nucl. Instrum. Methods Phys. Res. Sect. A 1999, 433, 464–470. [Google Scholar] [CrossRef]
  48. Greene, S.V.; Velkovska, J.; Blankenship, B.; Reynolds, M.Z.; Tarafdar, S. Effective Gain and Ion Back Flow Study of Triple and Quadruple GEM Detector. J. Instrum. 2022, 17, T12004. [Google Scholar] [CrossRef]
  49. Roy, P.; Bhattacharya, P.; Rout, P.K.; Mukhopadhyay, S.; Majumdar, N.; Sarkar, S. Effect of Hole Geometry on Charge Sharing and Other Parameters in GEM-Based Detectors. J. Instrum. 2022, 17, P03016. [Google Scholar] [CrossRef]
  50. Chernyshova, M.; Malinowski, K.; Melikhov, Y.; Kowalska-Strzęciwilk, E.; Czarski, T.; Wojeński, A.; Linczuk, P.; Krawczyk, R. Study of the Optimal Configuration for a Gas Electron Multiplier Aimed at Plasma Impurity Radiation Monitoring. Fusion Eng. Des. 2018, 136, 592–596. [Google Scholar] [CrossRef]
  51. Fujiwara, T.; Mitsuya, Y.; Toyokawa, H. Fine-Pitch Glass GEM for High-Resolution X-Ray Imaging. J. Instrum. 2016, 11, C12050. [Google Scholar] [CrossRef]
  52. Chernyshova, M.; Malinowski, K.; Czarski, T.; Kowalska-Strzęciwilk, E.; Linczuk, P.; Wojeński, A.; Krawczyk, R.; Melikhov, Y. Advantages of Al Based GEM Detector Aimed at Plasma Soft-Semi Hard X-Ray Radiation Imaging. Fusion Eng. Des. 2019, 146, 1039–1042. [Google Scholar] [CrossRef]
  53. Caruggi, F.; Cancelli, S.; Celora, A.; Guiotto, F.; Croci, G.; Tardocchi, M.; Murtas, F.; de Oliveira, R.; Perelli Cippo, E.; Gorini, G.; et al. Performance of a Triple GEM Detector Equipped with Al-GEM Foils for X-Rays Detection. Nucl. Instrum. Methods Phys. Res. Sect. A 2023, 1047, 167855. [Google Scholar] [CrossRef]
  54. Mindur, B.; Fiutowski, T.; Koperny, S.; Wiącek, P.; Dąbrowski, W. Investigation of Copper-Less Gas Electron Multiplier Detectors Responses to Soft X-Rays. Sensors 2020, 20, 2784. [Google Scholar] [CrossRef]
  55. Dąbrowski, W.; Fiutowski, T.; Frączek, P.; Koperny, S.; Lankosz, M.; Mendys, A.; Mindur, B.; Świentek, K.; Wiącek, P.; Wróbel, P.M. Application of GEM-Based Detectors in Full-Field XRF Imaging. J. Instrum. 2016, 11, C12025. [Google Scholar] [CrossRef]
  56. Chernyshova, M.; Malinowski, K.; Czarski, T.; Demchenko, I.N.; Melikhov, Y.; Kowalska-Strzęciwilk, E.; Wojeński, A.; Krawczyk, R.D. Effect of Charging-up and Regular Usage on Performance of the Triple GEM Detector to Be Employed for Plasma Radiation Monitoring. Fusion Eng. Des. 2020, 158, 111755. [Google Scholar] [CrossRef]
  57. Corbetta, M.; Guida, R.; Mandelli, B. Studies on Triple-GEM Detectors in High Radiation Environment with Gas Recirculation. In Proceedings of the IEEE Nuclear Science Symposium and Medical Imaging Conference, Boston, MA, USA, 31 October–7 November 2020; pp. 1–3. [Google Scholar]
  58. Fujiwara, T.; Mitsuya, Y.; Yanagida, T.; Saito, T.; Toyokawa, H.; Takahashi, H. High-Photon-Yield Scintillation Detector with Ar/CF4 and Glass Gas Electron Multiplier. Jpn. J. Appl. Phys. 2016, 55, 106401. [Google Scholar] [CrossRef]
  59. Cao, L.-J.; Li, Y.-L.; Lai, Y.-F.; Li, J.; Li, Y.-J. Study of Gas Properties for GEM-Based TPC. Chin. Phys. C 2007, 31, 475–480. [Google Scholar] [CrossRef]
  60. Lee, C.S.; Ota, S.; Tokieda, H.; Kojima, R.; Watanabe, Y.N.; Uesaka, T. Properties of Thick GEM in Low-Pressure Deuterium. J. Instrum. 2014, 9, C05014. [Google Scholar] [CrossRef]
  61. Chernyshova, M.; Czarski, T.; Malinowski, K.; Kowalska-Strzęciwilk, E.; Król, J.; Poźniak, K.T.; Kasprowicz, G.; Zabołotny, W.; Wojeński, A.; Krawczyk, A.D.; et al. Development of GEM Detector for Tokamak SXR Tomography System: Preliminary Laboratory Tests. Fusion Eng. Des. 2017, 123, 877–881. [Google Scholar] [CrossRef]
  62. Cancelli, S.; Alimagno, H.; Muraro, A.; Perelli Cippo, E.; Caruggi, F.; Grosso, G.; Gorini, G.; Kushoro, M.H.; Marcer, G.; Putignano, O.; et al. Characterisation of N2-GEM: A Beam Monitor Based on Ar-N2 Gas Mixture. J. Instrum. 2023, 18, C05005. [Google Scholar] [CrossRef]
  63. Abi Akl, M.; Bouhali, O.; Castaneda, A.; Maghrbi, Y.; Mohamed, T. Uniformity Studies in Large Area Triple-GEM Based Detectors. Nucl. Instrum. Methods Phys. Res. Sect. A 2016, 832, 1–7. [Google Scholar] [CrossRef]
  64. Gnanvo, K.; Liyanage, N.; Nelyubin, V.; Saenboonruang, K.; Sacher, S. Large Size GEM for Super Bigbite Spectrometer (SBS) Polarimeter for Hall A 12 GeV Program at JLab. Nucl. Instrum. Methods Phys. Res. Sect. A 2015, 782, 77–86. [Google Scholar] [CrossRef]
  65. Azmoun, B.; Aune, S.; Dehmelt, K.; Deshpande, A.; Fan, W.; Garg, P.; Hemmick, T.K.; Kebbiri, M.; Kiselev, A.; Mandjavidze, I.; et al. Design Studies of High-Resolution Readout Planes Using Zigzags With GEM Detectors. IEEE Trans. Nucl. Sci. 2020, 67, 1869–1876. [Google Scholar] [CrossRef]
  66. Bachmann, S.; Kappler, S.; Kappler, S.; Ketzer, B.; Müller, T.; Ropelewski, L.; Sauli, F.; Schulte, E. High Rate X-Ray Imaging Using Multi-GEM Detectors with a Novel Readout Design. Nucl. Instrum. Methods Phys. Res. Sect. A 2002, 478, 104–108. [Google Scholar] [CrossRef]
  67. Amoroso, A.; Baldini Ferroli, R.; Balossino, I.; Bertani, M.; Bettoni, D.; Bianchi, F.; Bortone, A.; Bugalho, R.; Calcaterra, A.; Cerioni, S.; et al. The CGEM-IT Readout Chain. J. Instrum. 2021, 16, P08065. [Google Scholar] [CrossRef]
  68. Caruggi, F.; Celora, A.; Cancelli, S.; Gorini, G.; Grosso, G.; Guiotto, F.; Muraro, A.; Perelli Cippo, E.; Petruzzo, M.; Putignano, O.; et al. Development of a Triple-GEM Detector with Strip Readout and GEMINI Chip for X Rays and Neutron Imaging. J. Instrum. 2024, 19, C02015. [Google Scholar] [CrossRef]
  69. Bressan, A.; Oliveira, R.; Gandi, A.; Labbé, J.; Ropelewski, L.; Sauli, F.; Mörmann, D.; Müller, T.; Simonis, H. Two-Dimensional Readout of GEM Detectors. Nucl. Instrum. Methods Phys. Res. Sect. A 1999, 425, 254–261. [Google Scholar] [CrossRef]
  70. Rittirong, A.; Saenboonruang, K. Gains, Uniformity and Signal Sharing in XY Readouts of the 10 cm × 10 cm Gas Electron Multiplier (GEM) Detector. J. Phys. Sci. 2018, 29, 121–132. [Google Scholar] [CrossRef]
  71. Chernyshova, M.; Czarski, T.; Malinowski, K.; Melikhov, Y.; Kasprowicz, G.; Kowalska-Strzęciwilk, E.; Linczuk, P.; Wojeński, A.; Krawczyk, R.D. 2D GEM Based Imaging Detector Readout Capabilities from Perspective of Intense Soft X-Ray Plasma Radiation. Rev. Sci. Instrum. 2018, 89, 10G106. [Google Scholar] [CrossRef] [PubMed]
  72. Chernyshova, M.; Malinowski, K.; Jabłoński, S.; Casiraghi, I.; Demchenko, I.N.; Melikhov, Y. Development of 2D GEM-Based SXR Plasma Imaging for DTT Device: Focus on Readout Structure. Fusion Eng. Des. 2021, 169, 112443. [Google Scholar] [CrossRef]
  73. Malinowski, K.; Chernyshova, M.; Jabłoński, S.; Casiraghi, I. Optimization of GEM-Based Detector Readout Electrode Structure for SXR Imaging of Tokamak Plasma. J. Instrum. 2021, 16, C11014. [Google Scholar] [CrossRef]
  74. Alonso, A.; Alegre, D.; García Alonso, J.; Antón, R.; Arias-Camisón, A.; Ascasíbar, E.; Baciero, A.; Barcala, J.M.; Barnes, M.; Blanco, E.; et al. Density Profiles in Stellarators: An Overview of Particle Transport, Fuelling and Profile Shaping Studies at TJ-II. Nucl. Fusion 2024, 64, 112018. [Google Scholar] [CrossRef]
  75. McCarthy, K.J.; Team, T.-I. Plasma Diagnostic Systems and Methods Used on the Stellarator TJ-II. J. Instrum. 2021, 16, C12026. [Google Scholar] [CrossRef]
  76. Botrugno, A.; Gabellieri, L.; Mazon, D.; Pacella, D.; Romano, A. Soft X-Ray Measurements in Magnetic Fusion Plasma Physics. Nucl. Instrum. Methods Phys. Res. Sect. A 2010, 623, 747–749. [Google Scholar] [CrossRef]
  77. Mikszuta-Michalik, K.; Imríšek, M.; Svoboda, J.; Weinzettl, V.; Bílková, P.; Hron, M.; Pánek, R. Concept of the Bolometry Diagnostics Design for COMPASS-Upgrade. Fusion Eng. Des. 2021, 168, 112421. [Google Scholar] [CrossRef]
  78. Odstrcil, M.; Mlynar, J.; Odstrcil, T.; Alper, B.; Murari, A. Modern Numerical Methods for Plasma Tomography Optimisation. Nucl. Instrum. Methods Phys. Res. Sect. A 2012, 686, 156–161. [Google Scholar] [CrossRef]
  79. Mikszuta-Michalik, K.; Imrisek, M.; Esposito, B.; Marocco, D.; Mlynar, J.; Ficker, O. A Total Neutron Yield Constraint Implemented to the RNC Emissivity Reconstruction on ITER Tokamak. Fusion Eng. Des. 2020, 160, 111840. [Google Scholar] [CrossRef]
  80. Hansen, P.C. REGULARIZATION TOOLS: A Matlab Package for Analysis and Solution of Discrete Ill-Posed Problems. Numer. Algorithms 1994, 6, 1–35. [Google Scholar] [CrossRef]
  81. Bleyer, I.R. Novel Regularization Methods for Ill-Posed Problems in Hilbert and Banach Spaces; Colóquios Brasileiros de Matemática; Associação Instituto Nacional de Matemática Pura e Aplicada: Rio de Janeiro, Brazil, 2021. [Google Scholar]
  82. Pacella, D.; Romano, A.; Gabellieri, L.; Murtas, F.; Mazon, D. GEM Gas Detectors for Soft X-Ray Imaging in Fusion Devices with Neutron–Gamma Background. Nucl. Instrum. Methods Phys. Res. Sect. A 2013, 720, 53–57. [Google Scholar] [CrossRef]
  83. Pacella, D.; Romano, A.; Gabellieri, L.; Causa, F.; Murtas, F.; Claps, G.; Choe, W.; Lee, S.H.; Jang, S.; Jang, J.; et al. X-Ray Diagnostic Developments in the Perspective of DEMO; Villa Monastero: Varenna, Italy, 2014; pp. 23–30. [Google Scholar]
  84. Claps, G.; Pacella, D.; Murtas, F.; Jakubowska, K.; Boutoux, G.; Burgy, F.; Ducret, J.E.; Batani, D. The GEMpix Detector as New Soft X-Rays Diagnostic Tool for Laser Produced Plasmas. Rev. Sci. Instrum. 2016, 87, 103505. [Google Scholar] [CrossRef] [PubMed]
  85. Arena, P.; Aubert, J.; Aiello, G.; Boullon, R.; Jaboulay, J.C.; Di Maio, P.A.; Morin, A.; Rampal, G. Thermal Optimization of the Helium-Cooled Lithium Lead Breeding Zone Layout Design Regarding TBR Enhancement. Fusion Eng. Des. 2017, 124, 827–831. [Google Scholar] [CrossRef]
  86. Roy, P.; Rout, P.K.; Datta, J.; Bhattacharya, P.; Mukhopadhyay, S.; Majumdar, N.; Sarkar, S. Study of Space Charge Phenomena in GEM-Based Detectors. Nucl. Instrum. Methods Phys. Res. Sect. A 2023, 1047, 167838. [Google Scholar] [CrossRef]
  87. Malinowski, K.; Chernyshova, M.; Jabłoński, S.; Czarski, T.; Wojeński, A.; Kasprowicz, G. Two-Dimensional Plasma Soft X-Ray Radiation Imaging System: Optimization of Amplification Stage Based on Gas Electron Multiplier Technology. Sensors 2024, 24, 5113. [Google Scholar] [CrossRef]
  88. Tikhonov, V.; Veenhof, R. GEM Simulation Methods Development. Nucl. Instrum. Methods Phys. Res. Sect. A 2002, 478, 452–459. [Google Scholar] [CrossRef]
  89. Malinowski, K.; Chernyshova, M.; Czarski, T.; Kowalska-Strzęciwilk, E.; Linczuk, P.; Wojeński, A. Basics of Numerical Simulations of the Signals from GEM Detector. In Proceedings of the SPIE Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments, Wilga, Poland, 6 November 2019; Volume 11176, p. 111764G. [Google Scholar]
  90. Malinowski, K.; Chernyshova, M.; Czarski, T.; Kowalska-Strzęciwilk, E.; Linczuk, P.; Wojeński, A. Simulations of X-Ray Conversion into Primary Electrons in GEM-Based Detector. J. Instrum. 2020, 15, C02030. [Google Scholar] [CrossRef]
  91. Jagielski, M.; Malinowski, K.; Chernyshova, M. Hybrid Garfield++ Simulations of GEM Detectors for Tokamak Plasma Radiation Monitoring. Fusion Eng. Des. 2023, 195, 113970. [Google Scholar] [CrossRef]
  92. Wojeński, A.; Linczuk, P.; Kasprowicz, G.H. Multichannel Gas Electron Multiplier Based Soft X-Ray Field-Programmable Gate Array Measurement System for W-Environment in Steady-State Tokamak (WEST): Hardware, Installation, and First Plasma Acquisition. Rev. Sci. Instrum. 2021, 92, 054704. [Google Scholar] [CrossRef] [PubMed]
  93. Muller, H. SRS Systems from APV to VMM; CERN: Geneva, Switzerland, 14 December 2017. [Google Scholar]
  94. French, M.J.; et al. Design and Results from the APV25, a Deep Sub-Micron CMOS Front-End Chip for the CMS Tracker. Nucl. Instrum. Methods Phys. Res. Sect. A 2001, 466, 359–365. [Google Scholar] [CrossRef]
  95. Scharenberg, L. Development of a High-Rate Scalable Readout System for Gaseous Detectors. J. Instrum. 2022, 17, C12014. [Google Scholar] [CrossRef]
  96. de Geronimo, G.; Iakovidis, G.; Martoiu, S.; Polychronakos, V. The VMM3a ASIC. IEEE Trans. Nucl. Sci. 2022, 69, 976–985. [Google Scholar] [CrossRef]
  97. Wojeński, A.; Poźniak, K.; Kasprowicz, G.H. Multichannel Measurement System for Extended SXR Plasma Diagnostics Based on Novel Radiation-Hard Electronics. Fusion Eng. Des. 2017, 123, 727–731. [Google Scholar] [CrossRef]
  98. Iakovidis, G. VMM3a, an ASIC for Tracking Detectors. J. Phys. Conf. Ser. 2020, 1498, 012051. [Google Scholar] [CrossRef]
  99. Shumack, A.E.; Rzadkiewicz, J.; Chernyshova, M. X-Ray Crystal Spectrometer Upgrade for ITER-like Wall Experiments at JET. Rev. Sci. Instrum. 2014, 85, 11E425. [Google Scholar] [CrossRef]
  100. Poźniak, K.; Byszuk, A.; Chernyshova, M. FPGA Based Charge Fast Histogramming for GEM Detector. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2013; Romaniuk, R., Ed.; SPIE: Bellingham, WA, USA, 2013; Volume 8903. [Google Scholar]
  101. Kasprowicz, G.H.; Czarski, T.; Chernyshova, M. Fast ADC Based Multichannel Acquisition System for the GEM Detector. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2012; Romaniuk, R., Ed.; SPIE: Bellingham, WA, USA, 2012; Volume 8454. [Google Scholar]
  102. Chernyshova, M.; Czarski, T.; Dominik, W. Development of GEM Gas Detectors for X-Ray Crystal Spectrometry. J. Instrum. 2014, 9, C03003. [Google Scholar] [CrossRef]
  103. Kasprowicz, G.H.; Zabołotny, W.; Poźniak, K. Multichannel Data Acquisition System for GEM Detectors. J. Fusion Energy 2019, 38, 467–479. [Google Scholar] [CrossRef]
  104. Linczuk, P.; Wojeński, A.; Czarski, T. Heterogeneous Online Computational Platform for GEM-Based Plasma Impurity Monitoring Systems. Energies 2024, 17, 5539. [Google Scholar] [CrossRef]
  105. Wojenski, A.; Pozniak, K.T.; Mazon, D.; Chernyshova, M. FPGA-Based Novel Real-Time Evaluation and Data Quality Monitoring System for Tokamak High-Performance GEM Soft X-Ray Diagnostic. J. Instrum. 2018, 13, P12024. [Google Scholar] [CrossRef]
  106. Wojenski, A.; Pozniak, K.T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R.; Zabolotny, W.; Chernyshova, M.; Czarski, T.; Malinowski, K. FPGA-Based GEM Detector Signal Acquisition for SXR Spectroscopy System. J. Instrum. 2016, 11, C11035. [Google Scholar] [CrossRef]
  107. Wojenski, A.; Pozniak, K.; Kasprowicz, G.; Zabolotny, W.; Byszuk, A.; Zienkiewicz, P.; Chernyshova, M.; Czarski, T. Concept and Current Status of Data Acquisition Technique for GEM Detector–Based SXR Diagnostics. Fusion Sci. Technol. 2016, 69, 595–604. [Google Scholar] [CrossRef]
  108. Wojenski, A.J.; Kasprowicz, G.; Pozniak, K.T.; Byszuk, A.; Chernyshova, M.; Czarski, T.; Jablonski, S.; Juszczyk, B.; Zienkiewicz, P. Multichannel Reconfigurable Measurement System for Hot Plasma Diagnostics Based on GEM-2D Detector. Nucl. Instrum. Methods Phys. Res. Sect. B 2015, 364, 49–53. [Google Scholar] [CrossRef]
  109. EUROfusion. DEMO Diagnostics and Control System Requirements Document (SRD). Available online: https://idm.euro-fusion.org/?uid=2MNK4R (accessed on 30 November 2025).
  110. VadaTech Incorporated. VadaTech MicroTCA Overview; VadaTech Incorporated: Henderson, NV, USA, 2016. [Google Scholar]
  111. Xilinx. Aurora 64B/66B, version 13.0; Xilinx: San Jose, CA, USA, 2024. [Google Scholar]
  112. Dolphin ICS. PCI Express Reflective Memory. Available online: https://www.dolphinics.com/solutions/embedded-system-reflective-memory.html (accessed on 30 November 2025).
  113. OpenHardware Repository. AMC FMC Carrier AFCZ. Available online: https://ohwr.org/projects/afcz/ (accessed on 30 November 2025).
  114. Wojeński, A.; Poźniak, K.; Linczuk, P. Data Quality Monitoring Considerations for Implementation in High Performance Raw Signal Processing Real-Time Systems with Use in Tokamak Facilities. J. Fusion Energy 2020, 39, 221–229. [Google Scholar] [CrossRef]
  115. Wojenski, A.; Linczuk, P.; Kolasinski, P.; Chernyshova, M.; Mazon, D.; Kasprowicz, G.; Pozniak, K.T.; Gaska, M.; Czarski, T.; Krawczyk, R. Soft X-ray Diagnostic System Upgrades and Data Quality Monitoring Features for Tokamak Usage. Int. J. Electron. Telecommun. 2021, 67, 109–114. [Google Scholar] [CrossRef]
  116. Wojenski, A.; Pozniak, K.T.; Mazon, D.; Chernyshova, M. Advanced Real-time Evaluation and Data Quality Monitoring Model Integration with FPGAs for Tokamak High-performance Soft X-ray Diagnostic System. Int. J. Electron. Telecommun. 2018, 64, 473–479. [Google Scholar] [CrossRef]
  117. Kolasiński, P.; Poźniak, K.; Wojeński, A. High-Performance FPGA Streaming Data Concentrator for GEM Electronic Measurement System for WEST Tokamak. Electronics 2023, 12, 3649. [Google Scholar] [CrossRef]
  118. Chernyshova, M.; Czarski, T.; Malinowski, K. Charge Cluster Identification for Multidimensional GEM Detector Structures. In Proceedings of the SPIE: Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2018; Romaniuk, R., Linczuk, M.G., Eds.; SPIE: Bellingham, WA, USA, 2018; Volume 10808. [Google Scholar]
  119. Wojeński, A.; Zbroszczyk, H.; Kruszewski, M. Hardware Acceleration of Complex HEP Algorithms with HLS and FPGAs: Methodology and Preliminary Implementation. Comput. Phys. Commun. 2024, 295, 108997. [Google Scholar] [CrossRef]
Figure 1. 3D models imported from Altium Designer into Ansys Electronics Desktop. The left picture presents the GEM detector backplane, showing the extremely dense multilayer connections between the connectors (rectangular pads) and pixels. The right image presents the model of the Analog Front-End (AFE) processing board, with the connectors and channels connected to the TIA (only the pads are shown). Given this level of complexity, advanced simulation tools were used to extract the parasitic parameters.
Energies 19 00918 g001
Figure 2. Parasitic capacitance distribution for an example quarter of the GEM detector, analyzed for each channel composed of pixels connected in series. The presented results are based on Ansys Q3D 2025 R1 simulations. The internally established limit was 10 pF per channel due to TIA performance. Most capacitance values are below 1 pF, showing that this part of the design is well within the limits and requires no changes.
Figure 3. Summary of the parasitic capacitance for each pixel-channel line path from the GEM detector backplane to the TIA input on the M-AFE board, for an example quarter of the full backplane PCB. The total capacitance also includes the contribution from the connector. Results are based on Ansys Q3D 2025 R1 simulations. For this example, the main parasitic capacitance is introduced at the AFE stage; however, it remains within the design limits (Table 1).
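The capacitance budgeting behind Figures 2 and 3 can be sketched as a simple per-channel check against the TIA limit. A minimal sketch follows; the channel names, the three-way contribution split, and all numeric values are illustrative assumptions, not the Ansys Q3D results:

```python
# Hypothetical sketch: summing per-channel parasitic contributions
# (backplane trace + connector + AFE stage) and checking each total
# against the 10 pF TIA budget mentioned in the Figure 2 caption.
# All values below are illustrative, not simulation output.

TIA_LIMIT_PF = 10.0  # internally established per-channel limit

def channel_capacitance(backplane_pf, connector_pf, afe_pf):
    """Total parasitic capacitance seen at the TIA input for one channel."""
    return backplane_pf + connector_pf + afe_pf

# Illustrative channels: backplane contributions below 1 pF, with the
# AFE stage dominating, as observed in the reported Q3D results.
channels = {
    "ch_00": (0.6, 0.9, 3.1),
    "ch_01": (0.8, 0.9, 4.4),
    "ch_02": (0.4, 0.9, 2.7),
}

for name, parts in channels.items():
    total = channel_capacitance(*parts)
    status = "OK" if total <= TIA_LIMIT_PF else "EXCEEDS LIMIT"
    print(f"{name}: {total:.2f} pF -> {status}")
```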
Figure 4. GEM detector readout board consisting of 34,816 pixels. Front and rear views with card connectors are shown. The left picture shows the pixel array board, while the right picture displays the M-AFE module sockets with the +12 V DC power supply connectors.
Figure 5. Visualization of the GEM pixel board using the developed pixel Mapper software. The displayed example shows M-AFE board number 10 with all 128 available channels. Each signal channel (pixels in series) is color-coded. Navigation controls allow browsing the full GEM detector matrix in 64 × 64 pixel regions. In the selected area, it is evident that it is almost impossible to form a cluster using only a single M-AFE board. The software is therefore used for the initial minimal setup of a few M-AFE boards to cover single clusters.
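The bookkeeping a Mapper-style tool performs can be sketched in a few lines. The contiguous channel-to-board mapping below is an illustrative assumption (the real XYUV routing is far denser); only the 34,816-pixel count, the 128-channels-per-M-AFE grouping, and the 64 × 64 browsing regions are taken from the text:

```python
# Hypothetical sketch of pixel/channel bookkeeping for a Mapper-style
# tool: channels grouped 128 per M-AFE board, pixels browsed in
# 64 x 64 regions. The contiguous mapping is an assumption.

PIXELS_TOTAL = 34816
CHANNELS_PER_AFE = 128
REGION = 64  # side length of one browsing region, in pixels

def afe_board_and_channel(channel_id):
    """Map a global channel index to (M-AFE board number, local channel)."""
    return divmod(channel_id, CHANNELS_PER_AFE)

def region_of(pixel_x, pixel_y):
    """Which 64 x 64 browsing region a pixel coordinate falls into."""
    return (pixel_x // REGION, pixel_y // REGION)

# Under this assumed scheme, global channel 1290 sits on board 10,
# local channel 10; pixel (130, 70) falls in browsing region (2, 1).
board, local = afe_board_and_channel(1290)
print(board, local)        # -> 10 10
print(region_of(130, 70))  # -> (2, 1)
```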
Figure 6. Conceptual architecture of the GEM3k system with three data processing stages: (1) GEM detector readout section, (2) µTCA platform with real-time acquisition and streaming, and (3) data distribution and processing center. The third stage is divided into an optional Data Streaming Backplane (e.g., a fast network switch) and Data Processing Units (e.g., high-performance servers). The architecture is highly modular, including compatibility with various detector readout structures.
Figure 7. The M-AFE board designed for the project. It consists of an analog stage and signal transmission connectors routed to the M-ADB boards. When installed directly onto the GEM detector backplane (112 mm × 81.5 mm), it keeps the GEM detector chassis highly compact, which is essential for installation in a tokamak diagnostic port where available space is limited.
Figure 8. M-ADB board. It connects to the M-AFE via dense signal cabling. It is designed in the common COTS HPC-FMC standard and is installed directly on the FPGA baseboards. The µTCA platform thereby directly incorporates these compact, high-channel-density analog-digital boards within the standardized FMC board footprint of 69 mm × 76.5 mm.
Figure 9. The digital acquisition stage setup dedicated to installation in the µTCA crate. It consists of an AMC AFCKU board, one M-ADB board, and an RTM-4SFP-3QSFP. The design choice to use AMC boards allows multiple multigigabit links, with support for 256 measurement channels, and modern Xilinx UltraScale FPGAs to be installed in a single µTCA chassis.
Figure 10. The complete setup of the data processing system, consisting of six Dell R730 units, each equipped with 4× NICs and 2× NVMe drives per FPGA. The total configuration provides up to 48 SFP+ links for data reception from the FPGAs. This design choice makes the setup modular: the data processing part of the system can be installed independently of the rest of the measurement system, in a location better suited for server equipment.
Figure 11. The results of prototype tests using the TI AFE5832 evaluation board and accompanying software. The acquired signal corresponds to synthetic GEM-like pulses, correctly captured by the ADCs with the selected configuration. The test confirms that the TI AFE5832 can be used in the design and integrated with the rest of the electronics.
Figure 12. The test setup for verification of the complete analog path of the system, including the GEM backplane board, M-AFE, and M-ADB board installed in the AMC carrier with FPGA (AFCKU). The setup was used for revision and improvement of the electronic boards.
Figure 13. The system response after issuing the calibration pulse (CAL_PULSE). The green line shows the signal measured directly at the discharged capacitor on the M-AFE board. The yellow line represents the signal measured at the ADC input channel after transmission through the 2 m cable. The test confirmed that cables of the proposed length can be used to connect the detector with the acquisition part of the system without significant signal loss.
Figure 14. Output from the ILA component of the FPGA showing ADC data from the M-ADB board. Eight input channels from the fourth chip (channels 8–15) are displayed. The test was essential to verify the correctness of the electronics design in terms of the dedicated HDL code supporting the high-bandwidth measurement interfaces, the correct system setup, and the preliminary measurements of capacitor discharges.
Figure 15. Results of spectral measurements with the boards connected together under normal operation. The FFT spectra correspond to (a) the M-AFE and (b) the M-ADB board.
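Noise-floor figures like those quoted for the FFT spectra in Figure 15 (and in Tables 4 and 5) can be derived from captured ADC samples in a standard way: take the FFT magnitude relative to full scale in dB, then read the median as the floor and the maximum as the highest noise peak. A minimal sketch, assuming NumPy is available and using synthetic noise at the 150 µV RMS level from Table 4 (not the boards' actual measurement):

```python
# Sketch of an FFT noise-floor estimate from a real-valued ADC capture.
# The median of the magnitude spectrum (in dBFS) approximates the noise
# floor; the maximum non-DC bin gives the highest noise peak.
import numpy as np

def fft_noise_floor_db(samples, full_scale):
    """Return (noise_floor_db, peak_db) for a real-valued capture."""
    spectrum = np.abs(np.fft.rfft(samples)) / (len(samples) / 2)
    db = 20 * np.log10(np.maximum(spectrum / full_scale, 1e-12))
    return float(np.median(db[1:])), float(np.max(db[1:]))  # skip DC bin

# Synthetic white noise at 150 uV RMS (Table 4), 1.0 V assumed full scale.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 150e-6, 4096)
floor_db, peak_db = fft_noise_floor_db(noise, full_scale=1.0)
print(f"floor ~{floor_db:.1f} dB, peak ~{peak_db:.1f} dB")
```

The assumed 1.0 V full scale and 4096-sample capture length are illustrative; the absolute dB values shift with both choices.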
Table 1. Summary of the parameters obtained from Ansys (Q3D) simulations showing the distribution of values among the channels grouped as AFE boards.
| AFE Board | Average Capacitance [pF] | Minimum Capacitance [pF] | Maximum Capacitance [pF] | Standard Deviation [pF] |
|---|---|---|---|---|
| 19 | 5.86 | 2.56 | 10.19 | 1.95 |
| 20 | 5.89 | 2.44 | 10.50 | 2.00 |
| 21 | 5.91 | 2.43 | 10.83 | 2.01 |
| 22 | 5.88 | 2.41 | 10.55 | 1.91 |
| 23 | 6.19 | 2.40 | 14.30 | 2.25 |
| 24 | 7.35 | 2.89 | 14.42 | 2.76 |
Table 2. Comparison of available GEM-dedicated ASIC chips [27,93,94,95,96].
| Parameter | APV25 | VMM3a |
|---|---|---|
| # channels | 128 | 64 |
| Domain | Analog | Analog-digital |
| Configurable elements | Mostly fixed | Gain; tail suppression; peaking time |
| Triggering | Global per chip; external trigger required | Individual per channel |
| Performance | 5 kHz trigger rate; 128-channel multiplexed output; 3.2 µs/chip readout | 3.6 MHz/channel |
| Output | Raw analog data from the analog multiplexer; no logic (e.g., clustering) | Peak detection; time detection; cluster registration (side triggers); various modes of operation |
| Interface | Requires external ADC for digitization and postprocessing | Direct serial interface to digital stage |
| Features | No active cooling needed | 3 ADCs per channel; extra signal monitoring; convection cooling required |
Table 3. Comparison of the two designed generations of GEM-FPGA SXR measurement systems [97,99,100,101,102,103,104,105,106,107,108].
| Parameter | Hardware Histogramming System (1st Generation) | Hybrid Streaming System (2nd Generation) |
|---|---|---|
| Installation | CCFE JET tokamak | CEA WEST tokamak |
| # channels | Up to 256 | Up to 512 |
| Sampling frequency | 77.78 MHz | 80 MHz |
| Sample resolution | 10 bit | 10–12 bit |
| Histogram resolution | 10 ms; real-time computations in FPGAs | User-defined (µs–s range) |
| # FPGAs | 21 Xilinx Spartan6 | 2 Xilinx Artix7 |
| Data interfaces | Fast serial links; PCI-Express | SERDES; PCIe x4 per FPGA |
| System control | C/Python; Matlab | C low-level control software; Bash, Matlab (postproc.) |
| Key features | Real-time spectra construction in FPGAs; modular design (16-channel modules/FPGA) | Raw data streaming; simultaneous real-time (HPC) and offline modes; modular (16-channel modules, 64 ch/FPGA) |
| Limitations | Limited access to raw data; complex FPGA interconnection structure; no logical resources left on FPGAs | Mostly custom design in terms of electronics and mechanics; medium integration scale |
Table 4. Characterization of the analog input path of the GEM3k.
| Parameter | Value |
|---|---|
| Sampling frequency | 50 MHz |
| ADC resolution | 12 bits |
| ADC effective range | 2048 bins |
| Noise floor RMS | 150 µV |
| Wideband noise floor (FFT) | −90.5 dB (peak: −71 dB @ 1 MHz) |
| SNR | ~70 dB |
| Trigger level | +5 mV (~3.5% of effective ADC range) |
| Minimum energy resolution | 179 eV |
| Energy resolution | 0.2 keV (100 bins, 20 keV range) |
| Energy range | Up to 20 keV (typical) |
| Event time resolution | 40 ns |
| Spectra time resolution | 10 ms (typical) |
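Two of the Table 4 figures follow directly from the others and can be checked with trivial arithmetic: the 0.2 keV energy resolution is the 20 keV range divided into 100 bins (coarser than the 179 eV minimum resolution of the analog path, so binning does not limit the achievable resolution), and the 40 ns event time resolution corresponds to two periods of the 50 MHz sampling clock. A sketch of the check:

```python
# Worked check of the Table 4 spectral and timing parameters.
ENERGY_RANGE_KEV = 20.0
N_BINS = 100
MIN_RESOLUTION_KEV = 0.179  # 179 eV analog-path floor
SAMPLING_HZ = 50e6

bin_width_kev = ENERGY_RANGE_KEV / N_BINS  # histogram bin width
sample_period_ns = 1e9 / SAMPLING_HZ       # time between ADC samples

print(bin_width_kev)                        # -> 0.2 (keV)
print(sample_period_ns)                     # -> 20.0 (ns); 40 ns = 2 samples
print(bin_width_kev > MIN_RESOLUTION_KEV)   # bin coarser than floor -> True
```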
Table 5. Summary of the key parameters of the newly designed GEM3k system based on preliminary hardware tests.
| Parameter | GEM3k |
|---|---|
| # channels/FPGA module | 128–256 |
| Sampling frequency | 50 MHz |
| Dedicated # channels | 3072 |
| GEM detector type | XYUV |
| FPGA type | Xilinx UltraScale Kintex |
| FFT characteristics | M-AFE: noise floor −90.5 dB, highest noise peak −71 dB @ 1 MHz; M-ADB: noise floor −86.5 dB, highest noise peak −64 dB @ 1.35 MHz |
| Signal drop M-ADB–cable–M-ADB | Δ70 mV |
| Gigabit links BER | 1.8 × 10⁻¹⁴ |
| COTS properties | Compatible with the µTCA ecosystem; AMC-standard FPGA boards; FMC boards (M-ADB); Ethernet-compatible transmission link; standard, modular computation servers; standard NICs and NVMe storage; use of dedicated SDKs |
| Key features | Real-time data stream of 80 Gbps/FPGA; highly modular; up to 3072 channels per µTCA crate; 128-channel granularity for smaller measurement systems; synchronized timing via standard backplane design |
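The 80 Gbps/FPGA real-time stream quoted in Table 5 is consistent with the raw ADC payload implied by the other table entries. A hedged back-of-the-envelope check, assuming the lower 128-channel configuration per FPGA module (channel count, sample rate, and resolution all taken from Tables 4 and 5):

```python
# Hedged estimate of the raw ADC payload behind the 80 Gbps/FPGA
# stream: 128 channels x 50 MS/s x 12 bit = 76.8 Gbps, leaving margin
# for framing/protocol overhead within the stated 80 Gbps budget.
CHANNELS_PER_FPGA = 128   # lower bound of the 128-256 range in Table 5
SAMPLE_RATE_HZ = 50e6     # 50 MHz sampling (Tables 4 and 5)
BITS_PER_SAMPLE = 12      # ADC resolution (Table 4)

raw_gbps = CHANNELS_PER_FPGA * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1e9
print(f"raw payload: {raw_gbps:.1f} Gbps")  # -> raw payload: 76.8 Gbps
```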