Article

Design and Implementation of a High-Speed Storage System Based on SATA Interface

State Key Laboratory of Electronic Testing Technology, North University of China, Taiyuan 030051, China
*
Author to whom correspondence should be addressed.
Electronics 2026, 15(2), 452; https://doi.org/10.3390/electronics15020452
Submission received: 11 December 2025 / Revised: 7 January 2026 / Accepted: 16 January 2026 / Published: 20 January 2026
(This article belongs to the Section Computer Science & Engineering)

Abstract

In flight tests, to meet the requirements of consistent acquisition and storage of multiple targets, multiple systems, and multiple data types, various data types are processed into Pulse Code Modulation (PCM) data streams using PCM encoding for storage. To meet the requirement of real-time storage of high-bit-rate PCM data streams, a large-capacity storage system based on Serial Advanced Technology Attachment 3.0 (SATA3.0) is designed. The system uses a Kintex-7 series Field-Programmable Gate Array (FPGA) as the control core, receives PCM data streams through a Low-Voltage Differential Signaling (LVDS) interface, stores the received PCM data streams into an mSATA disk via the SATA3.0 transmission bus, and transmits the stored data back to the host computer through the USB3.0 interface for analysis. Meanwhile, to solve the problem of complex data export, the storage system constructs a FAT32 file system through the MicroBlaze soft core to optimize the management and operation of the large-capacity storage system. Test results show that the storage system can perform stable high-rate storage from −40 °C to 80 °C.

1. Introduction

In critical fields such as aviation flight testing, nuclear physics detection, and industrial measurement and control, with the iterative upgrading of detection technologies and sensing devices, data acquisition has exhibited the characteristics of multi-target, multi-system, and multi-data type, imposing stringent requirements on the consistent acquisition and real-time storage of data. In such scenarios, various types of data generally need to be converted into PCM data streams for unified storage, and the continuous data of hundreds of megabytes per second in flight testing has become a core bottleneck restricting system performance.
Current high-speed storage systems are confronted with three core challenges: First, insufficient adaptability to extreme environments—scenarios like aviation flight testing require endurance of harsh conditions with a wide temperature range from −40 °C to 80 °C. Second, poor coordination between high-speed interfaces and storage management—controllers dominated by commercial IP cores in mainstream solutions have weak customizability, making it difficult to match the storage and export requirements of high-rate data streams. Third, difficulty in balancing reliability and transmission efficiency—traditional storage systems either prioritize speed but lack redundancy design, or rely on complex external protection to achieve environmental adaptation, leading to high costs or limited practicality. Therefore, developing a high-capacity storage system with high speed, high reliability, and strong environmental adaptability is of great practical significance for meeting the data storage needs in critical fields.
In recent years, scholars at home and abroad have conducted extensive research on high-speed storage systems, forming various technical solutions:
In terms of storage interface protocols and controller implementation, the development of high-speed serial communication technology has driven successive rate upgrades of storage interfaces. Fudan University proposed a 1.25 Gbps Ethernet receiver design scheme, laying a foundation for the high-speed transmission of storage systems [1]. As early as 2013, Gorman et al. developed an open-source SATA core suitable for Virtex-4 series FPGAs, providing early technical reference for the hardware implementation of SATA protocols on FPGA platforms [2]. The Kumar team carried out in-depth analysis on the application challenges of SAS 4.0 (22.5 Gbps) in server platforms, offering a performance optimization direction for the hardware adaptation of high-bandwidth storage interfaces [3]. However, the above solutions mostly focus on general scenarios or rely on specific FPGA models, making it difficult to adapt to the customized needs in extreme environments such as aviation testing. In addition, some solutions are implemented using commercial IP cores, resulting in limited scalability and scenario adaptability.
In terms of storage medium and system architecture optimization, Meng Yuan proposed an acquisition and storage system based on eMMC storage, with a maximum storage rate of 160 MB/s, featuring high integration and scalability, but the rate still cannot meet the storage requirements of high-rate PCM data streams [4]. Some studies have improved the array data security and I/O performance through Solid-State Drive-based (SSD-based) RAID-5 fast online reconstruction schemes and machine learning-assisted RAID scheduling algorithms [5,6]. For extreme environments, scholars have developed aerospace-specific high-speed data recorders and radiation-resistant on-board FPGA schemes, providing reliability support for special scenarios [7,8,9]. Nevertheless, these solutions either focus on storage reliability optimization while ignoring rate improvement, or lead to increased costs due to complex architectures, failing to achieve the coordinated unification of high speed, high reliability, and strong environmental adaptability.
In terms of transmission stability and data security, technologies such as LVDS interfaces, 8 b/10 b encoding optimization, and power line electromagnetic interference suppression have provided diversified solutions for data transmission [10,11,12,13]. Bulbul et al. proposed a privacy-preserving multi-user searchable encryption scheme. The network attack detection technology developed by the Najafabadi team and the IoT-oriented AES module designed by Lee M et al. have enhanced the access security of stored data and the network-side and terminal-side data security of embedded storage systems, respectively [14,15,16]. However, such studies mostly focus on performance optimization of a single link, making it difficult to solve the end-to-end performance bottleneck in the storage of high-rate data streams.
In summary, existing research still has room for improvement in terms of storage rate, environmental adaptability, and software-hardware coordination, and cannot fully meet the real-time storage requirements of high-rate PCM data streams in scenarios such as flight testing. This paper designs a SATA3.0 high-speed mass storage system with Kintex-7 series FPGA as the core. The system implements a SATA3.0 full-protocol stack controller by independently writing Verilog code, integrates the FAT32 file system, and is equipped with ruggedized mSATA hard disks. It can achieve stable storage in a wide temperature range of −40 °C to 80 °C, meeting the multi-type data acquisition needs in scenarios such as flight testing. Compared with the problems of mainstream storage systems, such as reliance on commercial IP cores, poor customizability, and insufficient adaptability to extreme environments, the innovation points of this system focus on the design of an independently controllable SATA controller and the direct mapping management between the file system and hardware, which effectively break through the technical limitations of traditional solutions and have important engineering application value.
The structure of this paper is as follows: Section 2 introduces the overall design scheme of the system, clarifying the composition and core functions of each functional module; Section 3 elaborates on the hardware system design, including the circuit implementation of power management, SATA interface, Double Data Rate 3 Synchronous Dynamic Random-Access Memory (DDR3) cache, and communication interface; Section 4 expounds on the software system design, focusing on the detailed analysis of the SATA controller logic, module integration, and the construction of the FAT32 file system; Section 5 verifies the system performance through experimental tests.

2. Overall System Scheme Design

The overall design scheme of the large-capacity storage system based on SATA3.0 is shown in Figure 1. The storage system mainly consists of a power supply system, an FPGA logic control module, a USB3.0 interface circuit module, a DDR cache module, and other components. After the storage system is powered on, the PCM signal data stream is transmitted to the storage system through the LVDS differential interface circuit. During the initialization period of the mSATA solid-state drive, two groups of DDR are used to cache data. After the initialization is completed, the FPGA-controlled SATA3.0 controller stores the data into the mSATA drive. After data storage is finished, the host computer can read back the stored data in the mSATA drive through the USB 3.0 interface circuit and the Ethernet interface.

3. System Hardware Design

3.1. Power Management System

The total power supply of the hardware of this storage system is provided by the 5 V DC power input through the backplane, which is converted into the voltages required by each module via step-down power chips such as the LTM4644 and TPS51200. The power supply topology of the power system is shown in Figure 2: the 5 V DC power transmitted to the board through the backplane is converted into 1 V, 1.2 V, 3.3 V, 1.5 V, and other voltage outputs through two LTM4644 chips and one LTM4622 chip, and the 3.3 V voltage is further converted into 0.75 V through a next-stage step-down circuit.
Among these voltages, 1 V is mainly used to power the FPGA core circuit and internal BRAM resources; 1.8 V supplies power to the FPGA auxiliary circuits and the built-in ADC of the chip; 3.3 V, 2.5 V, and 1.5 V provide power for the IOs of different BANKs; 1.2 V is used to power the relevant power pins MGTAVTT of the GTX interface and the digital power pins of the Ethernet PHY chip, respectively. The second-stage voltage of 0.75 V serves as the reference voltage for the data signals of the DDR3 chip.

3.2. SATA Interface Circuit

The FPGA of the hardware circuit adopts XC7K325T-2FFG900I from Xilinx’s Kintex-7 series. The high-speed physical layer of SATA3.0 exceeds the capability of ordinary I/O interfaces, but this FPGA integrates 16-channel Multi-Gigabit transceivers with a maximum single-channel rate of 12.5 Gb/s [17], which fully meets the 6.0 Gb/s requirement of SATA3.0. Moreover, the GTX transceiver integrates a clock and data recovery (CDR) circuit, supports 8 b/10 b encoding and Out-Of-Band (OOB) signal detection, eliminating the need for an external PHY chip to implement the physical layer of the SATA protocol.
The SATA 3.0 protocol transmits data in full-duplex mode and is essentially a high-speed serial differential transmission technology, which inherently possesses strong anti-noise and electromagnetic interference suppression capabilities. Therefore, when using the GTX serial transceiver, it is only necessary to ensure a high-quality clock signal to drive it. In this design, the SIT9120AI-2D1-33E150 chip from SiTime Corporation is selected to provide the reference clock for the on-chip channel phase-locked loop (CPLL). This chip directly outputs the 150 MHz differential clock signal required as the SATA reference clock and offers high frequency stability and low jitter; the frequency stability reaches ±20 ppm within the operating temperature range of −40 °C to 85 °C. Meanwhile, to meet the requirements of the airborne environment in flight tests, a rugged mSATA drive is used as the storage medium, and the entire storage system is integrated onto a single board. The mSATA drive is fixed to the PCB via an mSATA connector and surface-mount copper pillars, which effectively prevents problems such as poor contact and signal glitches caused by vibration and shock. The schematic diagram of the hardware circuit is shown in Figure 3.
In Figure 3, for the mSATA connector, apart from the transmission signal lines connected to the FPGA’s GTX interface, the SATA_DET signal is a device detection signal, which determines whether a storage device is connected by detecting changes in its level; the SATA_DSS signal is a hard disk activity indicator signal, which outputs a low-level signal when the storage device performs read/write operations; the SATA_DEVSLP signal is a sleep signal used to control the storage device to enter a deep power-saving mode.

3.3. DDR3 Interface Circuit

When the system transmits and stores data through the SATA interface, the data transmission speed is high and the amount of transmitted data is large, making the internal FIFO of the FPGA unable to meet the data buffering requirements. To prevent data loss, an external DDR3 chip is used as the cache for stored data. Figure 4 shows the hardware connection schematic diagram between DDR3 and FPGA. The DDR3 chip selected is the MT41K256M16JT-125IT:E, which has a storage capacity of 4 Gbit, a data bit width of 16 bits, and an operating temperature range of −40 °C to 95 °C.
This DDR3 chip adopts address multiplexing technology. The FPGA first sends the row address A[14:0] and bank address BA[2:0], and then sends the column address A[9:0] to locate the specific address for reading and writing. DQ[15:0] is a bidirectional data bus. The FPGA controls a series of operations such as reading and writing of DDR3 through CSN, RAS, CAS, and WE signal lines. The DDR3 chip uses differential clock lines CK and CKN, and all operations are sampled at the intersection of the rising edge of CK and the falling edge of CKN to ensure timing stability. The ODT and ZQ pins ensure the integrity of DDR3 read/write signals. By controlling the ODT signal, the value of the matching resistor and its switch state can be controlled, thereby achieving the integrity of read/write signals [18]. The ZQ pin is externally connected to a precision resistor, which serves as a reference for the internal drive voltage.
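The address multiplexing described above can be illustrated with a simple address decomposition (an illustrative Python sketch, not the controller's HDL; the row-bank-column field ordering is an assumption for a linear mapping over the MT41K256M16's 3-bit bank, 15-bit row, and 10-bit column fields):

```python
# Illustrative decomposition of a linear 16-bit-word address into the
# MT41K256M16's bank BA[2:0], row A[14:0], and column A[9:0] fields.
COL_BITS, BANK_BITS, ROW_BITS = 10, 3, 15

def split_ddr3_address(linear):
    """Map a linear word address to (row, bank, column); the bank field is
    placed between column and row, a common interleaving choice."""
    col = linear & ((1 << COL_BITS) - 1)
    bank = (linear >> COL_BITS) & ((1 << BANK_BITS) - 1)
    row = (linear >> (COL_BITS + BANK_BITS)) & ((1 << ROW_BITS) - 1)
    return row, bank, col
```

Note that the field widths are consistent with the stated 4 Gbit capacity: 2^(10+3+15) 16-bit words equal 2^32 bits.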

3.4. Communication Interface Circuit

The communication interfaces between the storage system and the host computer adopt Gigabit Ethernet and USB3.0 interfaces. During the data storage process, the host computer can detect the device status through these two communication methods. After the data storage is completed, in the process of the host computer reading back data, the USB3.0 and Gigabit Ethernet communication methods serve as redundant backups for each other [19]. The Ethernet interface circuit is built with the physical layer (PHY) chip 88E1111-BAB2I000 and the network transformer GST5009BLF, while the USB3.0 interface circuit is built with the FT601Q chip.

4. System Software Design

The internal logic modules of the FPGA are shown in Figure 5: they mainly consist of the MicroBlaze control module, SATA controller module, and other components. All internal logic modules are uniformly controlled and coordinated by the MicroBlaze soft core via the Advanced eXtensible Interface (AXI) bus. When the PCM data stream is received through the LVDS communication module and needs to be stored, or when the host computer reads back data, the MicroBlaze sends commands to the SATA controller to control data reading or writing. During data transmission, the DDR3 controller IP is used to control the DDR3 chip for data buffering, and real-time status detection is also performed.

4.1. SATA Controller Logic

The SATA controller serves as the central hub for data transmission and storage in the entire storage system. Its performance directly determines the communication efficiency between the SATA 3.0 bus and the mSATA drive, the stability of data transmission, and the collaborative adaptability with other system modules, making it a critical technical enabler for the real-time storage of high-bit-rate PCM data streams. To clearly present the design logic and implementation approach of the SATA controller, the subsequent sections will first systematically analyze the core functionalities, protocol specifications, and interaction mechanisms of each layer based on the hierarchical architecture of the SATA protocol (Physical Layer, Link Layer, Transport Layer, Application Layer), thereby establishing the theoretical foundation for the controller’s logical design. This will be followed by a detailed explanation of the internal logical architecture of the controller, which is independently implemented using the Verilog language, including the division of functional modules, finite state machine design, and the control flow for data transmission/reception. The collaborative working mechanism between the controller and other modules, such as the MicroBlaze soft-core processor and DDR3 cache, will also be clarified.

4.1.1. SATA Protocol Analysis

SATA, short for Serial Advanced Technology Attachment, is a high-speed storage interface standard based on serial transmission technology. As an upgraded replacement for parallel interfaces, the SATA interface adopts an LVDS transmission mode, offering advantages such as a high transmission rate and strong anti-interference capability. The SATA 3.0 version adopted in this paper is an iterative release of this interface standard, with a theoretical transmission rate of up to 6.0 Gb/s.
With reference to the OSI seven-layer model, the SATA interface protocol can be divided into the following four layers: the Physical Layer, Link Layer, Transport Layer, and Application Layer [20]. The schematic diagram of its hierarchical structure is shown in Figure 6.
As the lowest layer of the SATA protocol, the Physical Layer mainly functions to transmit and receive serial data streams, extract data and clock signals from serial data streams, and initialize the SATA interfaces of the host and device, as well as negotiate transmission speeds. The Physical Layer establishes communication between the host and device via OOB signals [21]. OOB signals are low-frequency signals composed of ALIGN primitive signals and idle levels in the SATA protocol. According to differences in idle level time intervals and functions, they can be divided into three types: COMRESET, COMINIT, and COMWAKE signals. The COMRESET signal is sent from the host to the device to implement hardware reset of the device; the COMINIT signal is sent from the device to the host to request communication initialization; and COMWAKE, as a bidirectional signal, is used to activate the Physical Layer in a power-off state.
The Link Layer is mainly responsible for implementing data frame transmission control. During transmission, the Link Layer does not need to identify the content of frames; instead, it controls the frame transmission process by delivering primitives. When the Transport Layer issues a frame transmission request, the Link Layer first negotiates with its peer Link Layer to ensure that the device gains priority in transmission when both the host and device have data transmission requirements. It then receives data input from the Transport Layer in double-word units, adds frame envelope information such as Start of Frame (SOF) and End of Frame (EOF) to the data, completes CRC verification of the data, scrambles the data, and finally performs 8 b/10 b encoding before sending the frame to the Physical Layer.
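The CRC step in the Link Layer can be sketched as a per-Dword CRC-32 update. SATA uses the CRC-32 generator polynomial 0x04C11DB7; the initial value 0x52325032 and the no-reflection, no-final-XOR convention used below are stated as assumptions to be checked against the SATA specification rather than quoted from the paper:

```python
# Hedged sketch of the link-layer CRC over 32-bit Dwords (bit-serial,
# MSB-first update). POLY is the standard CRC-32 generator polynomial;
# INIT is the assumed SATA seed value.
POLY = 0x04C11DB7
INIT = 0x52325032

def sata_crc32(dwords):
    crc = INIT
    for dw in dwords:
        crc ^= dw & 0xFFFFFFFF
        for _ in range(32):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ POLY) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc
```

In hardware this bit-serial loop would be unrolled into a parallel XOR network so that one Dword is absorbed per clock cycle.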
The Transport Layer is primarily used for processing Frame Information Structure (FIS), and its main operations include two items: first, simply constructing FIS for transmission, and second, decomposing received FIS. When receiving a request to encapsulate FIS from the Application Layer, the Transport Layer first identifies the type of FIS to be constructed. Common FIS types and their corresponding type numbers are shown in Table 1 below, then acquires corresponding data from designated registers, encapsulates the data into FIS, and sends it to the Link Layer. When receiving FIS transmitted from the Link Layer, the Transport Layer decapsulates it to obtain corresponding data and commands, and then notifies the Application Layer to receive the decapsulated information.
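The FIS type identification step can be sketched as a small dispatch table. The codes 27h, 34h, 41h, and 46h appear later in the text; the remaining entries are common types from the SATA specification, included here as assumptions:

```python
# Hypothetical FIS dispatch table; the type code occupies byte 0 of the
# first Dword of every FIS.
FIS_TYPES = {
    0x27: "Register FIS - Host to Device",
    0x34: "Register FIS - Device to Host",
    0x39: "DMA Activate FIS",
    0x41: "DMA Setup FIS",
    0x46: "Data FIS",
    0x5F: "PIO Setup FIS",
    0xA1: "Set Device Bits FIS",
}

def classify_fis(first_dword):
    """Extract the type code from byte 0 of the first Dword."""
    return FIS_TYPES.get(first_dword & 0xFF, "Unknown/Vendor FIS")
```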
The Application Layer receives and parses ATA commands issued by the operating system and directly controls the device's command block register group to drive hardware operations. According to their function in controlling the device, the commands of the Application Layer are mainly divided into Non-Data commands, PIO commands, and DMA commands.

4.1.2. Implementation of SATA Controller Logic

Based on the principles of the Physical Layer, Link Layer, Transport Layer, and Application Layer of the SATA protocol, the implementation was coded in Verilog within the Vivado development environment. The internal logic framework of the SATA controller is shown in Figure 7: the MicroBlaze soft core sends control instructions and data to the SATA controller via the AXI.
Its Physical Layer mainly consists of a GTX transceiver and an OOB control module. The establishment of the physical path is achieved by the OOB control module controlling the transmission and reception of OOB signals. The state machine transition flow in the control module is shown in Figure 8. After the system is powered on or reset, the state machine enters the initial state OOB_IDLE. In this state, the OOB controller sends a COMRESET signal and, after transmission is completed, waits for the storage device to send a COMINIT signal to the FPGA. If the waiting time exceeds 10 ms, the state machine returns to the initial state and resends the COMRESET signal; if the COMINIT signal is received, it proceeds to the next state. After the OOB controller sends a COMWAKE signal to the storage device, it detects whether the storage device returns a COMWAKE signal. Upon detecting the COMWAKE signal, it sends the D10.2 character and then waits to receive the ALIGNP primitive from the storage device within a time limit of 880 μs. If the time limit is exceeded, the OOB state machine resets and returns to the initial state; if the ALIGNP primitive is received, transmission rate matching is complete, after which the OOB controller sends the ALIGNP primitive and starts detecting the SYNC synchronization primitive. When three consecutive back-to-back SYNC primitives are detected, the state machine jumps to the LINKUP state. In this state, the link is fully established, and the OOB controller pulls the PHYRDY flag bit high to indicate successful data path establishment. When the link has just been established, only ALIGNP and SYNC primitives are transmitted alternately, with no data transmission occurring.
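The handshake sequence above can be modelled as a small event-driven state machine (a behavioural Python sketch of the described flow, not synthesizable logic; timeouts are modelled as explicit events rather than hardware counters):

```python
# Behavioural model of the OOB handshake: COMRESET -> COMINIT -> COMWAKE
# -> D10.2/ALIGN rate matching -> three SYNCs -> LINKUP (PHYRDY high).
def oob_step(state, event):
    transitions = {
        ("OOB_IDLE",     "sent_comreset"): "WAIT_COMINIT",
        ("WAIT_COMINIT", "cominit"):       "SEND_COMWAKE",
        ("WAIT_COMINIT", "timeout_10ms"):  "OOB_IDLE",    # resend COMRESET
        ("SEND_COMWAKE", "comwake_back"):  "WAIT_ALIGN",  # then send D10.2
        ("WAIT_ALIGN",   "align"):         "WAIT_SYNC",   # rate matched
        ("WAIT_ALIGN",   "timeout_880us"): "OOB_IDLE",
        ("WAIT_SYNC",    "three_syncs"):   "LINKUP",      # PHYRDY goes high
    }
    return transitions.get((state, event), state)

# Walk the happy path from reset to LINKUP.
state = "OOB_IDLE"
for ev in ["sent_comreset", "cominit", "comwake_back", "align", "three_syncs"]:
    state = oob_step(state, ev)
```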
As shown in Figure 7, when the SATA controller controls data frame transmission, two state machines are mainly used to control each underlying module. Among them, the data frame generation state machine is primarily responsible for encapsulating data frames when receiving a transmission request from the Application Layer. The state machine determines the type of data frame to be sent (e.g., 27 h, 34 h, 41 h, 46 h, etc.) based on commands from the Application Layer. If the FIS type is 46 h (i.e., data transmission), data is read from the write FIFO; for other types, data is read from registers in the Application Layer. Then, after encapsulation according to the corresponding frame structure, a CRC check code, frame envelope information, etc., are added. Once the data frame is constructed, the DATA_RDY request bit is sent.
The data frame transmission control state machine is mainly responsible for controlling the transmission of data frames, as shown in Figure 9: after the physical path is established and the PHYRDY flag bit is detected high, this state machine enters the idle state. When the transmission request DATA_RDY is detected to be set to 1, it controls the primitive transmitter to send the X_RDY primitive. After receiving the R_RDY primitive in response from the storage device side, it jumps to the transmission state and controls the frame transmission FIFO to send data frames. If the frame transmission FIFO runs empty before the data frame transmission is completed, it sends a HOLD primitive to make the storage device pause data reception; when the data volume in the FIFO again reaches a sufficient level, it releases the HOLD primitive so that the storage device resumes data reception. After data transmission is completed, it waits for feedback from the storage device and then returns to the idle state to wait for the next transmission.
During data reception, only one data reception state machine is used for control; its flow chart is shown in Figure 10, and its main function is to control the data reception process. After detecting the successful establishment of the Physical Layer path, this state machine watches for the X_RDY primitive. Upon detecting this primitive, it enters the data reception state; when the SOF primitive is detected, it starts receiving data frames. While receiving data frames, it judges the type of the received data frame and performs corresponding operations according to the type. If the receive FIFO becomes full before the data frame reception is completed, it sends a HOLD primitive to notify the device side to pause transmission; when sufficient space becomes available in the FIFO, it releases the HOLD primitive to notify the device side to resume transmission. After completing the reception of one data frame, it performs a CRC check upon detecting the EOF primitive and uploads the check result; finally, it returns to the initial state after receiving the SYNC synchronization primitive.
The application layer implements a subset of the ATA command set [22] by means of a finite state machine (FSM), including commands such as ReadDMAExt, WriteDMAExt, FPDMARead, FPDMAWrite, SetFeatures and IdentifyDevice. The command layer parses read/write requests and issues corresponding sector operation instructions to the transport layer. The execution of each command involves the exchange of a series of frame information structures. The MicroBlaze processor embedded in the FPGA writes commands and related parameters to the mirror registers of the application layer via the AXI, thereby controlling the operation of the underlying modules.
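Issuing one of these commands amounts to encapsulating it into a Register Host-to-Device FIS (type 27h). The sketch below builds such a FIS for WRITE DMA EXT; the opcode 0x35 and the 20-byte FIS field layout are assumptions taken from the ATA command set and SATA specification, not quoted from the paper:

```python
# Hedged sketch: encapsulate a 48-bit LBA command into a Register H2D FIS.
def build_reg_h2d_fis(command, lba, sector_count):
    fis = bytearray(20)
    fis[0] = 0x27                  # FIS type: Register Host to Device
    fis[1] = 0x80                  # C bit set: update of the command register
    fis[2] = command               # ATA command opcode
    fis[4] = lba & 0xFF            # LBA[7:0]
    fis[5] = (lba >> 8) & 0xFF     # LBA[15:8]
    fis[6] = (lba >> 16) & 0xFF    # LBA[23:16]
    fis[7] = 0x40                  # device register: LBA addressing mode
    fis[8] = (lba >> 24) & 0xFF    # LBA[31:24]
    fis[9] = (lba >> 32) & 0xFF    # LBA[39:32]
    fis[10] = (lba >> 40) & 0xFF   # LBA[47:40]
    fis[12] = sector_count & 0xFF  # sector count, low byte
    fis[13] = (sector_count >> 8) & 0xFF
    return bytes(fis)

WRITE_DMA_EXT = 0x35  # assumed ATA opcode for WRITE DMA EXT
fis = build_reg_h2d_fis(WRITE_DMA_EXT, lba=0x123456789A, sector_count=8)
```

In the design described here, the MicroBlaze would fill the corresponding mirror registers over AXI and the Transport Layer would perform this encapsulation in logic.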

4.2. Block Design

The storage system supports a file system, with the MicroBlaze soft core serving as the control and scheduling module of the large-capacity storage system, through which the construction of the file system and overall control of the storage system are implemented [23]. The implementation of the MicroBlaze module design is accomplished by invoking the embedded processor built into Xilinx. According to system requirements, Figure 11 shows the Block Design diagram: interrupt controllers, MIG, UART, and other system-built IP cores are integrated into the MicroBlaze soft core system. During the testing phase, commands are sent via the UART serial port to test the performance of the storage system. To simplify the design of the high-speed interface between the FPGA and DDR3, the MIG core is adopted in this design; it features a standardized AXI4 interface that can be directly controlled by the MicroBlaze soft core. Meanwhile, the MicroBlaze soft core module controls the data input channel module and SATA controller module via the AXI.

4.3. File System Design

The system adopts Vitis software for the development of the MicroBlaze embedded system and implements a FAT32 file system manager capable of directly manipulating disk sectors. The adoption of the FAT32 file system in this design is primarily driven by scenario-specific requirements. Specifically, it offers robust cross-platform compatibility, allowing direct data access on diverse host systems, including Windows and Linux, as well as industrial test equipment without the need for additional drivers, which aligns with the demand for rapid analysis of flight test data. Its core logic, encompassing the management of the File Allocation Table (FAT) and directory entries, can be readily implemented on the MicroBlaze soft core without consuming excessive FPGA hardware resources. Moreover, it supports direct mapping to the Logical Block Addressing (LBA) operations of the SATA controller, thereby guaranteeing real-time storage performance under high data rate conditions. When integrated with a software-based data backup mechanism, the system is able to meet the reliability requirements of airborne applications. In contrast, while the exFAT file system is free from capacity constraints, it exhibits inadequate compatibility with legacy industrial control devices and entails high protocol complexity, which adds to the development overhead of the embedded system. Raw data storage schemes that dispense with a formal file system rely on custom parsing programs, resulting in inflexible data management and difficulties in verification using universal tools, thus limiting their engineering applicability. Meanwhile, the FAT32 file system has inherent limitations: the constraints of a maximum single-file size of 4 GB and a maximum partition size of 2 TB give rise to insufficient scalability for large-scale data storage scenarios.
The file system primarily consists of two main components: file/directory management and FAT32 structure parsing. Structure parsing is crucial for initialization, as it reads the disk’s Master Boot Record (MBR) and FAT32 Boot Sector (BPB) to extract corresponding storage device information, such as cluster size, FAT location, and the starting cluster of the root directory.
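The BPB fields named above sit at fixed offsets in the FAT32 Boot Sector, so the initialization step reduces to a few little-endian reads. The sketch below follows the Microsoft FAT32 layout (bytes/sector at offset 11, sectors/cluster at 13, reserved sectors at 14, number of FATs at 16, FAT size at 36, root directory cluster at 44); it is an illustration, not the paper's MicroBlaze code:

```python
import struct

# Extract FAT32 geometry from a 512-byte Boot Sector (BPB) image.
def parse_fat32_bpb(sector):
    bytes_per_sector, sectors_per_cluster = struct.unpack_from("<HB", sector, 11)
    reserved_sectors = struct.unpack_from("<H", sector, 14)[0]
    num_fats = sector[16]
    fat_size = struct.unpack_from("<I", sector, 36)[0]   # sectors per FAT
    root_cluster = struct.unpack_from("<I", sector, 44)[0]
    return {
        "bytes_per_sector": bytes_per_sector,
        "sectors_per_cluster": sectors_per_cluster,
        "fat_start_lba": reserved_sectors,
        "data_start_lba": reserved_sectors + num_fats * fat_size,
        "root_cluster": root_cluster,
    }
```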
The file/directory management function is mainly to find corresponding empty slots (0x00 or 0xE5) in the directory sector and write a 32-byte “directory entry” data structure, including file name, attributes, starting cluster number, and file size. This information enables precise positioning of the corresponding file. When storing data in a file, it is sequentially stored in the data area according to the corresponding file information. The content composition of the mSATA drive is shown in Figure 12 below:
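The 32-byte directory entry described above can be sketched as follows (standard FAT32 short-entry layout: an 8.3 name in 11 bytes, the attribute byte at offset 11, the starting cluster split into a high half at offset 20 and a low half at offset 26, and the 32-bit file size at offset 28; time/date fields are left zeroed for brevity):

```python
import struct

# Build a FAT32 short directory entry for a file.
def make_dir_entry(name83, attr, start_cluster, file_size):
    entry = bytearray(32)
    entry[0:11] = name83.ljust(11).encode("ascii")     # 8.3 name, space-padded
    entry[11] = attr                                   # e.g. 0x20 = archive
    struct.pack_into("<H", entry, 20, start_cluster >> 16)      # cluster high
    struct.pack_into("<H", entry, 26, start_cluster & 0xFFFF)   # cluster low
    struct.pack_into("<I", entry, 28, file_size)                # size in bytes
    return bytes(entry)

# Hypothetical entry for the 1 GB test file created later in the paper.
entry = make_dir_entry("TEST1   BIN", 0x20, start_cluster=3, file_size=1 << 30)
```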
The operations of the file system are mainly implemented by converting corresponding logical requests (reading/writing specific LBAs) into read/write operations on the memory-mapped registers of the SATA controller's Application Layer. By writing specific parameters for file system operations, such as the LBA and command values, to the memory-mapped addresses of the Application Layer registers, the SATA controller is driven to perform corresponding operations on the mSATA solid-state drive.
To prevent irreversible damage to file system data caused by sudden system power failures, this system performs backup processing on the overall file system information, file information descriptor structure table, and FAT stored in the reserved area and FAT area. The backup data is stored immediately after the corresponding original data blocks. Each time the system powers on and mounts the file system, it reads the data from the reserved area and FAT area on the hard disk for verification and initialization. If data corruption or unavailability is detected, the backup data will be read and used to replace the damaged original data. The detailed workflow is illustrated in Figure 13.
When the system mounts the file system, it first reads the information from the reserved area and FAT area on the hard disk into the local storage structure and then performs the following checks: starting from the first free cluster recorded in the overall file system information, it traverses the free cluster linked list in the FAT to verify that the number of free clusters matches the record in the overall file system information and that each entry in the linked list is marked as free; it also verifies that the total number of valid files matches the file count recorded in the overall file system information. If all checks pass, the original data on the hard disk is not damaged; the system then overwrites the data in the backup tables with the original data and completes the file system mounting. If any check fails, the original data is deemed corrupted, and the system reads the backup data of the reserved area and FAT area from the hard disk and applies the same verification process to it. If the backup data passes the verification, it overwrites the original data; if the backup data also fails the verification, the file system data on the hard disk is completely damaged, and the hard disk can only be formatted and the file system reinitialized.
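The mount-time decision logic just described reduces to a verify-then-fall-back pattern, sketched below (the consistency test itself, `check`, is supplied by the caller; this is a control-flow illustration, not the embedded implementation):

```python
# Mount-time metadata recovery: prefer the primary copy, fall back to the
# backup, and require a reformat only when both copies fail verification.
def mount_metadata(primary, backup, check):
    if check(primary):
        return primary, "primary_ok_backup_refreshed"
    if check(backup):
        return backup, "backup_restored_over_primary"
    return None, "format_required"
```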

4.4. On-Chip Resource Utilization

The on-chip resource utilization of this design is moderate overall, with no risk of resource bottlenecks. Among the core logic resources, the utilization of look-up tables (LUTs) is 12.82% and that of registers is 5.55%. Among the storage resources, LUTRAM utilization is 6.57% and block RAM (BRAM) utilization is 14.61%. Among the interface and clock resources, input/output (I/O) utilization is 16.00%, high-speed serial transceiver utilization is 12.50%, mixed-mode clock manager (MMCM) utilization is 30.00%, and phase-locked loop (PLL) utilization is 10.00%. Digital signal processor (DSP) utilization is only 0.48%. The full breakdown is shown in Figure 14.

5. Test Results

5.1. Performance Testing

To verify the functionality of the storage system, a simulated signal source was constructed to send incrementing test data to a single drive at a rate of 600 MB/s. Each frame is 256 bytes in total and consists of incrementing data from 00 to FF as the frame header, followed by a 4-byte frame counter and the frame-end marker EB 90 [24]. Meanwhile, the FAT32 file system was configured with a fixed single-file size of 1 GB and a total storage capacity of 100 GB for testing. The DOS Boot Record (DBR) parameters were set as follows: 512 bytes per sector, 64 sectors per cluster, and 32 reserved sectors. After the system was powered on for data storage, the serial-port debugging assistant built into Vitis was used to exercise the storage system. As shown in Figure 15, the storage file TEST1 was created via the serial port before data storage began.
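One possible byte layout consistent with this frame description can be sketched as below. The exact placement of the payload and counter inside the 256-byte frame is an assumption for illustration; the paper fixes only the frame length, the incrementing pattern, the 4-byte counter, and the EB 90 trailer:

```c
#include <stdint.h>
#include <stddef.h>

#define FRAME_LEN 256u

/* Build one 256-byte test frame: 250 incrementing payload bytes, a
 * big-endian 32-bit frame counter, and the EB 90 end-of-frame marker.
 * Field placement is an illustrative assumption, not the exact format. */
static void build_frame(uint8_t frame[FRAME_LEN], uint32_t counter)
{
    for (size_t i = 0; i < FRAME_LEN - 6; i++)
        frame[i] = (uint8_t)i;                       /* incrementing data */
    frame[FRAME_LEN - 6] = (uint8_t)(counter >> 24); /* frame counter     */
    frame[FRAME_LEN - 5] = (uint8_t)(counter >> 16);
    frame[FRAME_LEN - 4] = (uint8_t)(counter >> 8);
    frame[FRAME_LEN - 3] = (uint8_t)(counter);
    frame[FRAME_LEN - 2] = 0xEB;                     /* end-of-frame EB 90 */
    frame[FRAME_LEN - 1] = 0x90;
}

/* Check a read-back frame: payload continuity, counter, and trailer. */
static int check_frame(const uint8_t frame[FRAME_LEN], uint32_t expected)
{
    for (size_t i = 0; i < FRAME_LEN - 6; i++)
        if (frame[i] != (uint8_t)i) return 0;
    uint32_t counter = ((uint32_t)frame[FRAME_LEN - 6] << 24) |
                       ((uint32_t)frame[FRAME_LEN - 5] << 16) |
                       ((uint32_t)frame[FRAME_LEN - 4] << 8)  |
                        (uint32_t)frame[FRAME_LEN - 3];
    if (counter != expected) return 0;
    return frame[FRAME_LEN - 2] == 0xEB && frame[FRAME_LEN - 1] == 0x90;
}
```

A checker of this kind makes the read-back verification in the next step mechanical: any dropped or corrupted frame breaks either the payload continuity or the expected counter sequence.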
After storage was completed, the storage system was connected to a computer and the data was read back over the USB3.0 interface. The read-back data shown in Figure 16 consists of continuously incrementing numbers with no data loss.
The mSATA drive was then removed, recognized on a computer via a converter, and opened in WinHex to inspect its file system structure and boot sector. WinHex's built-in template manager was used to view the DBR parameters in the FAT32 boot sector, as shown in Figure 17. The DBR parameters of the FAT32 file system in this design matched the configured values.
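From the verified DBR parameters, the resulting FAT32 layout figures can be reproduced with a short calculation. The sizing below is a simplification that ignores the small rounding interaction between the FAT size and the data-area size:

```c
#include <stdint.h>

/* Derived FAT32 layout figures for a given DBR parameter set.
 * Simplified sizing: treats everything past the reserved sectors as
 * data area and sizes one FAT copy from the resulting cluster count. */
typedef struct {
    uint32_t bytes_per_cluster;
    uint32_t cluster_count;
    uint32_t fat_sectors;   /* sectors occupied by one FAT copy */
} fat32_layout_t;

static fat32_layout_t fat32_layout(uint64_t part_bytes, uint32_t bytes_per_sec,
                                   uint32_t secs_per_cluster,
                                   uint32_t reserved_secs)
{
    fat32_layout_t L;
    uint64_t data_bytes = part_bytes - (uint64_t)reserved_secs * bytes_per_sec;
    L.bytes_per_cluster = bytes_per_sec * secs_per_cluster;
    L.cluster_count     = (uint32_t)(data_bytes / L.bytes_per_cluster);
    /* each FAT32 entry is 4 bytes; round up to whole sectors */
    L.fat_sectors = (L.cluster_count * 4u + bytes_per_sec - 1u) / bytes_per_sec;
    return L;
}
```

With 512-byte sectors and 64 sectors per cluster, each cluster is 32 KiB; a 100 GiB storage area then holds roughly 3.28 million clusters, and one FAT copy occupies about 12.5 MiB.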

5.2. Single-Disk Read/Write Speed Test

To verify that the system meets its high-rate storage requirements across different data volumes, single-disk read/write speed tests were conducted for varying data volumes; the results are presented in Figure 18 and Figure 19. The system stably maintains a write speed above 420 MB/s and a read speed above 300 MB/s across all tested data volumes.
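When the instantaneous input rate temporarily exceeds the sustained disk write rate, the DDR3 cache must absorb the surplus. The back-of-envelope helper below makes that trade-off explicit; the cache capacity and rates used in the usage note are assumptions for illustration, not measured limits of this system:

```c
/* Seconds a cache of cap_mb megabytes can absorb an input stream whose
 * rate exceeds the sustained drain rate (rates in MB/s).  Returns a
 * negative value when the drain keeps up and the cache never fills. */
static double absorb_seconds(double cap_mb, double in_rate, double out_rate)
{
    double surplus = in_rate - out_rate;   /* net fill rate of the cache */
    return (surplus <= 0.0) ? -1.0 : cap_mb / surplus;
}

/* Megabytes of input that can be accepted before the cache overflows. */
static double max_burst_mb(double cap_mb, double in_rate, double out_rate)
{
    double t = absorb_seconds(cap_mb, in_rate, out_rate);
    return (t < 0.0) ? -1.0 : in_rate * t;
}
```

For example, a hypothetical 1 GiB cache fed at 600 MB/s against a 420 MB/s drain fills in about 5.7 s, accepting roughly 3.4 GB of input; this bounds the burst length a cache of that size can smooth over.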

5.3. High and Low Temperature Test

To verify the environmental adaptability of the storage system, high- and low-temperature tests were conducted to validate its operational performance within the range of −40 °C to 80 °C. As shown in Figure 20, the system was tested in a temperature chamber.
An oscilloscope was used to capture the eye diagram of the differential signals during data transmission and assess the quality of the communication link. Figure 21 and Figure 22 show the signal transmission test results under the low-temperature and high-temperature environments, respectively. At low temperature, the signal peak value reached 708 mV, with a regular waveform, no obvious jitter, and a fully open eye diagram, indicating that the low-temperature condition had almost no impact on channel quality and that long-term stable data transmission is ensured. At high temperature, the signal peak value decreased to 582 mV, which still falls within the range specified by the SATA protocol standard; the high-temperature condition did not significantly degrade signal quality, and the system maintained stable transmission.

6. Conclusions

The FPGA-controlled storage system designed in this paper takes the Kintex-7 series FPGA as the core control unit, realizes data transmission via the SATA3.0 protocol, and ports the FAT32 file system to manage the data stored on rugged mSATA solid-state drives. It also integrates LVDS data reception, DDR3 data caching, and USB3.0/Gigabit Ethernet data read-back, forming a complete high-speed storage solution. Test results show that the system operates stably at a high rate within the temperature range of −40 °C to 80 °C, fully meeting the requirements for consistent acquisition and storage of multi-system, multi-data-type data in flight test scenarios, and thus has significant engineering application value. However, it still has limitations: the FAT32 file system is restricted by the 4 GB single-file size limit and the 2 TB maximum partition capacity, and the data security mechanism relies solely on local backup of key file system structures, with no fault-tolerance design at the hardware level. Future research will focus on upgrading the file system to exFAT, which removes these capacity restrictions in practice, and on introducing a hardware-level RAID-5 redundant storage architecture to strengthen the fault-tolerance mechanism.

Author Contributions

Conceptualization, J.L. and J.B.; methodology, J.L.; software, J.L.; validation, J.L.; formal analysis, J.B.; investigation, S.S.; resources, S.S.; data curation, J.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L.; visualization, S.S.; supervision, S.S.; project administration, S.S.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant number 62405294.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FPGA	Field-Programmable Gate Array
PCM	Pulse Code Modulation
LVDS	Low-Voltage Differential Signaling
SSD	Solid-State Drive
SATA	Serial Advanced Technology Attachment
DDR	Double Data Rate Synchronous Dynamic Random-Access Memory
OOB	Out-Of-Band
CPLL	Clock Phase-Locked Loop
EOF	End of Frame
SOF	Start of Frame
FIS	Frame Information Structure
FSM	Finite State Machine
AXI	Advanced eXtensible Interface
LBA	Logical Block Addressing
LUT	Look-Up Table
MMCM	Mixed-Mode Clock Manager
PLL	Phase-Locked Loop
DSP	Digital Signal Processor
BPB	BIOS Parameter Block
DBR	DOS Boot Record
MBR	Master Boot Record

References

  1. Liu, L.X.; Qiao, N.N.; Tong, Q.; Wang, C.Y.; Ma, K.; Li, S.; Yang, J.M. Design of USB3.0 communication interface based on FPGA. J. Test Meas. Technol. 2021, 35, 261–265. [Google Scholar]
  2. Gorman, C.; Siqueira, P.; Tessier, R. An open-source SATA core for Virtex-4 FPGAs. In 2013 International Conference on Field-Programmable Technology (FPT); IEEE: Piscataway, NJ, USA, 2013; pp. 454–457. [Google Scholar]
  3. Kumar, V.; Anand, G.; Kumar, S.; Vasa, M.; Wallace, D.; Mutnury, B. SAS 4.0 (22.5Gbps) challenges for server platforms. In 2017 IEEE 26th Conference on Electrical Performance of Electronic Packaging and Systems (EPEPS); IEEE: Piscataway, NJ, USA, 2017; pp. 1–3. [Google Scholar]
  4. Meng, Y. Design and Implementation of Multi-Channel Measurement System Based on FPGA and eMMC. Master’s Thesis, North University of China, Taiyuan, China, 2023. [Google Scholar]
  5. Lin, H.; Luo, J.; Li, J.; Sha, Z.; Cai, Z.; Shi, Y.; Liao, J. Fast online reconstruction for SSD-based RAID-5 storage systems. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2024, 43, 1886–1899. [Google Scholar] [CrossRef]
  6. Dutta, P.K.; Singh, S.V.; Nandi, A. Effective selection of completely fair scheduler algorithm in RAID kernel for improved I/O performance using machine learning. In 7th IET Smart Cities Symposium (SCS 2023), Manama, Bahrain, 3–5 December 2023; Hybrid Conference; The Institution of Engineering and Technology: Stevenage, UK, 2023; pp. 97–104. [Google Scholar]
  7. Taveniku, M. High-Speed Data Recorder for Space, Geodesy, and Other High-Speed Recording Applications; NASA Tech Briefs: New York, NY, USA, 2013; Volume 37, pp. 87–92. [Google Scholar]
  8. Li, Y.; Chen, Z.C.; Shen, Y.Y.; Wang, Y.Q. Improved hybrid scrubbing scheme for spaceborne static random access memory-based field programmable gate arrays. J. Eng. 2019, 19, 218–224. [Google Scholar] [CrossRef]
  9. Zhang, H.X.; Lin, Y.K.; Fan, W.T. High-speed and reliable data transmission system based on Gigabit Ethernet. Chin. J. Electron Devices 2023, 46, 927–931. [Google Scholar]
  10. Yan, W.X.; Zhang, H.X.; Kang, C.S. Design of high-speed discrete parameter parallel redundant storage system based on LVDS. Ship Electron. Eng. 2025, 45, 162–165+191. [Google Scholar]
  11. Zhang, W.; Hu, Y.; Ding, R.; Yang, B. Research on high-speed asynchronous serial transmission based on 8b10b. In Proceedings of the International Conference on Applied Informatics and Communication, Xi’an, China, 20–21 August 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 586–592. [Google Scholar]
  12. Chou, C.C.; Weng, S.S.; Lu, Y.C.; Wu, T.L. EMI-reduction coding based on 8b/10b. IEEE Trans. Electromagn. Compat. 2018, 61, 1007–1014. [Google Scholar] [CrossRef]
  13. El Hajji Moine, M.; Mahmoudi Hassane, H.; Labbadi Moussa, M. The electromagnetic interference caused by high voltage power lines along the electrical railway equipment. Int. J. Electr. Comput. Eng. (IJECE) 2020, 10, 15–24. [Google Scholar] [CrossRef]
  14. Bulbul, S.S.; Abduljabbar, A.Z.; Najem, F.D.; Nyangaresi, V.O.; Ma, J.; Aldarwish, A.J. Fast multi-user searchable encryption with forward and backward private access control. J. Sens. Actuator Netw. 2024, 13, 12. [Google Scholar] [CrossRef]
  15. Najafabadi, M.M.; Khoshgoftaar, M.T.; Napolitano, A. Detecting network attacks based on behavioral commonalities. Int. J. Reliab. Qual. Saf. Eng. 2016, 23, 1650005. [Google Scholar] [CrossRef]
  16. Lee, M.; Lim, H.; Yang, Y.; Kim, S. A fast AES hardware security module for internet of things applications. Int. J. Adv. Sci. Eng. Inf. Technol. 2020, 10, 1346–1352. [Google Scholar] [CrossRef]
  17. Li, J.T.; Ren, Y.F.; Yang, Z.W.; Li, H.J. Optimization design of storage system based on SATA3.0. Appl. Electron. Tech. 2021, 47, 86–90. [Google Scholar]
  18. Wang, G.Z.; Liu, L.; Chu, C.Q.; Ren, Y.F.; Jiao, X.Q. Design of high-speed image data transmission system based on USB3.0. Instrum. Tech. Sens. 2019, 3, 106–109+113. [Google Scholar]
  19. Dang, R.Y.; Zhang, H.X.; Zhang, H.; Yan, W.X. Ultra-high-speed multi-mode intelligent storage system based on FT601Q. Instrum. Tech. Sens. 2024, 6, 61–65. [Google Scholar]
  20. Lu, X.L. Design and Implementation of High-Speed and Large-Capacity Storage System Based on FPGA and SATA3.0 Interface. Master’s Thesis, Nanjing University of Posts and Telecommunications, Nanjing, China, 2017. [Google Scholar]
  21. Wang, J.; Zhao, Y.P.; Chi, C. An automated test method for Power Cycle function of SATA solid state drive. Microelectron. Comput. 2021, 38, 40–44. [Google Scholar]
  22. Wang, Z.; Huang, C.P.; Chen, W.W. Design and implementation of SATA image acquisition system based on FPGA. Electron. Meas. Technol. 2025, 48, 26–34. [Google Scholar]
  23. Cheng, X.H. Design and Implementation of Large-Capacity Storage System for SATA Disk Based on FPGA. Master’s Thesis, Xidian University, Xi’an, China, 2019. [Google Scholar]
  24. Li, L.; Wu, F.; Yang, H.X.; Ma, Y.H.; He, B. Design of small-size embedded high-speed storage system based on FPGA. Microcontrollers Embed. Syst. 2022, 22, 25–28. [Google Scholar]
Figure 1. Overall System Framework.
Figure 2. Power Management System.
Figure 3. SATA Hardware Interface Circuit.
Figure 4. DDR3 Interface Circuit.
Figure 5. Overall Logic Framework.
Figure 6. SATA Protocol Hierarchical Structure.
Figure 7. SATA Controller Block Diagram.
Figure 8. OOB Controller State Machine.
Figure 9. Data Transmission Control State Machine.
Figure 10. Data Reception Control State Machine.
Figure 11. Block Design.
Figure 12. mSATA Drive Content Composition.
Figure 13. File System Mounting Process.
Figure 14. On-chip Resource Utilization.
Figure 15. Create Storage File.
Figure 16. Read Back Data.
Figure 17. mSATA Drive Identification Parameters.
Figure 18. Single-disk Write Speed.
Figure 19. Single-disk Read Speed.
Figure 20. Wide Temperature Range Test. (a) Low Temperature Test. (b) High Temperature Test.
Figure 21. Low Temperature Test Chart.
Figure 22. High Temperature Test Chart.
Table 1. FIS Type Number.

Type Number	Description
27h	The host transmits the contents of the shadow register block to the device (Register FIS, host to device)
34h	The device transmits the contents of the shadow register block to the host (Register FIS, device to host)
39h	The device's response activating the host's requested DMA data transfer (DMA Activate FIS)
41h	Notifies the other party to update DMA operation parameters (DMA Setup FIS)
46h	Transmits data (Data FIS)

Lu, J.; Bai, J.; Shen, S. Design and Implementation of a High-Speed Storage System Based on SATA Interface. Electronics 2026, 15, 452. https://doi.org/10.3390/electronics15020452