
Performance Evaluation and Aggregation Curve Modelling for Multimedia Services in the GPON

1 Department of Telecommunications, Faculty of Electrical Engineering and Computer Science, VSB—Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Ostrava, Czech Republic
2 CESNET, a.l.e., Zikova 1903/4, 160 00 Prague, Czech Republic
* Author to whom correspondence should be addressed.
Photonics 2026, 13(2), 196; https://doi.org/10.3390/photonics13020196
Submission received: 12 January 2026 / Revised: 10 February 2026 / Accepted: 14 February 2026 / Published: 16 February 2026
(This article belongs to the Section Optical Communication and Network)

Abstract

With the increasing number of end users consuming multimedia services, the demand for high-bitrate access network systems with sufficient quality of service is also increasing. However, this cannot always be guaranteed by telecom operators, who must optimize their networks with respect to Quality of Service (QoS) and multimedia data transmission. In this work, we tested Gigabit Passive Optical Network (GPON) performance with the help of various tools (iPerf, RFC 6349, and ITU-T Y.1564). The Grafana software tool v7.3.3 was used to monitor data streams. Measurements were performed with the downstream bitrate of up to 20 end users limited to 1 Gbit/s, 500 Mbit/s, 300 Mbit/s, and 100 Mbit/s. Based on repeated measurements, an aggregation curve was modelled, indicating the available bitrate with respect to the network load.

1. Introduction

This work focuses on the measurement of performance parameters in optical networks [1], with particular emphasis on transmission characteristics, Quality of Service aspects [2,3,4,5], and the analysis and understanding of aggregation curves under various network configurations and traffic loads. Performance testing in Internet networks is essential for verifying that a network delivers the declared speed, stability, and quality of service under real traffic conditions. Such testing helps to identify bottlenecks, shared-capacity congestion, and configuration errors, and it plays a key role in capacity planning and customer troubleshooting. Commonly used methodologies include RFC 6349, which focuses on measuring TCP performance and takes transport protocol behavior into account (latency, packet loss, flow control), and ITU-T Y.1564, which enables comprehensive multi-service testing and verification of Service Level Agreement (SLA) compliance using parameters such as throughput, latency, jitter, and packet loss.
Optical access networks denoted as FTTx form a key part of next-generation access (NGA) technologies, enabling high-capacity, low-latency broadband connectivity through the deployment of optical fiber in the access segment. By extending fiber closer to end users, FTTx networks provide higher transmission rates, lower energy consumption, and greater immunity to electromagnetic interference compared to traditional coaxial solutions. Their scalable architecture, indicated by the flexible termination point “x”, supports increasing bandwidth demands driven by cloud services, smart-city applications, and 5G technologies, while allowing future network evolution [6].
The presented measurements were performed on a GPON network, standardized by ITU-T and widely deployed in FTTH architectures. GPON supports triple-play services (data, voice, and video) and operates with downstream wavelengths of 1480–1500 nm and upstream wavelengths of 1260–1360 nm, with optional RF video transmission in the 1550–1560 nm band. The use of Forward Error Correction (FEC) improves the link budget by approximately 3–4 dB, enabling longer transmission distances and higher split ratios. Bandwidth allocation and QoS are ensured through T-CONTs, Dynamic Bandwidth Allocation (DBA), and GPON Encapsulation Mode (GEM). Recent studies indicate that Software-Defined Networking (SDN)-based approaches using OpenFlow can further enhance GPON flexibility by dynamically adapting service policies to QoS requirements [7,8,9,10,11,12,13,14,15,16,17].
The possibilities of GPON reconfiguration enabled by OpenFlow [17] allow key service configuration policies to be migrated to an external SDN controller. This SDN management layer dynamically adapts bandwidth allocation and QoS strategies according to the requirements of residential and enterprise users.
It ensures that the bandwidth corresponding to a given priority can be provided in both directions. QoS can be implemented on each ONT port within the GPON network [18]. In addition, statistical methods can be applied to more effectively optimize the performance, reliability, and scalability of optical networks in an era of rapidly increasing data demand [19].
This article also deals with the aggregation of data flows, which is an essential process when collecting data from individual users toward the backbone network and further to the Internet. It enables efficient network operation and construction by concentrating traffic, since the network is not dimensioned according to the sum of all users’ maximum bitrates but according to their average consumption. A key parameter is the so-called aggregation (concentration) ratio, which expresses how many users share the capacity of a given access point. The value of the aggregation ratio depends on the type of data flows (burst vs. continuous), the number of active users, their behavior (e.g., short transfers typical for web browsing), and the number of data sources. For example, an aggregation ratio of 1:20 means that the bandwidth is shared among 20 users. If the maximum link bitrate is 1000 Mbit/s, the real bitrate for an individual user may range between 50 and 1000 Mbit/s, depending on network load [20,21].
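To make the shared-capacity arithmetic concrete, the following minimal Python sketch estimates the fair-share bitrate per active user at an aggregation point. It is illustrative only; the link capacity, per-user limit, and user counts are assumed values, not figures taken from the measurements in this work.

```python
def fair_share_bitrate(link_capacity_mbps: float, active_users: int,
                       per_user_limit_mbps: float) -> float:
    """Estimate the bitrate one user can expect when a link is shared.

    The link capacity is divided equally among the active users, but no
    user receives more than the configured per-user limit.
    """
    if active_users == 0:
        return 0.0
    equal_share = link_capacity_mbps / active_users
    return min(equal_share, per_user_limit_mbps)

# Example: 1000 Mbit/s aggregation link, 1:20 ratio, 300 Mbit/s per-user limit.
for n in (1, 2, 4, 10, 20):
    print(f"{n:2d} active users -> ~{fair_share_bitrate(1000, n, 300):.0f} Mbit/s each")
```

In practice the split is not perfectly equal, which is exactly the stochastic behaviour examined later with the aggregation curves.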
This article describes a network performance measurement carried out using four different methods.
1. The first method employed the RFC 6349 [22] methodology to measure TCP throughput (bitrate) in the downstream direction.
2. The second method was based on the ITU-T Y.1564 [23] standard, where the UDP protocol is used to determine network latency and jitter.
These two methods are considered reference methods for measuring transmission parameters in a network, as they follow the guidelines of BEREC BoR 112: Implementation of the Open Internet Regulation, which relates to critical network parameters and Quality of Service, such as transmission speed, latency, delay variation (jitter), and packet loss.
3. The third approach used the iPerf tool to measure TCP throughput. All three of these measurements were always performed on a single ONT (Optical Network Termination), while the other ONTs were used to download a large data file from a server located in the laboratory.
4. The fourth method consisted of monitoring the data throughput on active ONT interfaces.
The main contribution of the paper is a comprehensive experimental evaluation of GPON network behavior under aggregated multimedia traffic and varying end-user load, complemented by the design and validation of aggregation curves. Based on real laboratory measurements, several commonly used performance testing methods (RFC 6349, ITU-T Y.1564, iPerf, and monitoring using Grafana) were compared, and their applicability under conditions of shared capacity and high load was analyzed. The paper demonstrates the limitations of active testing methods in a fully loaded GPON network and shows that passive traffic monitoring provides the most realistic view of the actually achievable throughput. The result is a set of practically applicable aggregation curves for different maximum per-user data rates, which can serve as a basis for access network dimensioning, QoS optimization, and capacity planning by operators.
The remainder of this article is organized as follows. Section 2 describes the experimental network setup and key parameters of the optical infrastructure. Section 3 introduces the applied measurement methodologies. The measured results are analyzed in Section 4, followed by a discussion in Section 5. Finally, Section 6 summarizes the main findings and conclusions of the study.

2. System Architecture

All presented measurements were carried out on a GPON network with a total length of 25.2 km. As illustrated in Figure 1, the network was split between two separate laboratories using a two-stage split. In the first laboratory, measurements on ten optical network terminals were performed using splitters with ratios of 1:2 and subsequently 1:16. From this laboratory, a connection led through a 1:2 splitter to the second laboratory, where measurements on another ten terminals were conducted using a 1:32 splitter. All network devices, including the OLT, the L3 switch (operating at the third layer of the ISO/OSI model), and the LAN server, were located in the first laboratory. Both laboratories were located within the same building. The local network server was implemented on a computer running the XAMPP 3.3.0 software package, including the Apache web server. The server machine was equipped with an Intel® Core™ i7-3930K processor clocked at 3.20 GHz and 16 GB of RAM, and it ran the 64-bit version of the Windows 10 Pro operating system.
The optical network had a total length of 25.2 km and was built using G.652.D fiber. The integrity of the entire network was verified. The measured attenuation values were 14.65 dB at a wavelength of 1310 nm, 10.55 dB at 1550 nm, and 11.37 dB at 1625 nm. The chromatic dispersion (CD) coefficient was 16.844 ps/(nm·km), and the polarization mode dispersion (PMD) coefficient was 0.0566 ps/√km. The experimental network was installed under laboratory conditions. In the first laboratory, the connection was implemented using a 1:2 splitter (insertion loss of 3.68 dB at 1310 nm), followed by a 1:16 splitter (IL = 13.35 dB at 1310 nm), where measurements were carried out on ten terminal units. From this laboratory, an optical link led through a 1:2 splitter to the second laboratory, where a 1:32 splitter (IL = 16.25 dB at 1310 nm) was installed; here, measurements on another ten terminal units were performed. Regarding the interconnections between the individual active elements, the interfaces between the LAN server and the MikroTik L3 switch had a throughput of 1 Gbit/s; the interfaces between the FTB measurement device and the L3 switch also operated at 1 Gbit/s; on the OLT, the downstream interface had a throughput of 2.4 Gbit/s, while the upstream interface operated at 1.2 Gbit/s. All ONTs were equipped with interfaces supporting 1 Gbit/s.

3. Measuring Methods of Performance Testing

As part of the experiment, four different methods were used to measure the performance parameters of the optical network. The first method was based on the RFC 6349 [22] recommendation (Framework for TCP Throughput Testing), which specifies the procedure for evaluating performance at the transport layer (Layer 4) of the ISO/OSI reference model. The method uses the characteristics of the TCP protocol—particularly data acknowledgment and flow control—and enables the calculation of both theoretical and practical TCP throughput values. The practical value, referred to as the TCP Actual Throughput Rate (TCPaTR), is calculated using the following formula:
$$\mathrm{TCP_{aTR}} = \frac{\mathrm{TCP\ RWND} \cdot 8}{\mathrm{Delay(avg)}} \quad [\mathrm{b/s};\ \mathrm{B},\ \mathrm{s}],$$
where TCP RWND denotes the size of the TCP receive window used for acknowledging packets, indicating how much data the sender can transmit without having to wait for an acknowledgment from the receiver. Delay(avg) represents the average time it takes for a packet to reach the other side and for the acknowledgment to return.
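As a worked illustration of this formula, the short sketch below computes the TCP Actual Throughput Rate from an assumed receive-window size and average round-trip delay; the numeric values are hypothetical, not taken from the measurements.

```python
def tcp_actual_throughput(rwnd_bytes: int, delay_avg_s: float) -> float:
    """TCP Actual Throughput Rate (TCPaTR) in bit/s per RFC 6349:
    the receive window expressed in bits divided by the average delay."""
    return (rwnd_bytes * 8) / delay_avg_s

# Example: 64 KiB receive window and 5 ms average delay.
print(tcp_actual_throughput(65536, 0.005))  # 104857600 bit/s, i.e. ~104.9 Mbit/s
```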
Another important parameter is TCP Efficiency (TCP EFF), which quantifies the percentage of bits successfully transmitted without requiring retransmission. This metric provides an overall indication of the reliability and error performance of a TCP connection. TCP transfer efficiency can be calculated using the following formula:
$$\mathrm{TCP_{EFF}} = \frac{TB - rTB}{TB} \quad [\%;\ \mathrm{b},\ \mathrm{b}],$$
where TB denotes the number of transmitted bits and rTB denotes the number of bits that had to be retransmitted after a detected error.
Buffer Delay (BD) represents the relationship between the increase in the average delay, Delay(avg), during a given measurement process and the baseline delay, Delay(baseline), established prior to the start of the test. The resulting BD value can be defined as follows:
$$BD = \frac{\mathrm{Delay(avg)} - \mathrm{Delay(baseline)}}{\mathrm{Delay(baseline)}} \quad [\%;\ \mathrm{s},\ \mathrm{s}].$$
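The two remaining RFC 6349 metrics can be evaluated in the same way. The sketch below uses hypothetical counter and delay values purely to show the arithmetic; the conversion to percent is made explicit in the code.

```python
def tcp_efficiency(transmitted_bits: int, retransmitted_bits: int) -> float:
    """TCP Efficiency (%): share of bits delivered without retransmission."""
    return (transmitted_bits - retransmitted_bits) / transmitted_bits * 100

def buffer_delay(delay_avg_s: float, delay_baseline_s: float) -> float:
    """Buffer Delay (%): relative increase of the average delay during the
    test over the baseline delay established before the test."""
    return (delay_avg_s - delay_baseline_s) / delay_baseline_s * 100

print(tcp_efficiency(1_000_000_000, 15_000_000))  # 98.5 %
print(buffer_delay(0.060, 0.005))                 # 1100 %, a congested-network value
```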
Measurements using this method were carried out on an EXFO FTB-1v2 PRO device with NetBlazer Series 2.139 software.
The second method used the iPerf3 (version 3.1.3) tool, which is a widely used software tool for evaluating data network performance, particularly in terms of throughput, jitter, and packet loss between two network nodes. It is based on a client–server architecture, where the server listens on a predefined port and the client initiates a test session by generating synthetic network traffic. The tool supports performance measurements using both TCP and UDP. In TCP mode, the achievable throughput is measured while being influenced by flow control and congestion control mechanisms, providing a realistic estimate of application-level transmission capacity. In UDP mode, data are transmitted at a constant rate without delivery feedback, enabling the evaluation of packet loss and jitter, which are essential for assessing real-time transmissions. During testing, iPerf3 periodically evaluates transmission statistics and reports aggregated metrics, such as average throughput and total transmitted data. The results reflect end-to-end network performance and may be affected by device computational capabilities, operating system configuration, and current network load [24].
The third method used was Grafana (see Figure 2), an open-source application designed for real-time data visualization. In combination with the InfluxDB database, it enabled the monitoring and plotting of the data throughput across individual OLT interfaces, which provide detailed throughput statistics. This method was primarily used for long-term traffic monitoring.
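Returning to the iPerf3 measurements described above, a typical client invocation can be scripted as in the minimal Python sketch below. The server address is a placeholder, and the test duration and UDP rate are assumptions for illustration, not the exact settings used in this work.

```python
import subprocess

SERVER = "192.168.1.10"  # placeholder address of the host running "iperf3 -s"

# TCP mode: throughput shaped by TCP flow control and congestion control.
tcp = subprocess.run(["iperf3", "-c", SERVER, "-t", "30"],
                     capture_output=True, text=True, check=True)
print(tcp.stdout)

# UDP mode: constant offered rate; the report includes jitter and packet loss.
udp = subprocess.run(["iperf3", "-c", SERVER, "-u", "-b", "300M", "-t", "30"],
                     capture_output=True, text=True, check=True)
print(udp.stdout)
```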
The final method was based on the ITU-T Y.1564 recommendation (Ethernet Service Activation Test Methodology), again carried out on the EXFO FTB-1v2 PRO device. This specification is suitable, for example, for testing the readiness of NGA networks for network sharing (wholesale service delivery) as well as for testing end-user services. The method enables a comprehensive evaluation of key network parameters such as throughput, load, frame loss and frame errors, latency, jitter, and others. The test is performed in two steps. The ramp test is used to identify the threshold values of CIR (Committed Information Rate) and EIR (Excess Information Rate), as shown in Figure 3.
The service performance test then evaluates KPI (Key Performance Indicator) parameters for all services simultaneously, generating parallel data streams according to the configured parameters (e.g., frame size, data rate). The measured values on the second layer of the ISO/OSI model were subsequently assessed according to the MEF 23.2 standard issued by the Metro Ethernet Forum, which defines service quality levels (High, Medium, Low) depending on distance and the strictness of the required criteria. The High designation is intended for applications that are highly sensitive to loss, delay, and delay variation, such as VoIP services. The Medium designation is intended for applications that are sensitive to losses but more tolerant of delay and delay variation, such as near real-time applications or critical data applications. The Low designation is intended for applications that are more tolerant of loss, as well as delay and delay variation, such as non-critical data applications. The entire measurement process was repeated three times. The measured values from all methods (RFC 6349, ITU-T Y.1564, and iPerf) were averaged, recorded in tables, and further processed into graphical form. To monitor network traffic, the Grafana tool was used, which recorded the average data bitrate over time on all active ONT interfaces. The median value was then calculated from these measurements, and the results were presented in the form of aggregation curves.
For a better understanding of the overall procedure, Figure 4 shows the testing diagram.

4. Measured Results

4.1. Results from RFC 6349 and iPerf

From Figure 5, Figure 6, Figure 7 and Figure 8, it is evident that this method provides reliable results, primarily under network conditions without significant load. However, once the network becomes heavily loaded and the available transmission capacity is shared among multiple users, the buffer delay begins to increase dramatically—often exceeding 1000%—while very low throughput values are recorded. To illustrate this behavior in more detail, figures with measured results were added. First, Figure 9 shows the network load generated by a single end user. Second, Figure 10 shows the network load with three users. Third, Figure 11 shows that the measured throughput drops to 95.3 Mb/s and the buffer delay exceeds 244%, even though only four users are downloading data. Finally, Figure 12 shows a throughput of 9.4 Mb/s and a buffer delay exceeding 1109% when the network is loaded by ten users. As mentioned in Section 3, it is important to understand that RFC 6349 does not directly measure throughput, but instead calculates it based on the TCP window size and the measured network delay. One possible reason for the high buffer delay and the low calculated throughput may be increased latency caused by network congestion or insufficient buffering capacity in some active network elements.
From the values measured using the iPerf tool, it is evident that this method exhibits significant limitations for measuring TCP throughput under fully loaded network conditions. This method provides meaningful results only when there is no competition for transmission capacity in the network—meaning when the capacity is not being dynamically shared among multiple users.

4.2. Results from Grafana

Using Grafana, traffic on all active ONT interfaces was monitored, and the average bitrates during data downloads from the server were recorded. The measurement was carried out three times, and the results are presented in box plots in Figure 13, Figure 14, Figure 15 and Figure 16. A box plot is a graphical method used for the descriptive statistical analysis of data distributions. It provides a compact summary of a dataset by visualizing its central tendency, dispersion, and potential outliers. Specifically, a box plot represents the median, the lower and upper quartiles, and the interquartile range, which characterizes the variability of the data, while observations outside the typical range are identified as outliers. Boxplots are particularly useful for comparing the distributions of a variable across multiple groups, assessing variability and skewness of the data, or identifying outliers or anomalous behavior in measured values.
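A minimal sketch of how such box plots can be produced from recorded per-ONT bitrates is shown below; the sample values are invented solely to illustrate the plotting step and are not measured data.

```python
import matplotlib.pyplot as plt

# Invented download bitrates (Mbit/s) for three repeated measurement runs.
runs = {
    "Run 1": [240, 255, 260, 270, 310],
    "Run 2": [230, 250, 265, 280, 295],
    "Run 3": [245, 252, 258, 275, 400],  # 400 lies outside the whiskers (outlier)
}

fig, ax = plt.subplots()
ax.boxplot(list(runs.values()), labels=list(runs.keys()), showfliers=True)
ax.set_ylabel("Download bitrate [Mbit/s]")
ax.set_title("Per-ONT download bitrates (illustrative data)")
plt.show()
```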
The total capacity of the transmission path is 1 Gbit/s, and it is important to note that once the combined bitrates of all active end devices exceed this limit, the available capacity begins to be shared and contested among them. As previously shown in the graphs in Figure 13, Figure 14, Figure 15 and Figure 16, the bitrate values vary significantly, especially in situations where such sharing occurs. Outliers also appear, indicating that some devices achieved noticeably higher bitrates than others within the measured interval. This confirms that the bandwidth allocation occurred dynamically and that individual end devices behaved stochastically. The observed outliers indicate stochastic bandwidth allocation effects inherent to shared-capacity access networks. The measurement results were also influenced by the different technical configurations of the computers used. As shown in Table 1, half were running with Windows 10, while the other half used Ubuntu 18.04. Factors such as the lower computing performance of network devices, outdated hardware, or slower caches could also have contributed to the differences in speeds. Based on these measurements, aggregation curves were constructed and are shown in Figure 17. The measured throughput values correspond to deliberately configured network states with different numbers of active ONTs and therefore do not represent independent random samples. The observed throughput variation is primarily driven by deterministic changes in aggregation load rather than by random measurement noise, making classical statistical measures such as confidence intervals for the mean inappropriate. Regression analysis is thus employed to model the functional relationship between the number of active ONTs and the measured throughput at the first ONT. A hyperbolic regression model is used to capture the expected bandwidth-sharing behavior, providing a continuous and physically interpretable performance characterization of the system [25].
For clarity, Table 2 provides the coefficients of determination, expressing how well the regression curves fit the measured data. The closer the coefficient of determination is to 1, the better the curve represents the data.
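A sketch of the hyperbolic (power-law) fit and the corresponding coefficient of determination is given below. It assumes data of the form throughput versus number of active ONTs and uses SciPy's curve_fit; the data points are illustrative, and this is only one possible way to obtain curves such as those listed in Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(x, a, b):
    """Power-law model y = a * x**(-b) describing bandwidth sharing."""
    return a * x ** (-b)

# Illustrative data: number of active ONTs vs. median throughput (Mbit/s).
onts = np.array([1, 2, 4, 8, 12, 16, 20])
throughput = np.array([950, 520, 270, 140, 95, 70, 55])

(a, b), _ = curve_fit(hyperbolic, onts, throughput, p0=(1000, 1.0))

# Coefficient of determination R^2 of the fitted curve.
residuals = throughput - hyperbolic(onts, a, b)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((throughput - throughput.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"y = {a:.1f} * x^(-{b:.3f}),  R^2 = {r_squared:.4f}")
```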

4.3. Results from ITU-T Y.1564

According to the MEF 23.2 (Metro Ethernet Forum) specification [26], the Class of Service (CoS) parameter is divided into three levels: High (H), Medium (M), and Low (L). These levels differ in their performance requirements, as shown in Table 3. The High level is intended for applications that are highly sensitive to packet loss, delay, and jitter—typical examples include VoIP. The Medium level is suitable for applications that are sensitive to data loss but can tolerate delay and delay variation better, such as near-real-time applications or bandwidth-intensive data transfers. The Low level is designed for applications that are not significantly affected by loss or delay, such as common, less critical data transmissions.
Figure 18, Figure 19, Figure 20 and Figure 21 show the measured latency and jitter values at all four bitrates.
In our measurements, the traffic corresponds to the Low CoS class, which matches the specification for standard data transmission. The graphs show that the measured latency values do not meet the requirements of the MEF 23.2 standard for metropolitan networks up to 250 km, which defines a maximum allowable delay of 37 ms. When the bitrate was limited to 100 Mbit/s per user (Figure 18), latency ranged from 106 to 145 ms; with a limit of 300 Mbit/s (Figure 19), it fluctuated between 68 and 120 ms; at the 500 Mbit/s limit (Figure 20), it ranged between approximately 100 and 120 ms; and when increased to 1000 Mbit/s (Figure 21), latency values were approximately 64 ms. On the other hand, the jitter values remained within the MEF 23.2 specification. The elevated latency values can likely be attributed to higher network load or lower-quality buffering in some of the active network components. Another reason is that some active elements along the path convert signals from electrical to optical and vice versa, which also affects the measured parameters.
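To make the comparison against MEF 23.2 explicit, the sketch below checks a measured latency/jitter pair against the class limits listed in Table 3. The helper function is illustrative, and the sample values only roughly follow the 100 Mbit/s case reported above.

```python
# MEF 23.2 limits used in this work (see Table 3).
MEF_23_2_LIMITS = {
    "H": {"latency_ms": 10, "jitter_ms": 5},
    "M": {"latency_ms": 20, "jitter_ms": 10},
    "L": {"latency_ms": 37, "jitter_ms": 16},
}

def meets_cos(latency_ms: float, jitter_ms: float, cos: str) -> bool:
    """Check whether a measured latency/jitter pair satisfies a CoS class."""
    limit = MEF_23_2_LIMITS[cos]
    return latency_ms <= limit["latency_ms"] and jitter_ms <= limit["jitter_ms"]

# Roughly the 100 Mbit/s per-user case: latency well above the Low-class limit.
print(meets_cos(latency_ms=120, jitter_ms=3, cos="L"))  # False: latency fails
```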

5. Discussion

This work has confirmed the presence of aggregation in a network in practice. Aggregation can occur at many points within a network, such as on the public Internet, in the operator's backbone network, at the aggregation (concentration) point, or within the local user network. Due to the variable behavior of users and data flows, traffic spikes may occur that certain parts of the network are unable to handle. This leads to collisions, buffer overflows, and, ultimately, packet loss. To prevent uncontrolled packet loss (so-called uncontrolled aggregation), controlled flow limiting is performed at the so-called aggregation point (controlled aggregation). Theoretically, traffic aggregation should work such that when two users operate simultaneously, the capacity is redistributed between them. In practice, however, one must account for imperfections in the environment, typically lower-performance or outdated user devices or insufficient buffer capacity of network elements. It is also important to recognize that, for example, when downloading files from the Internet, the path is often "long," passing through many active elements, and the files themselves may come from a source with limitations unknown to the end user. Another contributing factor may arise when a network is dimensioned for "normal" usage conditions, but suddenly many people are forced to work from home, potentially overloading the network. Regarding the RFC 6349 and ITU-T Y.1564 testing methods, proper window size and buffer capacity settings are necessary for TCP tests. The iPerf tool is particularly suitable for measuring throughput in unloaded networks, but it cannot be recommended for testing under more demanding operational conditions.

6. Conclusions

The downstream bitrate on the tested network was monitored using the Grafana tool, which collected data from the active input interface of each ONT. Grafana proved to be the most accurate method for evaluating the results, as it allows real-time monitoring of bitrates. Based on these measured values, aggregation curves were created for various maximum per-user bitrate limits (1 Gbit/s, 500 Mbit/s, 300 Mbit/s, and 100 Mbit/s), with aggregation ratios ranging from 1:2 to 1:20. These ratios were determined by the number of available end units, which was limited to twenty due to equipment constraints.
Another method used was measurement according to the RFC 6349 specification, which serves as a reference method for evaluating transmission parameters at the TCP transport layer. However, in a fully loaded network, it became apparent that this method is suitable only until the transfer capacity begins to be redistributed among multiple users. In such cases, the measured bitrates dropped dramatically, often nearly to zero. The reason is that RFC 6349 does not directly measure the bitrate but calculates it from the size of the TCP receive window (RWND) and the network delay.
Latency and jitter were further measured using the ITU-T Y.1564 methodology. The results showed that latency exceeded the values specified by the MEF 23.2 standard, which may be caused, for example, by insufficient buffering performance of some devices in the network. The final measurement was carried out using the iPerf tool version 3.1.3 for Windows. The issue observed with RFC 6349 occurred again here as well—when the network was loaded, and capacity had to be shared among multiple users, the measured speeds dropped significantly. Therefore, iPerf appears suitable only for measuring throughput in unloaded networks and cannot be recommended for testing under more demanding operational conditions [27].

Author Contributions

Conceptualization, K.T. and J.L.; methodology, K.T. and J.L.; validation, K.T., J.L. and P.Š.; formal analysis, K.T. and J.L.; investigation, K.T., J.L., J.Š. and P.Š.; resources, K.T. and J.L.; data curation, K.T.; writing—original draft preparation, K.T., J.L., J.Š. and P.Š.; writing—review and editing, J.L., P.Š. and J.V.; visualization, K.T.; supervision, J.L.; project administration, K.T.; funding acquisition, J.L., J.N. and J.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was carried out as part of CESNET 743/2023 and SP2026/028. Also, the paper was supported by the e-Infra LM2023054 project.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank Martin Pustka and his team for their cooperation.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AABR      Actually Achieved Bit Rate
BD        Buffer Delay
CD        Chromatic Dispersion
CIR       Committed Information Rate
CoS       Class of Service
DBA       Dynamic Bandwidth Allocation
EIR       Excess Information Rate
FEC       Forward Error Correction
FTTH      Fiber To The Home
GEM       GPON Encapsulation Mode
GPON      Gigabit Passive Optical Network
IL        Insertion Loss
ISO/OSI   International Organization for Standardization/Open Systems Interconnection
ITU-T     International Telecommunication Union-Telecommunication Standardization Sector
KPI       Key Performance Indicator
L3        Layer 3
LAN       Local Area Network
MEF       Metro Ethernet Forum
NGA       Next Generation Access
OLT       Optical Line Terminal
ONT       Optical Network Terminal
PMD       Polarization Mode Dispersion
QoS       Quality of Service
RFC       Request For Comments
SDN       Software-Defined Networking
SLA       Service Level Agreement
TCP       Transmission Control Protocol
TCPaTR    TCP Actual Throughput Rate
TCP EFF   TCP Efficiency
TCP RWND  Transmission Control Protocol Receive Window
UDP       User Datagram Protocol

References

  1. Cucka, M.; Grenar, D.; Frolka, J.; Vavra, J.; Slavicek, K.; Kyselak, M. Simulation and Measurement of Optical Networks 10 and 100 Gb/s. In Proceedings of the International Conference on New Trends in Signal Processing (NTSP 2022), Demanovska Dolina, Slovakia, 12–14 October 2022. [Google Scholar] [CrossRef]
  2. Lipenbergs, E.; Smirnova, I.; Stafecka, A.; Ivanovs, G.; Gavars, P. Quality of service parameter measurements data analysis in the scope of net neutrality. In Proceedings of the 2017 Progress in Electromagnetics Research Symposium—Fall (PIERS-FALL), Singapore, 19–22 November 2017; pp. 1230–1234. [Google Scholar] [CrossRef]
  3. Smirnova, I.; Lipenbergs, E.; Bobrovs, V.; Ivanovs, G. The Analysis of the Impact of Measurement Reference Points in the Assessment of Internet Access Service Quality. In Proceedings of the 2019 Photonics & Electromagnetics Research Symposium-Fall (PIERS-Fall), Xiamen, China, 17–20 December 2019; pp. 2972–2977. [Google Scholar] [CrossRef]
  4. Vagale, I.; Lipenbergs, E.; Bobrovs, V.; Ivanovs, G. Development of internet measurement principles for representation of measured provision of service (QoS-2). J. Inf. Telecommun. 2021, 5, 267–277. [Google Scholar] [CrossRef]
  5. Vermeulen, B.; Wellen, J.; Geilhardt, F.; Weis, E.; Mas, C.; Dhoedt, B.; Demeester, P. End-to-end QoS resource management for an IP-based DWDM access network. J. Light. Technol. 2004, 22, 2592–2605. [Google Scholar] [CrossRef]
  6. Larry. Comprehensive Understanding of FTTx Network. FS.com Blog, 17 December 2014. Available online: https://www.fs.com/blog/comprehensive-understanding-of-fttx-network-8176.html (accessed on 12 February 2026).
  7. Latal, J.; Wilcek, Z.; Kolar, J.; Vojtech, J.; Šarlej, F. Measurement of performance parameters of multimedia services on a hybrid access network xPON/xDSL. In Proceedings of the International Conference on Transparent Optical Networks, Angers, France, 9–13 July 2019; Volume 2019. [Google Scholar] [CrossRef]
  8. PROMAX. What Do PON, GPON, XG-PON, 10G-EPON Stand for? Which Analyzers Are Compatible with Them? March 2019. Available online: https://www.promaxelectronics.com/ing/news/562/what-do-pon-gpon-xg-pon-10g-epon-stand-for-which-analyzers-are-compatible-with-them (accessed on 12 February 2026).
  9. Selmanovic, F.; Skaljo, E. GPON in Telecommunication Network. In Proceedings of the International Congress on Ultra Modern Telecommunications and Control Systems, Moscow, Russia, 18–20 October 2010; pp. 1012–1016. [Google Scholar] [CrossRef]
  10. Horvath, T.; Munster, P.; Oujezsky, V.; Bao, N.-H. Passive Optical Networks Progress: A Tutorial. Electronics 2020, 9, 1081. [Google Scholar] [CrossRef]
  11. Ubaidillah, A.; Alfita, R.; Toyyibah. Link Power Budget and Traffict QoS Performance Analysis of Gygabit Passive Optical Network. J. Phys. Conf. Ser. 2018, 953, 012129. [Google Scholar] [CrossRef]
  12. Maheswaravenkatesh, P.; Raja, A.S. A QoS-Aware Dynamic Bandwidth Allocation in PON Networks. Wirel. Pers. Commun. 2017, 94, 2499–2512. [Google Scholar] [CrossRef]
  13. Sales, V.; Segarra, J.; Prat, J. An efficient dynamic bandwidth allocation for GPON long-reach extension systems. Opt. Switch. Netw. 2014, 14, 69–77. [Google Scholar] [CrossRef]
  14. Radivojević, M.; Matavulj, P. The way toward truly QoS-aware EPON. Photon. Netw. Commun. 2024, 47, 125–138. [Google Scholar] [CrossRef]
  15. Memon, K.A.; Jaffer, S.S.; Qureshi, M.A.; Qureshi, K.K. Dynamic bandwidth allocation in time division multiplexed passive optical networks: A dual-standard analysis of ITU-T and IEEE standard algorithms. PeerJ Comput. Sci. 2025, 11, e2863. [Google Scholar] [CrossRef] [PubMed]
  16. Tasneem, N.; Hossen, M. Improving QoS of Peer to Peer Multimedia Services by Employing Multiple Upstream Wavelengths in EPON. In Proceedings of the 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 5–7 June 2020; pp. 1090–1093. [Google Scholar] [CrossRef]
  17. Rafiq, A.; Hayat, M.F. QoS-Based DWBA Algorithm for NG-EPON. Electronics 2019, 8, 230. [Google Scholar] [CrossRef]
  18. Cale, I.; Salihovic, A.; Ivekovic, M. Gigabit Passive Optical Network-GPON. In Proceedings of the 29th International Conference on Information Technology Interfaces (ITI), Cavtat, Croatia, 25–28 June 2007; pp. 679–684. [Google Scholar] [CrossRef]
  19. Routray, S.K.; Sahin, G.; da Rocha, J.R.F.; Pinto, A.N. Statistical Analysis and Modeling for Optical Networks. Electronics 2025, 14, 2950. [Google Scholar] [CrossRef]
  20. Rejzek, J. Aggregation of Internet Connections Is an Unnecessary Scare. Lupa.cz, Internet Info, s.r.o. February 2021. Available online: https://www.lupa.cz/clanky/agregace-internetoveho-pripojeni-je-zbytecny-strasak/?fbclid=IwAR0VNLOUlM-XEu8Rq8o0kXlYFS--dGunIfHgS-RKFF8WOpSul4JnqzrfHrw (accessed on 12 February 2026).
  21. Vodrážka, J.; Jareš, P. Issues of Transmission Speed and Data Aggregation, Testing of New Generation Internet Connections (NGA), Inovace VOV. March 2019. Available online: https://www.vovcr.cz/odz/tech/521/page16.html (accessed on 12 February 2026).
  22. Schrage, R.; Forget, G.; Geib, R.; Constantine, B. Framework for TCP Throughput Testing, RFC 6349, RFC Editor. August 2011. 27p. Available online: https://www.rfc-editor.org/info/rfc6349 (accessed on 12 February 2026).
  23. Ethernet Service Activation Test Methodology, Recommendation Y.1564, International Telecommunication Union. 2011. Available online: http://handle.itu.int/11.1002/1000/11830-en (accessed on 12 February 2026).
  24. iPerf, iPerf–The TCP, UDP and SCTP Network Bandwidth Measurement Tool, Official Website of the iPerf Tool for Measuring Network Bandwidth. Available online: https://iperf.fr/ (accessed on 3 February 2026).
  25. Ngo, V.V.; Chu, N.H.T.; Vu, M.T.H.; Tran, H.T.; Duong, L.T.H. Anomaly Detection with Linear Regression in IP Multimedia Subsystem. In Proceedings of the 23rd International Symposium on Communications and Information Technologies (ISCIT), Bangkok, Thailand, 23–25 September 2024; pp. 271–274. [Google Scholar] [CrossRef]
  26. MEF 23.2-Carrier Ethernet Class of Service, MEF Forum. August 2016. Available online: https://wiki.mef.net/display/CESG/MEF+23.2+-+Carrier+Ethernet+Class+of+Service (accessed on 12 February 2026).
  27. Trubák, K. Measurement of Performance Parameters of Optical Networks. Master’s Thesis, VSB–Technical University of Ostrava, Faculty of Electrical Engineering and Computer Science, Ostrava, Czech Republic, 2022. Available online: http://hdl.handle.net/10084/147601 (accessed on 12 February 2026).
Figure 1. Performance parameters measurement of the GPON topology.
Figure 2. Graphical representation of measured values using the Grafana tool.
Figure 3. Measurement of CIR and EIR.
Figure 4. Diagram of the overall testing process.
Figure 5. Comparison diagram of all measurement methods for 1000 Mbit/s.
Figure 6. Comparison diagram of all measurement methods for 500 Mbit/s.
Figure 7. Comparison diagram of all measurement methods for 300 Mbit/s.
Figure 8. Comparison diagram of all measurement methods for 100 Mbit/s.
Figure 9. RFC 6349 measurement while 1 endpoint is downloading with 300 Mbit/s bandwidth limitation.
Figure 10. RFC 6349 measurement while 3 endpoints are downloading with 300 Mbit/s bandwidth limitation.
Figure 11. RFC 6349 measurement while 4 endpoints are downloading with 300 Mbit/s bandwidth limitation.
Figure 12. RFC 6349 measurement while 10 endpoints are downloading with 300 Mbit/s bandwidth limitation.
Figure 13. Box plots of download bitrate measurements at a limited capacity of 100 Mbit/s.
Figure 14. Box plots of download bitrate measurements at a limited capacity of 300 Mbit/s.
Figure 15. Box plots of download bitrate measurements at a limited capacity of 500 Mbit/s.
Figure 16. Box plots of download bitrate measurements at a limited capacity of 1000 Mbit/s.
Figure 17. The 1:20 aggregation curves during limited bitrate per user.
Figure 18. ITU-T Y.1564 measurements at 100 Mbit/s bitrate per user.
Figure 19. ITU-T Y.1564 measurements at 300 Mbit/s bitrate per user.
Figure 20. ITU-T Y.1564 measurements at 500 Mbit/s bitrate per user.
Figure 21. ITU-T Y.1564 measurements at 1000 Mbit/s bitrate per user.
Table 1. Technical configuration of computers used in experimental study.

Quantity | Operating System                             | CPU                                        | RAM   | Number of Cores | Number of Streams
1        | Windows 10 Home                              | Intel Core i5 7200U, 2.5 GHz               | 8 GB  | 2               | 4
9        | Windows 10 Pro                               | Intel Core i3 3110M, 2.4 GHz               | 8 GB  | 2               | 4
10       | Linux 4.15.0-166-generic, Ubuntu 18.04.6 LTS | AMD A10-7700K APU with Radeon™ R7 Graphics | 16 GB | 4               | 4
Table 2. Coefficient of determination and the regression curve for the individual bitrates. The 500 Mbit/s curve fits the data best, while the 100 Mbit/s curve fits the data the worst.

Maximum Bitrate [Mbit/s]           | 100                   | 300                   | 500                  | 1000
Regression curve                   | y = 156.73·x^(−0.307) | y = 743.64·x^(−0.881) | y = 897.1·x^(−0.947) | y = 1264.9·x^(−1.123)
Coefficient of determination R²    | 0.5409                | 0.8906                | 0.9943               | 0.976
Table 3. Required CoS values according to MEF 23.2.

Label | Latency | Jitter
H     | 10 ms   | 5 ms
M     | 20 ms   | 10 ms
L     | 37 ms   | 16 ms
