Quality of Service (QoS) Management for Local Area Network (LAN) Using Traffic Policy Technique to Secure Congestion

Abstract: This study presents the proposed testbed implementation for the Advanced Technology Training Center (ADTEC) Batu Pahat, one of Malaysia's industrial training institutes. The objectives of this study are to discover the issues regarding network congestion, propose a suitable method to overcome such issues, and generate output data for comparing the results before and after the proposed implementation. The internet is directly connected to internet service providers (ISPs), which neither impose any rule nor filter the traffic components; all connections rely on the best-effort services provided by the ISP. The congestion problem has been raised several times, and the information technology (IT) department has been receiving complaints about poor and sometimes intermittent internet connection. Such issues call for a solution, because the end client is the human resource core business. In addition, budget constraints contribute to this problem. After a comprehensive review of related literature and discussion with experts, the implementation of quality of service (QoS) through add-on rules, such as traffic policing on network traffic, was proposed. The proposed testbed also classified the traffic. Results show that the proposed testbed is stable. After the implementation of the generated solution, the IT department no longer receives any complaints, thus fulfilling the goal of having zero internet connection issues.

Author Contributions: funding acquisition, R.H. and A.H.M.A.; methodology, W.M.H.A.; project administration, W.M.H.A. and R.H.; resources, W.M.H.A. and A.S.A.-K.; software, A.S.A.-K.; supervision, R.H. and A.H.M.A.; visualization, W.M.H.A. and M.K.H.;


Introduction
Network technology users practice a variety of methods for searching for information, such as reading books from the library or reading an online article through internet access. Users need a unique identifier called an internet protocol (IP) address when they gather data from the internet. However, when billions of users try to gain simultaneous internet access to the same data, traffic congestion occurs. This phenomenon also happens in our training institute, which experiences congested internet connectivity during peak and non-peak hours. P.K. Dey et al. mentioned that the solution for increasing network traffic without congestion is increasing the amount of bandwidth [1]. However, increasing the amount of bandwidth would increase our monthly cost. M. Marcon et al. mentioned that a traffic-shaping method can resolve traffic congestion issues [2]. This proposal, however, will affect voice and video transmissions because real-time communication is necessary for such processes [3]. After comparing related works and considering the institutional budget constraints, this study has adopted the implementation of QoS through traffic policing as its proposed solution.

Related Works
Many solutions have been proposed to evaluate network system performance, including an implementation of QoS measured through throughput, that is, QoS based on bandwidth utilization; the performance of this type of QoS is managed by the ISP. F. Amato et al. established some guidelines for processing large quantities of data from multimedia applications on social media and developed a technique based on a user-centered approach [11]. F. Amato et al. also proposed a new solution using the Flickr technique to generate multimedia stories [12]. However, this method focuses on visual analytics. The details of each QoS element are discussed in the following subsections.

QoS and Network Convergence through Throughput
Network convergence involves different types of traffic that have specific requirements. Given that each traffic type behaves according to its own characteristics, I. Zakariyya and M.N. A. Rahman developed a new scheme for controlling the internet at the ingress router to measure the throughput and utilization in an IP-based network by considering traffic flows and the corresponding processing times using the adaptive throughput policy (ATP) algorithm [6]. This new algorithm overcomes several issues, such as congested hypertext transfer protocol (HTTP) traffic and bandwidth performance under bursty traffic. The results showed that the ATP algorithm led to high bandwidth savings and fast traffic processing time under the threshold (P1). Bursty traffic throughput can be managed by applying the implementation policies of the ATP algorithm to the network performance. In conclusion, this technique can control throughput using the implementation policy in the developed system.
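The internals of the ATP algorithm are not reproduced in [6] above. As a minimal sketch of the underlying idea of policing a flow once its measured throughput crosses a threshold, consider the following; the threshold, window, and packet sizes are hypothetical illustration values, not the ATP parameters.

```python
# Minimal sketch of a threshold-based throughput policy (not the actual ATP
# algorithm from [6]): bytes observed inside a sliding time window are
# converted to a bit rate, and a packet is dropped once that rate reaches
# the configured threshold.
from collections import deque

class ThroughputPolicy:
    def __init__(self, threshold_bps, window_s=1.0):
        self.threshold_bps = threshold_bps
        self.window_s = window_s
        self.events = deque()  # (arrival_time_s, size_bytes) inside the window

    def allow(self, now, nbytes):
        # Expire records that have slid out of the measurement window.
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()
        rate_bps = sum(size for _, size in self.events) * 8 / self.window_s
        if rate_bps >= self.threshold_bps:
            return False           # police: drop the packet
        self.events.append((now, nbytes))
        return True                # conform: forward the packet

# 200 packets of 100 bytes arriving every 10 ms against an 8 kbit/s policy.
policy = ThroughputPolicy(threshold_bps=8000)
accepted = sum(policy.allow(t * 0.01, 100) for t in range(200))
# accepted -> 20: ten packets fill the window, then the flow is policed
# until the old records expire a second later.
```

A production policer would track bytes incrementally instead of summing the window on every packet, but the conform/police decision is the same.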
Another objective of QoS-related studies is to assess network performance with respect to voice delivery applications. The specific goal of such studies is to ensure that excessively delayed packets are discarded, because packet delay affects the transmission [13]. Voice applications use voice channels for transmission at a specific time. Voice-based applications achieve satisfactory performance when they operate on a time-division multiplexing network; such applications now run on best-effort service networks as voice over IP (VoIP). Best-effort service networks carry numerous packets and exhibit large delays, and service providers that rely on them cannot deliver the performance required for voice applications. Therefore, G. Mojib et al. suggested that QoS technologies can be used to ensure that applications are properly supported on multiservice IP networks [14].

QoS Based on Bandwidth Utilization
A previous study emphasized that the deployment of QoS is not necessary because increasing the bandwidth can resolve network performance issues. The author argued that implementing QoS is complicated, whereas adding bandwidth is relatively simple. However, we must look closely at the QoS problems to verify this inference. I. Zakariyya et al. enforced a class-based weighted fair queue (CBWFQ) queuing discipline for fairness in sharing bandwidth among different traffic classes in the network, as well as a CBWFQ algorithm to control network congestion [6]. This technique is similar to the weighted round robin queuing discipline. Congestion can still occur even when all network connections have large bandwidths; thus, applying QoS technologies is appropriate. Researchers stated that current carrier networks have large amounts of bandwidth and are designed to minimize traffic congestion. Moreover, assigning a minimum bandwidth to each class guarantees that its bandwidth requirement is met, and the traffic classes are shaped on the basis of their services.
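The guarantee that CBWFQ provides can be illustrated with a small sketch: under congestion, each class receives link bandwidth in proportion to its configured weight. The link rate and class weights below are hypothetical values, not the configuration used in [6].

```python
# Sketch of the bandwidth guarantee CBWFQ provides: under congestion, each
# traffic class receives link bandwidth in proportion to its configured
# weight (hypothetical link rate and weights for illustration).
def cbwfq_shares(link_bps, weights):
    """Return each class's guaranteed rate, proportional to its weight."""
    total = sum(weights.values())
    return {cls: link_bps * w / total for cls, w in weights.items()}

# A 100 Mbps link split among three example classes.
shares = cbwfq_shares(100_000_000, {"VOICE": 5, "HTTP": 3, "BULK": 2})
# shares -> {"VOICE": 50_000_000.0, "HTTP": 30_000_000.0, "BULK": 20_000_000.0}
```

Classes that do not use their guarantee release the spare bandwidth to the others; the shares above are minimums under congestion, not hard caps.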
Another related conclusion is that aggressively adding bandwidth to IP networks merely fuels further internet demand [9]. The carrier network may offer low-latency connections across metropolitan area networks. The traffic that goes through a network should therefore be classified to achieve QoS. If congestion happens at the ingress router, the QoS level will be rated poorly even if the network providers offer excellent QoS performance.

Online Sequential Extreme Learning Machine
Z. Ali et al. stated that service providers have embedded QoS technologies in their services. Malaysian ISPs, such as Telekom Malaysia, own the copper, fiber, and wireless technologies of the network connection [13]. Adding bandwidth makes QoS attractive to customers whose demands require this technology to achieve the highest-quality transmission. ISPs use dense wavelength division multiplexing in fiber optic connections to ensure that additional bandwidth can be provided immediately at an affordable cost when a customer requests a bandwidth upgrade. In comparison, other ISP technologies, such as mobile wireless and satellite communication, are highly constrained by the limited frequency spectrum.
Another researcher mentioned that another simple and cost-effective way to increase bandwidth is to increase the number of wavelengths. In addition, they suggested that service providers merge with another operator network (e.g., Maxis Communications) to provide best-effort services. By merging with a telco operator network, service providers can offer premium data services with performance guarantees. For applications that use QoS as their performance index, telco networks differentiate services on the basis of classes, separating premium data service subscribers from best-effort subscribers. For a light user who does not consume the maximum bandwidth, the network can deliver the required throughput without throttling. For those who need more bandwidth to reach QoS satisfaction, the network can offer additional bandwidth at an additional price.
Measuring the QoS parameter with constraining bandwidth must be performed using a real traffic environment [15] to improve the performance of real-time traffic in a constrained bandwidth network. Among the queuing disciplines, such as round robin (RR), priority-based, and token bucket (TB), TB has the most satisfactory ability to receive the packets smoothly, and RR exhibited better performance than the other approaches [16]. However, these queuing disciplines were not compared simultaneously. Moreover, no policy or classification packet was applied to deploy the QoS technologies to obtain a comprehensive comparison of network performances under different queuing scenarios. Applying multiple techniques, such as first-in, first-out (FIFO), CBWFQ, low-latency queuing, class-based weighted random early detection, explicit congestion notification, and link fragmentation and interleaving (LFI), can result in high network performance levels [17]. The more advanced the method, the better the quality of the transmission [18]. The issues in using advanced methods include packet delay in transmission.
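The token bucket (TB) discipline highlighted in [16] can be sketched briefly: tokens refill at the policed rate up to a burst depth, and a packet conforms only if enough tokens remain. The rate and burst values below are hypothetical.

```python
# Sketch of a token bucket (TB) policer, the discipline reported in [16] as
# accepting packets most smoothly. Rate and burst values are hypothetical.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # token refill rate, bytes per second
        self.burst = burst_bytes     # bucket depth (largest allowed burst)
        self.tokens = burst_bytes    # start with a full bucket
        self.last = 0.0

    def conforms(self, now, nbytes):
        # Refill for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes    # conform: spend tokens, forward
            return True
        return False                 # exceed: police (drop or remark)

tb = TokenBucket(rate_bps=80_000, burst_bytes=1500)
results = [tb.conforms(0.0, 1500), tb.conforms(0.0, 1500), tb.conforms(0.15, 1500)]
# results -> [True, False, True]: the first packet drains the bucket, the
# back-to-back second packet exceeds, and 0.15 s of refill at 10 kB/s
# restores 1500 tokens for the third.
```

This is exactly the conform/exceed decision that the traffic policing in later sections applies; the exceed action may be a drop or a remark to a lower class.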

Methodology
The implementation of QoS via traffic policing involves four phases. This study utilized quantitative analysis to evaluate the network performance in our institution (i.e., Advanced Technology Training Center (ADTEC) Batu Pahat).

Phase 1-Data Collection and Survey
For phase 1, the necessary data and information are gathered by conducting a survey among the end users regarding their complaints. Some concepts and methodologies from previous literature are adjusted to achieve the research objectives. This phase is crucial in identifying the issues in the current network communication. The questionnaire used in the survey is shown in Figure 1. The inputs gathered from the questionnaire include the number of internet users during the daytime, the purpose of using the internet, and the number of times connection issues occur in a week. Software is also used to obtain and analyze end-user data.

A survey form was distributed among students and staff, as shown in Figure 2. A total of 149 respondents participated (Figure 3), of whom 70.5% are male and the remaining 29.5% are female. Figure 4 shows that students comprise the majority of the respondents (31.5%). Approximately 53.7% of the respondents (80) specified that their main issue is poor internet service, 31.5% (47) stated that the internet connection is bad, and 13.4% (20) and 1.3% (2) stated that the internet connection is in moderate and in good condition, respectively. Figure 5 suggests that almost 85% experienced poor internet connection, which must be addressed. These data are inputted to the next phase to generate the solution to the connection issues in the current interconnection in ADTEC BP.
Computers 2020, 9, 39

Phase 2-CISCO Router-on-a-stick Cross-Origin Resource Sharing (CORS) Development
The development in this phase is supported by the concepts from phase 1. As shown in Figure 6, the testbed setup is implemented on the initial network, which requires an upgrade through the application of the QoS mechanism. Several modifications are applied to this testbed, such as applying virtual LANs and CORS for measuring the network performance at the egress router. The testbed started by designing the network diagram to be upgraded and selecting suitable hardware and software equipment. Several routers and switches are used to set up the experimental QoS in our network infrastructure. No policies or rules were implemented before the testbed setup, which, according to the collected data, is the main cause of network congestion. The testbed utilizes several approaches, namely classification, remarking, and traffic policing, which are configured on the testbed setup by applying a new approach to the current traffic network. Table 1 lists the three types of testing that must be generated for CORS development.


Table 1. Types of testing for CORS development:
Test 1 - No classification and policy
Test 2 - With classification but no policy
Test 3 - With classification and policy

(a) Test 1: No classification and policy. The FIFO queuing discipline without any rules in the router configuration is used for all incoming packet data. This discipline is the baseline for comparing transmission control protocol (TCP) and user datagram protocol (UDP) testing. None of the traffic is classified or remarked with any policy.
(b) Test 2: Traffic classification method. This methodology is based on the classification of the packets at the ingress router, which are categorized on the basis of their service classes. After grouping, the packets are differentiated depending on their values; packets that contain streaming data receive high-priority transmission after classification. This testing, which mainly focuses on packet classification, can be applied only by using a single TCP stream.
(c) Test 3: Traffic policing method. This step aims to inspect, classify, and categorize the packets arriving at the incoming port of the router to ensure that they carry unique differentiated services code point (DSCP) values. The remarking process is executed after the categorization at the egress router, and each packet is assigned its corresponding type of service (ToS) value. Although a packet has been remarked, it can still obtain an agreement to enter the neighboring router. After exiting the egress router, all packets are remarked on the basis of the information previously agreed on.
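The inspect, classify, and remark flow of Test 3 can be sketched as follows. The class-to-DSCP mapping and the UDP port range treated as voice are illustrative assumptions, not values reported in this paper.

```python
# Sketch of the Test 3 flow: classify each arriving packet, then remark its
# DSCP so that downstream routers apply the agreed per-class treatment.
DSCP = {"VOICE": 46, "HTTP": 10, "ICMP": 0}  # e.g. EF for voice, AF11 for web

def classify(pkt):
    # Group the packet into a service class by protocol and destination port.
    if pkt["proto"] == "icmp":
        return "ICMP"
    if pkt["proto"] == "udp" and pkt.get("dport") in range(16384, 32768):
        return "VOICE"               # common RTP port range (assumption)
    return "HTTP"

def remark(pkt):
    # Write the class's DSCP; the ToS byte carries the 6-bit DSCP in its
    # upper bits, so ToS = DSCP << 2.
    cls = classify(pkt)
    pkt["dscp"] = DSCP[cls]
    pkt["tos"] = DSCP[cls] << 2
    return pkt

p = remark({"proto": "udp", "dport": 16500})
# p["dscp"] -> 46, p["tos"] -> 184
```

On a real router this classification is expressed with access lists and class maps (as in Table 4 later), and the remark is the set action of a policy map.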

CORS Analysis
The results are compared with the theoretical concepts discussed in the literature review. The output measurements are collected through multiple tests based on the various scenarios. Subsequently, the throughput, delay, and jitter are measured. Two protocols, namely TCP and UDP, are used as the standard for the experiment. The tests are compared to obtain the best solution and to determine which protocol is suitable for QoS mechanism implementation. The analysis is conducted using simulation software, including Cisco Packet Tracer, jperf, and GNS3.
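The three metrics can be derived from per-packet send/receive records. The sketch below uses a simplified mean-absolute-difference jitter rather than the exact RFC 3550 smoothed estimator that iperf-family tools apply, and the packet records are invented for illustration.

```python
# Sketch of deriving the measured metrics from per-packet records:
# throughput from bytes over elapsed receive time, mean one-way delay, and
# jitter as the mean absolute difference of consecutive delays.
def metrics(records):
    """records: list of (send_time_s, recv_time_s, size_bytes) in send order."""
    delays = [rx - tx for tx, rx, _ in records]
    span = records[-1][1] - records[0][1]
    throughput_bps = sum(size for _, _, size in records[1:]) * 8 / span
    mean_delay = sum(delays) / len(delays)
    jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
    return throughput_bps, mean_delay, jitter

# Three 1000-byte packets sent 10 ms apart with delays of 5, 7, and 5 ms.
tput, delay, jitter = metrics([(0.00, 0.005, 1000),
                               (0.01, 0.017, 1000),
                               (0.02, 0.025, 1000)])
# tput -> ~800 kbit/s, delay -> ~5.7 ms, jitter -> ~2 ms
```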

Results and Discussion
Three tests have been performed in the testbed to compare the differences in network performance. Before the implementation of QoS in the ADTEC BP network, the network systems were often interrupted during peak hours and the network always went offline. After the three improvement tests, many positive implications were observed. The detailed discussions of the results are presented in the subsequent subsections. Figure 7 shows that 4000 packets have been traced at a gateway router without any control mechanism. The variety of input inserted at the gateway router can substantially influence the current network traffic. At 20 s, the traffic is linearly sustained for at most 2 min and then collapses due to congestion. Hence, an improved solution will be developed in the next testing process. Table 2 presents the data transfer per packet of the 4000 packets that reached the egress router (gateway router). A packet is analyzed every 10 s; the transfer rate for each packet is around 69 MB and all packets produce throughput at 58 Mbps. Sampling data are analyzed for 100 s and will be continued for another 100 s for the second testing. The results presented in Table 2 are plotted as packet size (MB) versus time (s) in Figure 8. The illustration shows that the minimum and maximum values for the packet size on a single TCP stream are 58 and 58.5 MB, respectively. The average packet size is 58 MB and the communication is linear in the TCP testing using single-stream transmission. This value can serve as a benchmark for this testbed because the data possess high bandwidth and high packet size during the single TCP transmission.

Two Parallel Stream Implementation
Figure 9 shows the simultaneous run of two parallel streams on the TCP test to measure the network performance. The two streams have their corresponding bandwidth transmissions. Stream 1 (red line) transmitted at most 5000 packets, whereas Stream 2 used approximately 2000 packets. The former is for video transmission, whereas the latter is for normal data transmission; video produces more packets than Stream 2, and the graph is still linear.
Table 3 shows that two input TCP transmissions have been analyzed at the egress router; each traffic has its own bandwidth. For TCP Stream 1, the transfer starts at 52.70 MB and the bandwidth usage is 46.40 MB. For TCP Stream 2, the transfer and bandwidth usage are 58.4 and 47.80 MB, respectively. The findings indicate that a higher transfer consumes more bandwidth. The transfer rate is higher in the previous test than in this test, which might be because the latter involves two streams that have to share the total throughput.
Figure 10 shows that two-stream traffic has been generated on the testbed. From time 0, Stream 1 produces a 32 MB packet size, whereas Stream 2 produces 31.5 MB. The results indicate that the packet size in this transmission is less than that in the single-stream TCP transmission. This phenomenon occurs because Streams 1 and 2 share the main bandwidth to communicate; the streams involve data and video transmission. In conclusion, implementing the QoS mechanism can support more traffic transmission than single-stream transmission. Although the packet size of the former is less than that of the latter, no communication issue occurred.
Figure 11 illustrates the simultaneous run of three parallel streams in the TCP test to measure the network performance. Similar to using two streams, the three streams have their corresponding bandwidth transmissions. Streams 1, 2, and 3 transmitted 1700, 1600, and 1500 packets, respectively; the streams represent data, audio, and video, respectively. As in Figure 7, the more streams the traffic accesses, the smaller the packet size produced for network transmission.
Figure 11. Throughput for three streams with no classification and policing.
Figure 12 shows that each stream is still linear but has its corresponding transmission value. Stream 1 (red line) used approximately 31.40 Mbps of bandwidth, Stream 2 (yellow line) used 32.10 Mbps, and Stream 3 (green line) used approximately 30.50 Mbps. The total TCP packet size transferred using the three parallel streams is 1122 MB, with a corresponding bandwidth utilization of 94.2 Mbps. A small gap in the packet size is produced during the three-stream transmission. The traffic performance is clear, but the production decreased compared with the previous cases. Therefore, this methodology, even without the traffic policing method, still exhibits satisfactory performance despite the reduced packet size.
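As a quick arithmetic check, the per-stream rates read from Figure 12 can be summed and compared with the reported aggregate; the small difference is within plot-reading error.

```python
# Arithmetic check: summing the three per-stream rates read from Figure 12
# should approximate the reported aggregate utilization of 94.2 Mbps.
streams_mbps = {"stream1": 31.40, "stream2": 32.10, "stream3": 30.50}
aggregate = sum(streams_mbps.values())
# aggregate -> 94.0 Mbps
```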

Test 2: Traffic Classification Method
The next testbed used the second approach: classifying the traffic that enters the ingress router. Table 4 presents the configuration applied at the QoS router to classify and categorize traffic into internet control message protocol (ICMP), HTTP, and VOICE packets. This configuration ensures that each packet type has its own channel and bandwidth usage in the ADTEC BP infrastructure. Each traffic type is class-mapped to its category, and VOICE matches IP precedence 3.

Table 4. Traffic classification class-map settings.

class-map match-all ICMP
 match access-group 101
class-map match-all HTTP
 match access-group 105
class-map match-all VOICE
 match precedence 3

Packets are classified on entering the QoS_router (Figure 13), and each stream carries its own packet markings. For example, in the single-stream test, 373 ICMP packets (72614 bytes) and 1246 HTTP packets (304502 bytes) were marked. For the two and three parallel streams, the marked counts roughly doubled compared with the single-stream transmission. In conclusion, the classification process is effective in ensuring that each traffic class does not share the main bandwidth, which achieves stable transmission.
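The class maps above match packets in order against their criteria. The toy sketch below mimics that decision logic (the packet fields and function name are our own, and the access-list contents are simplified to protocol/port checks for illustration):

```python
# Illustrative sketch of class-map matching: each packet is tested against
# the classes in order, mirroring the ICMP / HTTP / VOICE class maps.
def classify(pkt):
    """Return the traffic class for a packet dict with 'proto', 'dst_port',
    and 'precedence' keys."""
    if pkt["proto"] == "icmp":                            # class-map ICMP (access-group 101)
        return "ICMP"
    if pkt["proto"] == "tcp" and pkt["dst_port"] == 80:   # class-map HTTP (access-group 105)
        return "HTTP"
    if pkt["precedence"] == 3:                            # class-map VOICE (match precedence 3)
        return "VOICE"
    return "class-default"                                # unmatched traffic

print(classify({"proto": "icmp", "dst_port": 0, "precedence": 0}))    # ICMP
print(classify({"proto": "udp", "dst_port": 16384, "precedence": 3})) # VOICE
```

Because each class gets its own policy channel, a burst in one class (e.g., HTTP) cannot starve the VOICE class of bandwidth, which is the stability property the test demonstrates.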


Test 3: Traffic Policing Method
As shown in Figure 14, the output was collected at the egress router. Packets carrying bursty data above 1.0 GB are dropped, cutting off the traffic spikes. After the traffic throughput has been assigned, only TCP test packets that pass the access rule are used in the transmission (Figure 15). This rule is useful for blocking large data transfers, which are a common cause of network congestion. Figure 16 shows two parallel communications in one traffic flow, which carry more bursty data than the previous test; many packets also exhibited numerous spikes. Figure 17 shows the packet sizes after filtering with traffic policing. As shown in Figure 18, when numerous parallel TCP communications transmit over the network infrastructure, many packets are dropped because the bulky data are removed. For example, with three parallel streams, a total of 10840 packets (1580609 bytes) entered the egress router; almost 30% of the packets exceeded the limit and were dropped, whereas the remaining 70% passed through. In conclusion, this approach is suitable for TCP communication but is not recommended for UDP traffic (e.g., voice communication) because of substantial delays and losses.
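Traffic policing of this kind is conventionally implemented as a token bucket: conforming packets are transmitted and exceeding packets are dropped immediately, with no queueing (unlike shaping). A minimal sketch, with illustrative rate and burst values rather than the testbed's actual configuration:

```python
class Policer:
    """Single-rate token-bucket policer: conforming packets pass,
    exceeding packets are dropped on the spot (no queueing)."""
    def __init__(self, rate_bytes_per_ms, burst_bytes):
        self.rate = rate_bytes_per_ms   # token refill rate
        self.burst = burst_bytes        # bucket depth (max accumulated credit)
        self.tokens = burst_bytes       # bucket starts full
        self.last_ms = 0

    def police(self, now_ms, size_bytes):
        # Refill tokens for the elapsed time, capped at the burst depth.
        self.tokens = min(self.burst, self.tokens + (now_ms - self.last_ms) * self.rate)
        self.last_ms = now_ms
        if size_bytes <= self.tokens:   # conform: transmit
            self.tokens -= size_bytes
            return True
        return False                    # exceed: drop

# Offer 1500-byte packets every 1 ms (12 Mbit/s) to an 8 Mbit/s policer.
p = Policer(rate_bytes_per_ms=1000, burst_bytes=15000)
results = [p.police(t, 1500) for t in range(100)]
sent, dropped = sum(results), results.count(False)
```

Here the bucket's initial credit lets an opening burst of packets through, after which the policer settles into dropping roughly one packet in three, bringing the offered 12 Mbit/s load down to the configured rate. This mirrors how the testbed cut off traffic spikes while letting conforming TCP traffic pass.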

Figure 13. Packets marked by the classification technique (ICMP vs. HTTP packet marks, in packets).


Figure 14. Throughput of the single-stream traffic policing methodology.


Figure 18. TCP packet filtering after traffic policing (packets inserted, confirmed, and exceeded).

Conclusions
This study demonstrated the use of QoS with traffic policing to overcome network congestion. End users had reported that the network was intermittent and congested during peak hours. Positive outcomes were obtained from the QoS implementation in the testbed: QoS not only resolved the network congestion but also stabilized the LAN. Furthermore, CROS was added to the configuration, which allowed the network to achieve high performance utilization.

Conflicts of Interest:
The authors declare no conflict of interest regarding this paper.