Article

Dynamic Resource Allocation and QoS Control Capabilities of the Japanese Academic Backbone Network

Shigeo Urushidani, Kensuke Fukuda, Michihiro Koibuchi, Motonori Nakamura, Shunji Abe, Yusheng Ji, Michihiro Aoki and Shigeki Yamada
National Institute of Informatics (NII), 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
* Author to whom correspondence should be addressed.
Future Internet 2010, 2(3), 295-307; https://doi.org/10.3390/fi2030295
Submission received: 22 July 2010 / Accepted: 5 August 2010 / Published: 9 August 2010
(This article belongs to the Special Issue QoS in Wired and Wireless IP Networks)

Abstract: Dynamic resource control capabilities have become increasingly important for academic networks that must support big scientific research projects at the same time as less data-intensive research and educational activities. This paper describes the dynamic resource allocation and QoS control capabilities of the Japanese academic backbone network, called SINET3, which supports a variety of academic applications with a wide range of network services. The article describes the network architecture, networking technologies, resource allocation, QoS control, and layer-1 bandwidth on-demand services. It also details typical services developed for scientific research, including the user interface, resource control, and management functions, and includes performance evaluations.

1. Introduction

Academic backbone networks have to address the particular requirements inherent in big scientific applications [1,2,3]. The amounts of data involved in big scientific research such as astronomy, high-energy physics, and nuclear fusion research are much larger than those of other research and educational applications, and networks must have huge bandwidths to carry it all. For instance, an electronic very long baseline interferometry (eVLBI) project [4] needs an assured bandwidth of more than 2 Gbps per radio telescope in order to transfer data from the telescopes to a central analysis device. Furthermore, it needs to use radio telescopes that are as geographically distant as possible in order to enhance the detection range, which means that it has a nationwide impact on network resource usage. The use of more radio telescopes and higher data-collection rates in the very near future will require a bandwidth of more than 8 Gbps for this project. Another example is the ATLAS experiment [5] at the Large Hadron Collider (LHC) at the Conseil Européen pour la Recherche Nucléaire (CERN), which must transfer huge amounts of data from Switzerland to Japan through GÉANT2 and SINET3's international line. As of late 2009, this experimental data occupied 2 to 4 Gbps of bandwidth. The data stored in Japan will be selectively copied and transferred to fifteen domestic universities and research institutions over SINET3. In another development, nuclear fusion researchers have recently started an experiment on huge data transfers from France, where the International Thermonuclear Experimental Reactor (ITER) will be built. So far, they have found that they can transfer data from France to Japan at a rate of 4 Gbps [6]. The data from ITER will be transferred to Rokkasho Mura and then distributed to several domestic universities and research institutions. The issue we face today is how to assign network resources to a growing number of big scientific research projects while maintaining normal services for research and education.
Our network supports many research and educational applications that are sensitive to network performance issues such as delay variance and packet loss. For example, remote lectures using high-definition video, which require an assured bandwidth of tens of Mbps, have become very popular because of the expansion of the credit transfer system between universities and the spread of inexpensive high-definition video systems. Moreover, live video multicasts of surgeries, especially those involving rare cases and procedures, have become popular among surgeons [7] as a way of sharing their experience and knowledge. Remote control of mechanical devices by video is another application sensitive to network performance. Video-based applications like these run over university access circuits that also carry download traffic from the Internet and whose usage rates are usually high. Therefore, controlling the quality of service for video-based applications, especially at the access points to SINET3, is a very important issue.
This paper describes dynamic resource allocation and QoS capabilities that can accommodate the data-intensive applications of big science and the performance-sensitive applications of other research and educational fields. Note that, as we have already described the basic design and technologies of SINET3 in reference [3], this paper focuses on the resource allocation capabilities and the implementation of typical services developed for data-intensive applications, i.e., layer-1 bandwidth on demand (BoD) services. The remainder of this paper is organized as follows. Section 2 gives an overview of the network architecture and technologies used in SINET3 and shows usage examples of resource allocation and QoS control. Section 3 shows how the layer-1 BoD services are implemented in terms of the user interface, the control interface to layer-1 devices, and the resource management functions. Section 4 describes a performance evaluation from the perspective of resource allocation time. Section 5 concludes the paper.

2. Overview of Network Architecture and Networking Technologies

This section gives an overview of our network architecture and the networking technologies used to accommodate a wide range of network services supporting a variety of academic applications. Examples of resource allocation and QoS control are also shown.

2.1. Network Architecture Based on Virtual Resource Allocation

As a nationwide academic infrastructure covering more than 700 universities and research institutions, including all national universities, SINET3 provides a wide range of network services to support a variety of research and educational activities, as well as general activities that use the Internet. For research activities, virtual private network (VPN) services, which form closed user groups for specific research areas, have become increasingly important and popular for collaborations between different organizations. The total number of interfaces for VPN services is steadily increasing and currently exceeds three hundred. As networking technologies have progressed, VPN usage has been shifting from layer-3 VPNs to layer-2 VPNs and virtual private LAN service (VPLS). SINET3 accommodates these different VPN services, as well as Internet service, by forming virtual service networks (Figure 1). Each virtual service network corresponds to a VPN service for a number of research projects: for example, L3VPNs serve nuclear fusion science and grid computing, and L2VPNs serve high-energy physics, earthquake research, and small virtual laboratories. The virtual service network corresponding to the IPv4/IPv6 dual stack service provides IPv6 services with multicast capabilities as well as Internet access. SINET3 is interconnected with commercial Internet service providers via two major domestic Internet exchange points, called JPIX and JPNAP. As for the layer-1 services, which will be described later, we have another virtual service network that accommodates L1VPNs for research groups, such as the eVLBI [4], high-quality remote backup [8], and t-Room [9] projects.
Each virtual service network applies its own routing and signaling protocols and has its own high-availability functions for protection, rerouting, etc. Although this enables us to control and manage each virtual service network independently, the networks have to share network resources, such as nodes and links, and we have to assign those resources flexibly depending on service demand. The layer-1 and layer-2/3 services in our network use the same links and change their assigned bandwidths by using next-generation SDH technologies [11,12,13]. The layer-2/3 services, in particular, also have QoS controls that identify specific VPNs and IP flows.
Figure 1. Virtual service networks and VPNs in SINET3.

2.2. Networking Technologies for Multiple Network Services

Figure 2 shows the network elements and networking technologies used for accommodating the above-mentioned network services. SINET3 has twelve core nodes, located at public data centers, that are composed of layer-1 switches (next-generation SDH devices) and IP routers, and 63 edge nodes, located at selected universities, that are composed of layer-1 switches and layer-2 multiplexers and accommodate the campus networks of neighboring universities and research institutions. The campus networks are connected to SINET3 through Ethernet-family interfaces (the eVLBI project described later currently uses 2.4-Gbps SDH interfaces). The edge and core nodes are controlled and managed through a control and management plane from a remote operation center. The layer-1 services are provided by means of an operation system (L1-OPS), which controls and manages the layer-1 switches, and a layer-1 BoD server, which was developed by NII and directs L1-OPS for the BoD services.
Each layer-2 multiplexer attaches an internal VLAN tag to each IP packet for layer-3 services (IPv4/IPv6 dual stack and L3VPN) or to each Ethernet frame for layer-2 services (L2VPN and VPLS). The layer-2/3 service packets are accommodated in a layer-1 path for layer-2/3 services between edge and core layer-1 switches by using the generic framing procedure (GFP) [11] and virtual concatenation (VCAT) [12] (the bandwidth is selected in multiples of 150 Mbps). The opposite IP router receives the layer-2/3 service packets and distributes them to the virtual routers corresponding to each network service by looking at their VLAN tags. The IP router encapsulates each VPN packet with multi-protocol label switching (MPLS) headers and attaches another VLAN tag that identifies the virtual service network between IP routers. The layer-2/3 service packets are transferred between neighboring IP routers through a layer-1 path for layer-2/3 services between core layer-1 switches. The bandwidth of this path can be varied without any packet loss by using the link capacity adjustment scheme (LCAS) [13].
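To make the packet handling concrete, the following small sketch (ours, not SINET3 code) lists the nominal header stack that a layer-2/3 service packet carries between neighboring IP routers; the function name and service labels are assumptions, and the stack is descriptive rather than wire-accurate.

# Illustrative only: nominal encapsulation, outermost first, for a layer-2/3
# service packet between neighboring IP routers, following the description above.
def header_stack(service: str) -> list:
    stack = ["layer-1 path (VCAT group of 150-Mbps members, GFP-framed)",
             "VLAN tag identifying the virtual service network"]
    if service in ("L3VPN", "L2VPN", "VPLS"):
        stack.append("MPLS header(s) identifying the VPN")
    stack.append("user IP packet or Ethernet frame")
    return stack

for svc in ("IPv4/IPv6", "L2VPN"):
    print(svc, header_stack(svc), sep=": ")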
Figure 2. Network elements and networking technologies.
While each VPN in the layer-2/3 services is established statically, a layer-1 VPN using layer-1 paths is established on demand because layer-1 services need dedicated network resources. On-demand layer-1 paths are created by users directly making requests (on a simple Web screen) to the layer-1 BoD server, which establishes these paths only for the specified durations. The layer-1 switches set up and release the paths by exchanging generalized MPLS (GMPLS) protocol messages [14,15,16] through the control and management plane. If a layer-1 service needs more network resources, the layer-1 BoD server can change the bandwidths of the layer-1 paths for layer-2/3 services by using LCAS.

2.3. Usage Examples of Layer-1 BoD Services

To accommodate the huge amounts of data generated by a big science project in our network, we want to avoid a situation in which this traffic becomes bursty enough to fill up the bandwidth of the circuits through a number of transit nodes and adversely affect other service traffic. We therefore assign a sufficient fixed bandwidth between the source and destination sites on demand for the project by using an end-to-end layer-1 path, completely separating the network resources of the project's data from those of the other service traffic. VCAT, one of the layer-1 technologies, can assign network resources with a granularity of 150 Mbps and can also gather resources from parallel circuits and multiple routes between the core nodes in our network (Figure 3). We take the bandwidth from several links as equally as possible, which is good for load balancing of the layer-2/3 service traffic, as sketched below.
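As a concrete illustration of this VCAT-based sizing and balancing, here is a Python sketch of our own: the 150-Mbps unit comes from the text, while the greedy policy and the example link capacities are assumptions.

# Minimal sketch (not SINET3 code): sizing a VCAT group and spreading its
# 150-Mbps members over parallel links so that the free capacity remaining
# for layer-2/3 services stays as balanced as possible.
import math

VC4_MBPS = 150  # VCAT granularity used in the paper

def vc4_members(requested_mbps: int) -> int:
    """Number of 150-Mbps members needed for the requested bandwidth."""
    return math.ceil(requested_mbps / VC4_MBPS)

def spread_members(members: int, free_per_link: list) -> list:
    """Assign members one by one to the link with the most free capacity."""
    alloc = [0] * len(free_per_link)
    for _ in range(members):
        i = max(range(len(alloc)), key=lambda k: free_per_link[k] - alloc[k])
        if free_per_link[i] - alloc[i] == 0:
            raise ValueError("not enough capacity on this route")
        alloc[i] += 1
    return alloc

# An 8-Gbps eVLBI request needs 54 members:
print(vc4_members(8000))              # -> 54
# Six members over three parallel links with 10/8/4 free members:
print(spread_members(6, [10, 8, 4]))  # -> [4, 2, 0]; free capacity left is [6, 6, 4]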
We developed a layer-1 BoD server for the BoD services requested by users. The server calculates the appropriate path routes to get the total end-to-end bandwidth by using VCAT, triggers the setup and release of layer-1 paths, and manages the resource assignment for layer-1 services. At the specified start time, the layer-1 BoD server runs the L1-OPS to direct the source layer-1 switch to establish a layer-1 path toward the destination layer-1 switch. The layer-1 BoD server also runs the L1-OPS to change the path bandwidths for the layer-2/3 services when needed.
Figure 3. Core network topology and initial BoD users.
The eVLBI project [4], led by the National Astronomical Observatory of Japan (NAOJ), is the first project to use the layer-1 BoD service. This project observes celestial bodies only when the intended radio telescopes, located near Yamaguchi Univ., the National Institute for Fusion Science (NIFS), the High Energy Accelerator Research Organization (KEK), and Hokkaido Univ., are available. It therefore needs end-to-end paths only during observation periods. The data measured at these sites are transferred to a central analysis device at NAOJ. The project currently sends about 2 Gbps of observation data through 2.4-Gbps SDH interfaces between the sites. To make the resolution of the array even finer, plans are underway to secure additional radio telescopes and on-demand bandwidth of more than 8 Gbps through 10-Gigabit Ethernet interfaces.
The layer-1 BoD services also have excellent end-to-end QoS properties, such as small delay, no delay variance, and no packet loss. The high-quality remote backup project [8] backs up huge amounts of data with a high-performance transfer protocol. Data transfer experiments are being done among Osaka Univ., NII, Hokkaido Univ., and Kyushu Univ.; the project is currently evaluating the transfer performance between these sites by varying the path bandwidth from 150 Mbps to 1.05 Gbps over Gigabit Ethernet interfaces. A room-sharing video system called t-Room [9], which allows people at different sites to feel as if they are in the same room, requires a high-quality communication environment with a maximum bandwidth of 300 Mbps; it transfers eight-sided high-definition video images, voice, and control signals. Doshisha Univ. and NTT Communication Science Laboratories are evaluating the system's performance between Kyoto and Atsugi via Tokyo. NII also sometimes performs demonstrations between remote sites, such as from Hokkaido Univ. to Kyushu Univ., using 1.5-Gbps uncompressed high-definition video [10] in order to promote the high-quality communication environment created by the layer-1 BoD services.

2.4. Usage Examples of Packet-Based QoS Control Services

The layer-2 multiplexers and IP routers of SINET3 execute packet-based QoS control of the layer-2/3 services. Each device along the path performs packet-based QoS control by using four forwarding queues, as follows. Each source layer-2 multiplexer attaches an internal VLAN tag to each IP or Ethernet packet and writes the QoS class into the User Priority field of the internal VLAN tag, depending on the physical/logical interface or the contents of the IP/Ethernet header fields. The opposite IP router copies the value in the User Priority field of the internal VLAN tag to the DSCP field of the IP header if the packet is IPv4/IPv6, or to the EXP field of the MPLS header if the packet is L3VPN/L2VPN/VPLS. The IP routers and multiplexers along the path then forward the packets by using four forwarding queues corresponding to the QoS classes, as illustrated below.
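The following minimal sketch mirrors these marking rules. The four class names and their numeric values are our assumptions, and the direct copy of the User Priority value into DSCP is a simplification; only the copy directions (to DSCP for IP, to EXP for MPLS) come from the text.

# Hedged sketch of the QoS marking rules described above.
from dataclasses import dataclass

# Hypothetical mapping of QoS classes to the four forwarding queues.
QUEUE_OF_CLASS = {0: "best-effort", 1: "bronze", 2: "silver", 3: "gold"}

@dataclass
class Packet:
    vlan_user_priority: int   # set by the source layer-2 multiplexer
    is_mpls: bool             # L3VPN/L2VPN/VPLS packets carry MPLS headers
    dscp: int = 0
    mpls_exp: int = 0

def mark_at_ip_router(pkt: Packet) -> str:
    """Copy the class from the internal VLAN tag into DSCP or EXP,
    then return the forwarding queue the packet will use."""
    if pkt.is_mpls:
        pkt.mpls_exp = pkt.vlan_user_priority
    else:
        pkt.dscp = pkt.vlan_user_priority
    return QUEUE_OF_CLASS[pkt.vlan_user_priority]

print(mark_at_ip_router(Packet(vlan_user_priority=3, is_mpls=False)))  # -> gold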
The usage rate of an access link between a campus network and SINET3 is usually high, especially in the direction from SINET3 to the campus network, because the volume of download traffic, including traffic from the Internet, is usually much larger than that of upload traffic. In such a situation, QoS control at the output ports of the SINET3 nodes, especially at the egress layer-2 multiplexers, is very effective, and QoS control at the other ports of the transit nodes can then maintain the quality of service set at those output ports. In particular, we found that this method is good for guaranteeing the QoS of video applications.

3. Detailed Implementation of Layer-1 BoD Services

This section describes the implementation of the layer-1 BoD services in terms of the user interface, path calculation, control interface between the layer-1 BoD server and L1-OPS, and layer-1 resource management.

3.1. User Interface of Layer-1 BoD Services

Users submit a project application for the layer-1 BoD services to NII, and network operators at NII input the project data, such as the corresponding VPN name, user names, intended node IDs, and physical interface types and IDs, into the layer-1 BoD server. After that, registered users can request a layer-1 path on a reservation basis simply by filling out a set of Web screens (Figure 4). Each user can specify the source and destination (nodes and interfaces), the duration in fifteen-minute intervals, the bandwidth with a 150-Mbps granularity, and route preferences. After selecting the VPN name [Figure 4(a)], such as the high-quality backup project, the user chooses the source and destination nodes [Figure 4(b)], for example from among Hokkaido Univ., Kyushu Univ., NII, and Osaka Univ., and the duration [Figure 4(c)] from a pull-down menu. On the next page [Figure 4(d)], the BoD server calculates and shows the available bandwidth and the estimated delay between the specified nodes during the specified duration. The user then chooses the source and destination interfaces, the bandwidth (a lambda providing the full bandwidth of the physical interface, or a multiple of 150 Mbps), and the route preference (none or minimum-delay route, or the same-route constraint for multiple paths). Figure 4(e) shows the page for confirming the reservation.
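For illustration, the parameters collected by these Web screens might map onto a request structure like the following sketch; the field names and validation logic are our assumptions, not the BoD server's actual schema.

# Hypothetical shape of a layer-1 path reservation, per the Web screens above.
from dataclasses import dataclass

@dataclass
class LambdaRequest:
    vpn_name: str
    src_node: str
    dst_node: str
    src_interface: str
    dst_interface: str
    start: str               # e.g. "2010-08-09T09:00", on a 15-min boundary
    duration_min: int        # multiple of 15
    bandwidth_mbps: int      # multiple of 150; 0 means a full lambda
    route_pref: str          # "none" or "minimum-delay"

    def validate(self) -> None:
        if self.duration_min % 15:
            raise ValueError("duration must be in fifteen-minute intervals")
        if self.bandwidth_mbps % 150:
            raise ValueError("bandwidth granularity is 150 Mbps")
        if self.route_pref not in ("none", "minimum-delay"):
            raise ValueError("unknown route preference")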
Figure 4. Sample Web screens for layer-1 BoD services.

3.2. Path Calculation Based on Route Preference

Upon receiving a user request, the layer-1 BoD server selects an appropriate route from among several candidate routes in the network by referring to the user's route preference. When the user specifies the "minimum delay route" [Figure 4(d)], the layer-1 BoD server finds this route by using the Dijkstra algorithm with the delay of each link as the link metric (see Figure 3). When "none" is specified, the layer-1 BoD server selects the shortest route with the maximum available end-to-end bandwidth by using the Edmonds-Karp algorithm [17] with the available layer-1 service bandwidth of each link as the link metric. The layer-1 BoD server gathers the required bandwidth from the parallel links between the layer-1 switches so that the bandwidth remaining for the layer-2/3 services on each link is as balanced as possible, thereby maximizing packet forwarding performance. The layer-1 BoD server usually takes the required bandwidth from a single route but, if no single route can meet the requested end-to-end bandwidth, it gathers enough bandwidth from multiple routes.
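A minimal sketch of the minimum-delay mode follows: plain Dijkstra with the per-link delay as the metric. The topology and delay values are invented for the example; the "none" mode would instead run the Edmonds-Karp-style computation over the available-bandwidth metrics described above.

# Sketch of "minimum delay route" selection (assumed helper, invented delays).
import heapq

def min_delay_route(links, src, dst):
    """links: {node: [(neighbor, delay_ms), ...]} -> (total_delay, path)."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        delay, node, path = heapq.heappop(queue)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, d in links.get(node, []):
            if nbr not in seen:
                heapq.heappush(queue, (delay + d, nbr, path + [nbr]))
    raise ValueError("no route")

topo = {"Sapporo": [("Sendai", 8.0)],
        "Sendai": [("Tsukuba", 4.0), ("Tokyo", 5.0)],
        "Tsukuba": [("Tokyo", 1.5)], "Tokyo": []}
print(min_delay_route(topo, "Sapporo", "Tokyo"))  # (13.0, ['Sapporo', 'Sendai', 'Tokyo'])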

3.3. Interface between Layer-1 BoD Server and L1-OPS

The interface between the layer-1 BoD server and L1-OPS is a CORBA interface compliant with TMF-814 [18]; it can set up and release layer-1 paths for the layer-1 BoD services and change the bandwidths of the layer-1 paths for layer-2/3 services (Figure 5). The interface between L1-OPS and the layer-1 switches is TL1.
The BoD server uses createSNC to request L1-OPS to set up a layer-1 path [Figure 5(a)]. The parameters of createSNC include path information such as the path name managed by the BoD server, route information calculated by the BoD server, and the bandwidth requested by the user. Upon receipt of the request (createSNC REQ), L1-OPS registers the path information in the source layer-1 switch, requests the switch to set up a GMPLS-based layer-1 path, and sends the BoD server the response (createSNC RESP). When the layer-1 path is established, L1-OPS receives a completion notice from the source layer-1 switch, retrieves the path information for confirmation, and notifies the BoD server of the completion of the path setup by sending a notification message [Notification (create CMPLD)]. The BoD server then checks the end-to-end path status, such as the J1 path trace result, by using getSNC. To set up a layer-1 path composed of diverse sub-paths corresponding to parallel links and multiple routes, the BoD server establishes sub-paths by using more than one createSNC, each of which includes the same path name. For path deletion, the BoD server uses deleteSNC, whose parameters include the specified path name. The BoD server sends a deleteSNC REQ to L1-OPS and receives a deleteSNC RESP followed by a Notification (delete CMPLD) from L1-OPS.
The BoD server uses changeSNCBandwidth to request L1-OPS to change the bandwidth of a layer-1 path for layer-2/3 services [Figure 5(b)]. The parameters of changeSNCBandwidth include the path name and the new bandwidth. For a path bandwidth decrease, upon receipt of the request (changeSNCBandwidth REQ), L1-OPS enables the path to be operated by LCAS and then changes the bandwidth by using VCAT. Because a layer-1 path for layer-2/3 services is established as a permanent non-GMPLS path, L1-OPS converts [or activates in Figure 5(b)] the non-GMPLS resource released from the path into a GMPLS-based resource. L1-OPS then sends the BoD server the response (changeSNCBandwidth RESP). The sequence of operations between L1-OPS and layer-1 switches for a path bandwidth increase starts by converting the GMPLS-based resource into a non-GMPLS-based resource before the LCAS and VCAT control operations begin.
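The two message flows can be condensed into pseudo-client code like the sketch below. The operation names (createSNC, getSNC, deleteSNC, changeSNCBandwidth) and the completion notifications come from the text; the Python wrapper object, its method signatures, and the waiting helper are our assumptions, not actual TMF-814 CORBA stubs.

# Hedged sketch of the BoD server's call sequences toward L1-OPS.
def setup_layer1_path(l1ops, path_name, route, bandwidth_mbps):
    # One createSNC per sub-path; sub-paths share the same path name so that
    # they form one end-to-end layer-1 path over parallel links or routes.
    for sub_route in route.sub_paths:
        l1ops.createSNC(name=path_name, route=sub_route, bandwidth=bandwidth_mbps)
    l1ops.wait_notification(path_name, "create CMPLD")  # completion notice
    l1ops.getSNC(name=path_name)                        # e.g. J1 path-trace check

def release_layer1_path(l1ops, path_name):
    l1ops.deleteSNC(name=path_name)
    l1ops.wait_notification(path_name, "delete CMPLD")

def resize_layer23_path(l1ops, path_name, new_bandwidth_mbps):
    # LCAS/VCAT resize of the permanent path carrying layer-2/3 traffic;
    # L1-OPS also converts the capacity between non-GMPLS and GMPLS-based
    # resources, as described above.
    l1ops.changeSNCBandwidth(name=path_name, bandwidth=new_bandwidth_mbps)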
Figure 5. Interface between layer-1 BoD server and L1-OPS. (a) For path setup/release. (b) For path bandwidth change.

3.4. Resource Management for Layer-1 BoD Services

The layer-2/3 service traffic on each link in our network follows a very similar daily pattern. We therefore vary the bandwidth available for the layer-1 BoD services on each link according to the traffic volume of the layer-2/3 services (Figure 6). The available bandwidth is set to a larger value from 22:00 to 8:00 on weeknights and at all hours on weekends. As the available bandwidth will likely differ from what is actually assigned, we keep two more parameters for each link: the default bandwidth and the assigned bandwidth. The default bandwidth is the requisite minimum bandwidth for the layer-1 BoD services, and the assigned bandwidth is usually set to the default bandwidth. When the bandwidth requested by users for a link exceeds its default bandwidth, the BoD server changes the assigned bandwidth of the link; it is set to the maximum requested bandwidth during a predefined period, such as 8:00 to 22:00 on weekdays, 22:00 to 8:00 on weeknights, or all hours on weekends. We invoke the LCAS functions only at the start and finishing times of the predefined periods, because they take a few minutes to operate, as described later.
Our network operators set the default and available bandwidths on a simple Web screen (Figure 7). As the network operators usually care about the traffic volume of the layer-2/3 services, the screen shows the bandwidths for those services; the default/available bandwidth for the layer-2/3 services is the link bandwidth minus the default/available bandwidth for the layer-1 services. We use a granularity of 150 Mbps, indicated as "v" in Figure 7, and the operators set values from 0v to 64v for each link. We divide the bandwidth of a 40-Gbps (STM-256) link into four bandwidth groups to accommodate four parallel virtual links established between the IP routers, and the operators use the same value range for each bandwidth group.
If the requested bandwidth exceeds the available bandwidth of a link, the layer-1 BoD server tries to rearrange the reserved layer-1 paths so that they can be accommodated on other routes. If it still cannot fulfill a user request, the BoD server informs the network operators about the resource shortage: it sends information about the resource in contention, such as the link ID, bandwidth, and duration, and recommends (i.e., automatically selects) a contention resolution. The operators either accept the recommendation, with or without minor changes, or negotiate a resolution with the users, such as a reduction in bandwidth or duration.
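The bookkeeping described in this subsection can be summarized in a short sketch. The helper name, data layout, and the error path for contention are our assumptions; only the 150-Mbps unit ("v") and the default/available/assigned relationship come from the text.

# Sketch of per-link, per-window resource bookkeeping for layer-1 BoD services.
V = 150  # Mbps per unit ("v")

def assigned_units(default_v: int, available_v: int, reserved_v: list) -> int:
    """reserved_v: per-reservation demands overlapping one predefined window
    (weekday daytime, weeknight, or weekend)."""
    peak = max(reserved_v, default=0)
    if peak > available_v:
        # would trigger rerouting, then operator-assisted contention resolution
        raise ValueError("request exceeds available bandwidth on this link")
    return max(default_v, peak)  # never drop below the default bandwidth

# Weeknight window on one link: default 4v, available 32v, three reservations.
print(assigned_units(4, 32, [2, 14, 7]) * V, "Mbps")  # -> 2100 Mbps (14v)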
Figure 6. Default/available/assigned bandwidth for layer-1 services.
Figure 7. Screen example: bandwidth management of layer-2/3 services.

4. Performance Evaluation of Layer-1 BoD Services

This section describes the results of a performance evaluation of the layer-1 BoD services. The evaluation measured the layer-1 path setup/release times and the LCAS operation times.

4.1. Layer-1 Path Setup/Release Time

The BoD server had set up and released more than 700 layer-1 end-to-end paths for research projects and demonstrations as of May 31, 2010. Here we shall illustrate setup and release time examples. Figure 8(a) plots the average times for path setup/release versus the number of transit layer-1 switches, extracted from the results of the high-quality remote backup project. The figure shows the times for bandwidths of 150 Mbps, 600 Mbps, and 1.05 Gbps between Hokkaido Univ. and other places. The setup/release time was defined as the time difference from when the BoD server sends a "createSNC/deleteSNC REQ" to when it receives the "Notification (create/delete CMPLD)". The routes were calculated for the maximum available end-to-end bandwidth. For example, the route between Hokkaido Univ. and Kyushu Univ. transited the edge layer-1 switch at Hokkaido Univ., the core layer-1 switches at Sapporo, Sendai, Tsukuba, Tokyo, Nagoya, Osaka, Kyoto, Hiroshima, and Fukuoka, and the edge layer-1 switch at Kyushu Univ. (see Figure 3). The path for the 150-Mbps bandwidth transited 13 layer-1 switches, because it transited only one layer-1 switch at Osaka, whereas the paths for the 600-Mbps and 1.05-Gbps bandwidths transited 14 layer-1 switches, because we gathered the bandwidth from the parallel links by using diverse sub-paths. The numbers in parentheses in the figure indicate the numbers of sub-paths. The setup time tends to grow with the bandwidth, the number of transit layer-1 switches, and the number of sub-paths. These setup/release times are within the expected range for our layer-1 switches, which need time to cross-connect the TDM channels for each VC-4.

4.2. LCAS Operation Time

We started the layer-1 BoD services in 2008, setting the assigned and available bandwidths to the same requisite minimum value. We started to operate the LCAS functions in mid-2009, after finding that users frequently used layer-1 paths and sometimes experienced resource contention. Figure 8(b) plots sample times for LCAS operations in the real network versus the decrease or increase in bandwidth, in multiples of 150 Mbps. The figure plots the times for bandwidth changes of 1v (150 Mbps), 15v (2.25 Gbps), and 34v (5.1 Gbps) between neighboring layer-1 switches. The LCAS operation time was defined as the time difference from when the BoD server sends a "changeSNCBandwidth REQ" to when it receives the response "changeSNCBandwidth RESP". To ensure that there is no packet loss, the LCAS operation needs an initial negotiation period to change the bandwidth between layer-1 switches, so even a 1v change took around two minutes; however, subsequent increases in bandwidth took only marginally longer. The LCAS operation times for a bandwidth decrease were slightly longer than those for a bandwidth increase.
Figure 8. Performance of layer-1 BoD services. (a) Layer-1 path setup/release time. (b) LCAS operation time.

5. Conclusions

This paper described the architecture, networking technologies, resource allocation, and QoS control of SINET3. It also detailed the implementation of SINET3's layer-1 BoD services in terms of the user interface, control of layer-1 switches, and resource management. An evaluation of layer-1 path setup/release times and LCAS operation times was also presented. SINET3 will continue to offer its flexible resource allocation capabilities for many scientific purposes.

Acknowledgements

We wish to thank all the members of the Organization of Science Network Operations and Coordination for their support of SINET3. We are also grateful to K. Shimizu, R. Hayashi, I. Inoue, and K. Shiomoto of NTT Network Systems Labs; E. Kawa and Y. Kamata of NTT Advanced Technologies; H. Tanuma and K. Imai of NEC Corporation; T. Saimei of NTT Communications Corporation; and J. Adachi, S. Asano, S. Takano, T. Shimoda, J. Sayama, K. Minomo, and H. Matsumura of NII for their continuous cooperation and support.

References and Notes

  1. Summerhill, R. The new Internet2 network. In Proceedings of the 6th GLIF Meeting, Tokyo, Japan, September 2006.
  2. Campanella, M. Development in GEANT2: End-to-end services. In Proceedings of the 6th GLIF Meeting, Tokyo, Japan, September 2006.
  3. Urushidani, S.; Abe, S.; Ji, Y.; Fukuda, K.; Koibuchi, M.; Nakamura, M.; Yamada, S.; Hayashi, R.; Inoue, I.; Shiomoto, K. Design of versatile academic infrastructure for multilayer network services. IEEE J. Sel. Area. Commun. 2009, 27, 253–267. [Google Scholar] [CrossRef]
  4. Kawaguchi, N. Trial on the efficient use of trunk communication lines for VLBI in Japan. In Proceedings of the 7th International eVLBI Workshop, Shanghai, China, June 2008.
  5. ATLAS. Available online: http://www.atlas.ch/ (accessed on 27 July 2010).
  6. Nagayama, Y.; Emoto, M.; Kozaki, Y.; Nakanishi, H.; Sudo, S.; Yamamoto, T.; Hiraki, K.; Urushidani, S. A proposal for the ITER remote participation system in Japan. Fusion Eng. Des. 2010, 85, 535–539. [Google Scholar] [CrossRef]
  7. Shimizu, S. Remote medical activity over APAN. In Proceedings of the Spring 2006 Internet2 Member Meeting, Arlington, VA, USA, April 2006.
  8. Inoue, F.; Ohsaki, H.; Nomoto, Y.; Imase, M. Performance evaluation of iSCSI protocol with automatic parallelism—Tuning on SINET3 with layer-1 on-demand bandwidth allocation. In Proceedings of 27th Asia-Pacific Advanced Network Meeting, Kaohsiung, Taiwan, March 2009.
  9. t-Room. Available online: http://www.mirainodenwa.com/ (accessed on 27 July 2010).
  10. Harada, K.; Kawano, T.; Zaima, K.; Hatta, S.; Meno, S. Uncompressed HDTV over IP transmission system using ultra-high-speed IP streaming technology. NTT Tech. Rev. 2003, 1, 84–89. [Google Scholar]
  11. International Telecommunication Union. Generic Framing Procedure (GFP); ITU-T Recommendation G.7041; ITU: Geneva, Switzerland, 2005. [Google Scholar]
  12. International Telecommunication Union. Network Node Interface for the Synchronous Digital Hierarchy (SDH); ITU-T Recommendation G.707; ITU: Geneva, Switzerland, 2003. [Google Scholar]
  13. International Telecommunication Union. Link Capacity Adjustment Scheme (LCAS) for Virtual Concatenated Signals; ITU-T Recommendation G.7042; ITU: Geneva, Switzerland, 2006. [Google Scholar]
  14. Berger, L. Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions; RFC 3473; The Internet Society: Reston, VA, USA, January 2003. [Google Scholar]
  15. Mannie, E.; Papadimitriou, D. Generalized Multi-Protocol Label Switching (GMPLS) Extensions for Synchronous Optical Network (SONET) and Synchronous Digital Hierarchy (SDH) Control; RFC 4606; The Internet Society: Reston, VA, USA, August 2006. [Google Scholar]
  16. Kompella, K.; Rekhter, Y. OSPF Extensions in Support of Generalized Multi-Protocol Label Switching (GMPLS); RFC4203; The Internet Society: Reston, VA, USA, October 2005. [Google Scholar]
  17. Edmonds, J.; Karp, R.M. Theoretical improvements in algorithmic efficiency for network flow problems. J. ACM 1972, 19, 248–264. [Google Scholar] [CrossRef]
  18. TM FORUM. MTNM Implementation Statement Template and Guidelines: NML-EML Interface for Management of SONET/SDH/WDM/ATM Transport Networks; TM FORUM 814A Version 2.1; TM FORUM: Morristown, NJ, USA, 2002. [Google Scholar]
