





















It is widely assumed that, for reasons of efficiency, the various communication networks (Internet, telephone, TV, radio, ...) will merge into one ubiquitous, packet-switched network that carries all forms of communications. This view of the future is particularly prevalent among the Internet community, where it is assumed that packet-switched IP is the layer over which everything else will be carried. In this chapter, I present evidence to argue that this will not happen. This stance is controversial and difficult to make concrete, as any attempt to compare the various candidates for the transport infrastructure^1 is fraught with a lack of data and the difficulty of making apples-with-apples comparisons. Therefore, the evidence presented here differs from that in other chapters of this thesis. Observations, case studies, and anecdotal data (rather than controlled experiments, simulations and proofs) are used to take a stance and to predict how the network architecture will evolve.

Whatever the initial goals of the Internet, two main characteristics seem to account for its success: reachability and heterogeneity. IP, the packet-switching protocol that is the basis for the Internet, provides a simple, single, global address to reach every host, enables unfettered access between all hosts, and adapts the topology to restore reachability when links and routers fail. IP hides heterogeneity in the sense that it provides a single, simple service abstraction that is largely independent of the physical links over which it runs. As a result, IP provides service to a huge variety of applications and operates over extremely diverse link technologies.

(^1) In this chapter, transport is used in the sense of the infrastructure over which many service networks run, not in the sense of the OSI protocol layer.
The growth and success of IP have given rise to some widely held assumptions amongst researchers, the networking industry and the public at large. One common assumption is that it is only a matter of time before IP becomes the sole global communication infrastructure, dwarfing, and eventually displacing, existing communication infrastructures such as telephone, cable and TV networks. IP is already universally used for data networking in wired networks (enterprise networks and the public Internet), and is being rapidly adopted for data communications in wireless and mobile networks. IP is also increasingly used for both local and long-distance voice communications, and it is technically feasible for packet-switched IP to replace SONET/SDH.
A related assumption is that IP routers (based on packet switching and datagram routing) will become the most important, or perhaps only, type of switching device inside the network. This is based on our collective belief that packet switching is inherently superior to circuit switching because of the efficiencies of statistical multiplexing and the ability of IP to route around failures. It is widely assumed that IP is simpler than circuit switching and should be more economical to deploy and manage. And with continued advances in the underlying technology, we will no doubt see faster and faster links and routers throughout the Internet infrastructure. It is also widely assumed that IP will become the common convergence layer for all communication infrastructures. All communication services will be built on top of IP technology. In addition to information retrieval, we will stream video and audio, place phone calls, hold video-conferences, teach classes, and perform surgery.
On the face of it, these assumptions are quite reasonable. Technically, IP is flexible enough to support all communication needs, from best-effort to real-time. With robust enough routers and routing protocols, and with extensions such as weighted fair queueing, it is possible to build a packet-switched, datagram network that can support any type of application, regardless of its requirements.
2.2 Background and previous work
Before starting our discussion about whether IP can be the basis of all communication networks, I will give some background about the two main switching techniques in use today: circuit switching and packet switching.
2.2.1 Circuit switching

Circuit switching was the first switching technique used in communication networks because it is simple enough to carry analog signals. This thesis focuses on the digital version of circuit switching. The main example of its use is, of course, the phone system [72], but it is also used in the core of the Internet in the form of SONET/SDH and DWDM equipment [81, 126]. In circuit switching, the transmission medium is typically divided into channels using Frequency Division Multiplexing (FDM),^2 Time Division Multiplexing (TDM) or Code Division Multiplexing (CDM) [172]. A circuit is a string of concatenated channels from the source to the destination that carries an information flow.^3 To establish circuits, a signaling mechanism is used. This signaling only carries control information and is considered overhead. It is also the most complex part of circuit switching, as all decisions are taken by the signaling process. It is commonly assumed that the signaling and per-circuit state management make circuit switches hard to design, configure and operate.

(^2) (Dense) Wavelength Division Multiplexing, (D)WDM, is a subclass of FDM that uses optical wavelengths as channels.
(^3) Note that the source and the destination need not be edge nodes. They can be aggregation nodes in the middle of the network that combine several user flows into one big information flow.

In circuit switching the channel bandwidth is reserved for an information flow. To ensure timely delivery of the data, the capacity of the circuit has to be at least equal to the peak transmission rate of the flow. In this case, the circuit is said to be peak allocated, and the network offers a connection-oriented service with perfect quality of service (QoS) in terms of delay jitter and bandwidth guarantees. However, this comes at the cost of wasting bandwidth when sources idle or simply slow down. Contention only occurs when allocating channels to circuits during circuit/call establishment. If there are not enough channels for a request, the call establishment may be delayed, blocked or even dropped. In contrast, once the call is accepted, resources are not shared with other flows, eliminating any uncertainty and, thus, removing the need for buffering, processing or scheduling in the data path. When circuits are peak allocated, the only measure of QoS in circuit switching is the blocking probability of a call.

To summarize, circuit switching provides traffic isolation and traffic engineering, but at the expense of using bandwidth inefficiently and incurring signaling overhead. It is often said that these two drawbacks make circuit switching highly inflexible, especially in a highly dynamic environment such as the Internet. I will argue in this chapter that these drawbacks are outweighed by the advantages of using more circuit switching in the core of the network.
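To make the blocking-probability measure concrete, here is a minimal sketch that evaluates the Erlang B formula, the classical model of call blocking on a link with a fixed number of channels and Poisson call arrivals. The formula itself is standard queueing theory rather than something taken from this chapter, and the channel count and offered load are illustrative values.

```python
def erlang_b(channels: int, offered_load_erlangs: float) -> float:
    """Blocking probability of a loss system with `channels` circuits offered
    `offered_load_erlangs` of traffic (Erlang B, via the usual recurrence)."""
    b = 1.0
    for m in range(1, channels + 1):
        b = (offered_load_erlangs * b) / (m + offered_load_erlangs * b)
    return b

# Illustrative numbers: a link with 30 channels offered 20 Erlangs of calls.
print(f"blocking probability: {erlang_b(30, 20.0):.4f}")
```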
2.2.2 Packet switching

Packet switching is the basis for the Internet Protocol (IP) [152, 172]. In packet switching, information flows are broken into variable-size packets (or fixed-size cells, as in the case of ATM). These packets are sent, one by one, to the nearest router, which looks up the destination address and then forwards them to the corresponding next hop. This process is repeated until the packet reaches its destination. The routing of the information is thus done locally, hop-by-hop. Routing decisions are independent of decisions made in the past and in other routers; however, they are based on network state and topology information that is exchanged among routers using BGP, IS-IS or OSPF [148]. The network does not need to keep any state to operate, other than the routing tables.

The forwarding mechanism is called store-and-forward because IP packets are completely received, stored in the router while being processed, and then transmitted. Additionally, packets may need to be buffered locally to resolve contention for resources.^4 If the system runs out of buffers, packets are dropped. With most scheduling policies, such as FCFS and WFQ, packet switching

(^4) Resources experience contention when they have more arrivals/requests than they can process. Two examples are the outgoing links and the router interconnect.
is not true by any reasonable metric: market size, number of users, or the amount of traffic. Of course, this is not to say that the Internet will not grow over time to dominate the global communications infrastructure; after all, the Internet is still in its infancy. It is possible — and widely believed — that packet-switched IP datagrams will become the de facto mechanism for all communications in the future. And so one has to consider the assumptions behind this belief and verify whether packet-switched IP offers inherent and compelling advantages that will lead to its inevitable dominance. This requires the examination of some “sacred cows” of networking; for example, that packet switching is more efficient than circuit switching, that IP is simpler, that it lowers the cost of ownership, and that it is more robust when there are failures in the network.
It has been reported that the Internet already carries more traffic than the phone system [122, 162], and that the gap in traffic volume will keep widening because Internet traffic is growing at a rate of 100% per year, versus 5.6% per year for voice traffic [48].
Despite this phenomenal success of the Internet, it is currently only a small fraction of the global communication infrastructure, which consists of separate networks for telephones, broadcast TV, cable TV, satellite, radio, public and private data networks, and the Internet. In terms of revenue, the Internet is a relatively small business. The US business and consumer-oriented ISP markets have revenues of $13B each (2000) [28, 29]. In contrast, the TV broadcast industry has revenues of $29.8B (1997), the cable distribution industry $35.0B (1997), the radio broadcast industry $10.6B (1997) [180], and the phone industry $268.5B (1999), of which $111.3B corresponds to long distance and $48.5B to wireless [88]. The Internet reaches 59% of US households [133], compared to 94% for telephones and 98% for TV [127, 147]. Even though Internet traffic doubles every year, revenues only increase 17% annually (2001) [162], whereas long-distance phone revenues increase 6.7% per year (1994-97) [136]. If these growth rates were kept constant, IP revenues would not surpass those of the long-distance
phone industry until 2017.^5

If we restrict our focus to the data and telephony infrastructure, the core IP router market still represents a small fraction of the public infrastructure, unlike in private enterprise data networks. As shown in Table 2.1, the expenditure on core routers worldwide was $1.7B in 2001, compared to $28.0B for transport circuit switches. So in terms of market size, revenue, number of users, and expenditure on infrastructure, it is safe to say that IP does not currently dominate the global communications infrastructure.
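As a sanity check on the 2017 figure, the short sketch below extrapolates the revenue numbers quoted above (roughly $26B of combined US ISP revenue growing at 17% per year, against $111.3B of long-distance revenue growing at 6.7% per year) until the first overtakes the second. Taking 2001 as the common starting point and holding both growth rates constant are the simplifying assumptions behind the argument.

```python
# Extrapolate the revenues quoted above until IP revenue overtakes
# long-distance telephony revenue, assuming constant growth rates.
ip_revenue, ip_growth = 26.0, 0.17     # $B (business + consumer ISPs), 17%/yr
ld_revenue, ld_growth = 111.3, 0.067   # $B (long-distance telephony), 6.7%/yr

year = 2001                            # assumed common starting year
while ip_revenue < ld_revenue:
    ip_revenue *= 1 + ip_growth
    ld_revenue *= 1 + ld_growth
    year += 1

print(f"IP revenue first exceeds long-distance revenue around {year}")
```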
Segment           Market size
Core routers      $1.7B
Edge routers      $2.4B
SONET/SDH/WDM     $28.0B
Telecom MSS       $4.5B
Table 2.1: World market breakdown for the public telecommunications infrastructure in 2001 [161, 158, 159, 157].
Figure 2.1 illustrates the devices currently used in the public Internet. The current communication infrastructure consists of a transport network — made of circuit-switched SONET/SDH and DWDM devices — on top of which run multiple service networks. The service networks include the voice network (circuit-switched), the IP network (datagram, packet-switched), and the ATM/Frame Relay networks (virtual-circuit-switched). Notice the distinction between the circuit-switched transport network, which is made of SONET/SDH and optical switches that switch coarse-granularity circuits (n×STS-1, where an STS-1 channel is roughly 51 Mbit/s), and the voice service circuit switches, which include Class 4 and Class 5 systems that switch 64 Kbit/s voice circuits and handle various telephony-related functions. When considering whether IP has taken or will take over the world of communications, one needs to consider both the transport and service layers. In other words, for universal packet transport I am considering using a packet network to replace the transport infrastructure; and for

(^5) It is interesting to note that for IP revenues to surpass those of long-distance telephony, the Internet revenue per household would have to multiply by 358%.
“Analysts say [packet-switched networks] can carry 6 to 10 times the traffic of traditional circuit-switched networks.” — Business Week.
From the early days of computer networking, it has been well known that packet switching makes efficient use of scarce link bandwidth [10]. With packet switching, statistical multiplexing allows link bandwidth to be shared by all users, and work-conserving link sharing policies (such as FCFS and WFQ) ensure that a link is always busy when packets are queued up waiting to use it. In contrast, with circuit switching, each flow is assigned its own channel, so a channel can go idle even if other flows are waiting. Packet switching (and thus IP) makes more efficient use of the bandwidth than circuit switching, which was particularly important in the early days of the Internet when long-haul links were slow, congested and expensive.

It is worth asking: what is the current utilization of the Internet, and how much does efficiency matter today? Odlyzko and others [135, 47, 90, 23] report that the core of the Internet is heavily overprovisioned, and that the average utilization of links in the core is between 3% and 20% (compared to 33% average link utilization in long-distance phone lines [135, 160]). The reasons they give for low utilization are threefold: first, Internet traffic is extremely asymmetric and bursty, but links are symmetric and of fixed capacity; second, it is difficult to predict traffic growth on a link, so operators tend to add bandwidth aggressively; third, since coarser bandwidth granularities become cheaper per bit/s as faster technology appears, it is more economical to add capacity in large increments.

There are other reasons to keep network utilization low. When congested, a packet-switched network performs badly, becomes unstable and can experience oscillations and synchronization. Many factors contribute to this. Complex and dynamic interaction of traffic means that congestion in one part of the network will spread to other parts. Further, control packets (such as routing packets) are transmitted in-band in the Internet, and hence they are more likely to get lost and delayed when the data path is congested. When routing protocol packets are lost or delayed due to network congestion or control processor overload, the result is an inconsistent routing state, which may produce traffic loops, black holes, and disconnected regions of the network, further exacerbating congestion in the data path [107, 55]. Currently, the most effective way for network providers to address these problems is by preventing congestion and keeping network utilization low.
But perhaps the most significant reason that network providers overprovision their network is to give low packet delay. Users want predictable behavior, which means low queueing delay, even under abnormal conditions (such as the failure of several links and routers) [90, 77]. As users, we already demand (and are willing to pay for) huge overprovisioning of Ethernet networks (the average utilization of an Ethernet network today is about 1% [47]) simply so that we do not have to share the network with others, and so that our packets can pass through without queueing delay. We will demand the same behavior from the Internet as a whole. We will pay network providers to stop using statistical multiplexing and to instead overprovision their networks. The demand for lower delay will drive providers to decrease link utilization even more than it is today.
Therefore, even though in theory a statistically multiplexed link can yield higher network utilization and throughput, in practice, to maintain consistent performance and a reasonably stable network, network operators significantly overprovision their networks, keeping link utilization low.
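As a minimal illustration of this trade-off, the sketch below uses the textbook M/M/1 queueing model; this is an assumption made purely for illustration (real Internet traffic is burstier than Poisson, which only makes the effect worse). Mean delay grows as 1/(1 - ρ), so it blows up as the utilization ρ approaches one.

```python
# Mean time in system for an M/M/1 queue as link utilization rises.
# Illustrative only: the service time corresponds to a 1500-byte packet
# on a 1 Gbit/s link.
service_time_ms = 0.012

for utilization in (0.1, 0.3, 0.5, 0.8, 0.9, 0.95, 0.99):
    mean_delay_ms = service_time_ms / (1 - utilization)   # T = S / (1 - rho)
    print(f"utilization {utilization:4.2f} -> mean delay {mean_delay_ms:7.3f} ms")
```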
But simply reducing the average link utilization will not be enough to make users happy. For a typical user to experience low utilization, the variance of the network utilization also needs to be low. There are two flavors of variance that affect the perceived utilization: variance in time (short-term increases in congestion during busy times of the day), and variance by location (while most links are idle, a small number are heavily congested). If we pick some users at random and consider the network utilization their traffic experiences, our sample is biased in favor of users who find the network to be heavily congested. This explains why, as users, we know the average utilization to be low, but find that we often experience long queueing delays.
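A tiny numerical illustration of this sampling bias, using a made-up set of links: averaging utilization over links gives a low number, but weighting each link by the traffic it carries, which is closer to what a randomly chosen packet experiences, gives a much higher one.

```python
# Hypothetical network: (capacity in Gbit/s, utilization) for each link.
links = [(10, 0.05)] * 18 + [(10, 0.70)] * 2   # mostly idle links, two congested ones

link_average = sum(u for _, u in links) / len(links)

carried = [cap * u for cap, u in links]          # traffic carried by each link (Gbit/s)
traffic_weighted = sum(c * u for (_, u), c in zip(links, carried)) / sum(carried)

print(f"average utilization over links: {link_average:.2f}")      # low
print(f"traffic-weighted utilization:   {traffic_weighted:.2f}")   # much higher
```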
Reducing variations in link utilization is hard. Without sound traffic management and traffic engineering, the performance, predictability and stability of large IP networks deteriorate rapidly as load increases. Today, we lack effective techniques to
phone network. If there are no circuits available, the flow is blocked until a channel is free. As we will see in Chapter 3, at the core of the network, where the rate of a single flow is limited by the data rate of its access link, simulations and analysis suggest that the average user response time of both techniques is the same, independent of the flow length distribution.

In summary, even though packet switching can lead to more efficient link utilization, unpredictable queueing delays force network operators to run their networks very inefficiently. While efficiency was once a critical factor, it is now so outweighed by our need for predictability, stability, immediate access, and low delay that operators have no real choice. Network operators have already concluded this; they know that their customers care more about predictability than efficiency, and we know from the dynamics of queueing networks that, in order to achieve predictable behavior, operators must continue to utilize their links very lightly, forfeiting the benefits of statistical multiplexing. As a result, they are paying for the extra complexity of processing every packet in routers, without the benefits of increased efficiency. In other words, the original goal of “efficient usage of expensive and congested links” is no longer valid, and pursuing it would provide no benefit to users.
“The Internet was born during the cold war 30 years ago. The US Depart- ment of Defence [decided] to explore the possibility of a communication network that could survive a nuclear attack.” — BBC
The Internet was designed to withstand a catastrophic event in which a large number of links and routers were destroyed. This goal is in line with the needs of users and businesses, who rely more and more on network connectivity for their activities and operations, and who want the network to be available at all times. Much has been claimed about the reliability of the current Internet, and it is widely believed to be inherently more robust and capable of withstanding failures of different network elements. Its robustness comes from using soft-state routing information; upon a link
or router failure, it can quickly update the routing tables and direct packets around the failed element. In contrast, a circuit-switched network needs to reroute all affected active circuits, which can be a large task for a high-speed link carrying hundreds or thousands of circuits.
The reliability of the current Internet has been studied by Labovitz et al. [107]. They studied different ISPs over several months and report a median network availability equivalent to a downtime of 471 min/year. In contrast, Kuhn [102] found that the average downtime in phone networks is less than 5 min/year. As users, we have all experienced network downtime when our link is unavailable or some part of the network is unreachable. On occasion, connectivity is lost for long periods while routers reconfigure their tables and converge to a new topology. Labovitz et al. [106] also observed that the Internet recovers slowly, with a median BGP convergence time of 3 minutes, frequently taking over 15 minutes. In contrast, SONET/SDH rings, through the use of pre-computed backup paths, are required to recover in less than 50 ms [51], a glitch that is barely noticeable to the user in a network connection or phone conversation.
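To put these downtime figures on a common scale, the short sketch below converts them into availability percentages (the usual "number of nines"); the inputs are the values quoted above.

```python
MINUTES_PER_YEAR = 365 * 24 * 60

for name, downtime_min_per_year in [("Internet (median ISP)", 471),
                                    ("Phone network (average)", 5)]:
    availability = 1 - downtime_min_per_year / MINUTES_PER_YEAR
    print(f"{name}: {availability:.5%} available")

# The Internet figure comes out at roughly 99.910% ("three nines"),
# the phone figure at roughly 99.999% ("five nines").
```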
While it may be argued that the instability and unreliability of the Internet can be attributed to its rapid growth and the ad-hoc, distributed way in which it has grown, a more likely explanation is that it is fundamentally more difficult to achieve robustness and stability in packet networks than in circuit networks. In particular, since routers/switches need to maintain a distributed routing state, there is always the possibility that this state becomes inconsistent. In packet networks, inconsistent routing state can generate traffic loops and black holes and disrupt the operation of the network. In addition, as discussed in Section 2.3.2, the likelihood of a network getting into an inconsistent routing state is much higher in IP networks because (a) the routing packets are transmitted in-band, and are therefore more likely to suffer from congestion caused by a high load of user traffic; (b) the routing computation in IP networks is very complex, so it is more likely for the control processor to be overloaded; and (c) the probability of misconfiguring a router is high, and the misconfiguration of even a single router may cause instability in a large portion of the network. It is surprising that we have continued to use routing protocols that allow one badly behaved router to make
Type of failure     Frequency of occurrence   Description
Router Operations   36.8%                     Maintenance, power failures, congestion
Link Failure        34.1%                     Fiber cuts, unreachable, interface down
Router Failures     18.9%                     Hardware and software problems, routing problems, malicious attacks
Undefined           10.5%                     Miscellaneous and unknown
Table 2.2: Frequency of occurrence of recorded network failures in a regional ISP in a one-year period [107].
“IP-only networks are much easier and simpler to manage, leading to improved economics.” — Business Communications Review
It is an oft-stated principle of the Internet that the complexity belongs at the end-points, so as to keep the routers simple and streamlined. While the general abstraction and protocol specification are simple, implementing a high-performance router and operating an IP network are extremely challenging tasks. In terms of router complexity, while the general belief in the academic community is that it takes tens of instructions to process an IP packet, the reality is that the complexity of a high-performance router has as much to do with the forwarding engine as with the routing protocols (BGP, IS-IS, OSPF, etc.), where all the intelligence of the IP layer resides, and with the interactions between the routing protocols and the forwarding engine. A high-performance router is extremely complex, particularly as line rates increase. One subjective measure of this complexity is the failure rate of the start-ups in this space. Because of the perceived high growth of the market, a large number of well-financed start-ups, with very capable talent and strong backing from carriers, have attempted to build high-performance routers. Almost all have
failed or are in the process of failing; putting aside the business and market-related issues, none has succeeded technically and delivered a product-quality core router. The core router market is still dominated by two vendors, and many of the architects of one came from the other. The bottom line is that building a core router is far from simple, and it has been mastered by only a very small group of people.
If we are looking for simplicity, then we would do well to look at how circuit-switched transport switches are built. First, the software is simpler. The software running in a typical transport switch is based on about three million lines of source code [154], whereas Cisco's Internet Operating System (IOS) is based on eight million [66], over twice as many. Routers have a reputation for being unreliable, crashing frequently and taking a long time to restart, so much so that router vendors frequently compete on the reliability of their software, pointing out the unreliability of their competitors' software as a marketing tactic. Even a 5ESS telephone service switch from Lucent, with its myriad features for call establishment and billing, has only about twice the number of lines of code as a core router [179, 67].
The hardware in the forwarding path of a circuit switch is also simpler than that of a router, as shown in Figure 1.1 and Figure 1.2. At the very least, the line card of a router must unframe/frame the packet, process its header, find the longest prefix that matches the destination address, generate ICMP error messages for expired TTLs, process optional headers, and then buffer the packet (a buffer typically holds 250 ms of packet data). If multiple service levels are added (for example, differentiated services), then multiple queues must be maintained, as well as an output link scheduling mechanism. In a router that performs access control, packets must be classified to determine whether or not they should be forwarded. Further, in a router that supports virtual private networks, there are different forwarding tables for each customer. A router carrying out all these operations typically performs the equivalent of 500 CPU serial instructions per packet (and we thought that all the complexity was in the end system!).
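To get a feel for what 500 instructions per packet implies, the sketch below works out the packet rate and instruction budget for a fully loaded link carrying minimum-size 40-byte packets; the 10 Gbit/s line rate is an illustrative choice, not a figure from the text.

```python
# Rough instruction budget for IP forwarding at line rate.
line_rate_bps = 10e9           # illustrative 10 Gbit/s linecard
packet_size_bits = 40 * 8      # worst case: minimum-size 40-byte TCP/IP packets
instructions_per_packet = 500  # figure quoted in the text

packets_per_second = line_rate_bps / packet_size_bits
print(f"{packets_per_second / 1e6:.1f} Mpackets/s")                          # ~31 Mp/s
print(f"{packets_per_second * instructions_per_packet / 1e9:.1f} Ginstr/s")  # ~15.6 G/s
print(f"{1e9 / packets_per_second:.0f} ns available per packet")             # ~32 ns
```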
On the other hand, the linecard of an electronic transport switch typically contains a SONET framer to interface to the external line, a chip to map ingress time slots to egress time slots, and an interface to a switch fabric. Essentially, one can build
of the switch. While such a small circuit might not be the best way to incorporate circuit switching into the Internet, using such small flow granularity provides an upper bound on the complexity of doing so. A 10 Gbit/s linecard needs to manage at most 200,000 circuits of 56 Kbit/s. The state required to maintain the circuits, and the algorithms needed to quickly establish and remove circuits, would occupy only a fraction of one ASIC. This suggests that the hardware complexity of a circuit switch will always be lower than the complexity of the corresponding router.
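A quick check of the 200,000-circuit bound, together with a rough estimate of the per-circuit state it implies; the few bytes per circuit for a time-slot mapping entry is a hypothetical allowance, not a number from the text.

```python
line_rate_bps = 10e9         # 10 Gbit/s linecard
circuit_rate_bps = 56e3      # 56 Kbit/s circuits, the finest granularity considered
state_bytes_per_circuit = 4  # hypothetical: an ingress-to-egress slot-mapping entry

max_circuits = line_rate_bps / circuit_rate_bps
state_mbytes = max_circuits * state_bytes_per_circuit / 1e6

print(f"{max_circuits:,.0f} circuits")             # ~178,571, i.e. under 200,000
print(f"~{state_mbytes:.1f} MB of mapping state")  # ~0.7 MB under this assumption
```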
It is interesting to explore how optical technology will affect the performance of routers and circuit switches. In recent years, there has been a good deal of discussion about all-optical Internet routers. As was mentioned in Chapter 1, there are two reasons why this is not feasible. First, a router is a packet switch and so inherently requires large buffers to hold packets during times of congestion, and currently no economically feasible ways exist to buffer large numbers of packets optically. The buffers need to be large because TCP’s congestion control algorithms currently require at least one bandwidth-delay product of buffering to perform well. For a 40 Gbit/s link and a round-trip time of 250 ms, this corresponds to 1.3 GBytes of storage, which is a large amount of electronic buffering and (currently) an unthinkable amount of optical buffering. The second reason that all-optical routers do not make sense is that an Internet router must perform an address lookup for each arriving packet. Neither the size of the routing table, nor the nature of the lookup, lends itself to implementation using optics. For example, a router at the core of the Internet today must hold over 100,000 entries, and must search the table to find the longest matching prefix — a non-trivial operation. There are currently no known ways to do this optically.
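The buffer figure follows directly from the bandwidth-delay product rule of thumb stated above; the sketch below reproduces that arithmetic and adds a toy longest-prefix-match lookup over an invented three-entry table, to make concrete the per-packet operation that currently has no optical counterpart.

```python
import ipaddress

# Buffer sizing: one bandwidth-delay product, as stated in the text.
link_rate_bps = 40e9
rtt_seconds = 0.250
buffer_gbytes = link_rate_bps * rtt_seconds / 8 / 1e9
print(f"buffer of roughly {buffer_gbytes:.2f} GB")   # ~1.25 GB, in line with the figure above

# Toy longest-prefix-match lookup over an invented table. A real core router
# searches 100,000+ entries per packet, using tries or TCAMs rather than a scan.
table = {
    "10.0.0.0/8": "next-hop A",
    "10.1.0.0/16": "next-hop B",
    "10.1.2.0/24": "next-hop C",
}

def lookup(destination):
    dst = ipaddress.ip_address(destination)
    best_len, best_hop = -1, None
    for prefix, hop in table.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and net.prefixlen > best_len:
            best_len, best_hop = net.prefixlen, hop
    return best_hop

print(lookup("10.1.2.3"))   # the /24 entry wins over the /16 and the /8
```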
Optical switching technology is much better suited to circuit switches. Devices such as tunable lasers, MEMS switches, fiber amplifiers and DWDM multiplexers provide the technology to build extremely high capacity, low power circuit switches that are well beyond the capacities possible in electronic routers [15].
In summary, packet switches and IP linecards have to perform more operations on the incoming data. This requires more chips, both for logic functions and buffering; in addition, these chips are more complex. In contrast, circuit switches are simpler, which allows them to have higher capacities and to be implemented in optics.
“Packet technology is just inherently much less expensive and more flexible than circuit switches.” — CTO of Sonus.
IP networks are usually marketed as having a lower cost of ownership than the corresponding circuit-switched network, and so they should displace circuit switching from the parts of the network that it still dominates; however, this has not (yet) happened. For example, Voice over IP (VoIP) promises lower communication costs because of the statistical multiplexing gain of packet switching and the sharing of the physical infrastructure between data and voice traffic. Despite these potential long-term cost savings, less than 6% of all international traffic used VoIP in 2001 [38, 98]. VoIP has become less attractive because fierce competition among phone companies has dramatically driven down the prices of long-distance calls [26]. In addition, the cost savings of a single infrastructure can only be realized in new buildings.

One of the most important factors in determining a network architecture is the total cost of ownership. Given two options with equivalent technical capabilities, the least expensive option is the one that gets deployed in the long term. So, in order to see whether IP will conquer the world of communications, one needs to answer this question: is there something inherent in packet switching that makes packet-switched networks less expensive to build and operate? Here, the metric to study is the total cost per bit/s of capacity.

As we saw in Section 2.3.1, the market for core routers is much smaller than that for circuit switches. One could argue that this difference exists because routers are far less expensive than circuit switches and that carriers are stuck supporting expensive legacy circuit-switched equipment; however, IP, SONET/SDH and DWDM reached maturity at almost the same time,^7 so a historical advantage does not seem to be a valid explanation for the market sizes. A more likely explanation is that there are simply more circuit switches than routers in the core because routers are
(^7) In April 1995, the commercial Internet was born after the decommissioning of the NSFnet. In March 1994, Sprint first announced its deployment of directional SONET rings. The first deployments of WDM date from June 1996.