









Fast Ethernet
In computer networking, Fast Ethernet is a collective term for a number of Ethernet standards
that carry traffic at the nominal rate of 100 Mbit/s, against the original Ethernet speed of 10
Mbit/s. Of the fast Ethernet standards 100BASE-TX is by far the most common and is supported
by the vast majority of Ethernet hardware currently produced. Fast Ethernet was introduced in
1995 and remained the fastest version of Ethernet for three years before being superseded
by gigabit Ethernet.
Fast Ethernet is an extension of the existing Ethernet standard. It runs on UTP data or optical fiber cable and uses CSMA/CD in a star-wired bus topology, similar to 10BASE-T, where all cables are attached to a hub. It provides compatibility with existing 10BASE-T systems and thus enables plug-and-play upgrades from 10BASE-T. Fast Ethernet is sometimes referred to as 100BASE-X, where X is a placeholder for the FX and TX variants.
The 100 in the media type designation refers to the transmission speed of 100 Mbit/s. The "BASE" refers to baseband signalling, which means that only Ethernet signals are carried on the medium. The TX, FX, and T4 refer to the physical medium that carries the signal. A Fast Ethernet adapter can be logically divided into a Media Access Controller (MAC), which deals with the higher-level issues of medium availability, and a Physical Layer Interface (PHY). The MAC may be linked to the PHY by a 4-bit 25 MHz synchronous parallel interface known as the Media Independent Interface (MII) or a 2-bit 50 MHz variant, the Reduced Media Independent Interface (RMII). Repeaters (hubs) are also allowed and connect to multiple PHYs for their different interfaces. The MII may (rarely) be an external connection but is usually a connection between ICs in a network adapter or even within a single IC. The specs are written on the assumption that the interface between MAC and PHY will be an MII, but they do not require it. The MII fixes the theoretical maximum data bit rate for all versions of Fast Ethernet at 100 Mbit/s. The data signaling rate actually observed on real networks is less than the theoretical maximum, due to the necessary header and trailer (addressing and error-detection bits) on every frame, the occasional "lost frame" due to noise, and time spent waiting after each sent frame for other devices on the network to finish transmitting.
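The MII figures above can be checked with a little arithmetic: a parallel interface's raw rate is its width times its clock, and the gap between the 100 Mbit/s line rate and observed throughput follows from standard Ethernet frame overheads. A small sketch (the overhead byte counts are standard Ethernet values, not taken from the text above):

```python
# Why the MII caps Fast Ethernet at 100 Mbit/s, and why real throughput is lower.

def interface_rate_mbps(width_bits, clock_mhz):
    """Raw bit rate of a synchronous parallel interface: width x clock."""
    return width_bits * clock_mhz

assert interface_rate_mbps(4, 25) == 100    # MII:  4 bits at 25 MHz
assert interface_rate_mbps(2, 50) == 100    # RMII: 2 bits at 50 MHz

# Best-case payload efficiency for a maximum-size frame:
# 1500 B payload inside a 1518 B frame, plus an 8 B preamble/SFD and a
# 12 B minimum inter-frame gap on the wire.
payload, frame, preamble, ifg = 1500, 1518, 8, 12
efficiency = payload / (frame + preamble + ifg)
print(f"best-case payload efficiency: {efficiency:.1%}")  # ~97.5%
```

Noise, collisions, and smaller frames push the observed rate lower still, which is the point the paragraph above makes.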
Gigabit Ethernet
Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second, as defined by the IEEE 802.3-2008 standard. Half-duplex gigabit links connected through hubs are allowed by the specification, but in the marketplace full-duplex with switches is normal.
Token Ring
Token ring is a local area network (LAN) protocol which resides at the data link layer (DLL) of the OSI model. It uses a special three-byte frame called a token
that travels around the ring. Token-possession grants the possessor permission to transmit on the medium. Token ring frames travel completely around the loop. Stations on a token ring LAN are logically organized in a ring topology with data being transmitted sequentially from one ring station to the next with a control token circulating around the ring controlling access. This token passing mechanism is shared by ARCNET, token bus, and FDDI, and has theoretical advantages over the stochastic CSMA/CD of Ethernet.
Physically, a token ring network is wired as a star, with 'hubs' and arms out to each station and the loop going out-and-back through each.
Cabling is generally IBM "Type-1" shielded twisted pair, with unique hermaphroditic connectors, commonly referred to as IBM data connectors. The connectors have the disadvantage of being quite bulky, requiring at least 3 x 3 cm of panel space, and being relatively fragile. Initially (in 1985) token ring ran at 4 Mbit/s, but in 1989 IBM introduced the first 16 Mbit/s token ring products and the 802.5 standard was extended to support this. Earlier, Apollo Computer had introduced its proprietary 12 Mbit/s Apollo token ring (ATR) in 1981, and Proteon introduced its 10 Mbit/s ProNet-10 token ring network in 1984. However, IBM token ring was not compatible with ATR or ProNet-10.
Each station passes or repeats the special token frame around the ring to its nearest downstream neighbour. This token-passing process is used to arbitrate access to the shared ring media. Stations that have data frames to transmit must first acquire the token before they can transmit them. Token ring LANs normally use differential Manchester encoding of bits on the LAN
media.
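The token-passing arbitration described above can be sketched in a few lines. This is a deliberately minimal model (not the 802.5 frame format or priority scheme): the token visits stations in ring order, and only the current holder may transmit:

```python
from collections import deque

def token_ring_round(stations, pending):
    """One circulation of the token around the ring.
    `pending` maps station -> deque of frames waiting to be sent.
    Returns the order in which frames were transmitted."""
    sent = []
    for st in stations:          # token visits each station in ring order
        if pending.get(st):      # a station transmits only while holding the token
            sent.append((st, pending[st].popleft()))
    return sent

pending = {"A": deque(["frame1"]), "C": deque(["frame2"])}
order = token_ring_round(["A", "B", "C", "D"], pending)
print(order)  # [('A', 'frame1'), ('C', 'frame2')]
```

Unlike CSMA/CD, access is deterministic: there are no collisions, and each station's worst-case wait is bounded by one circulation of the token.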
IBM popularized the use of token ring LANs in the mid 1980s when it released its IBM token ring architecture based on active MAUs (Media Access Unit, not to be confused with Medium
Attachment Unit) and the IBM Structured Cabling System. The Institute of Electrical and Electronics Engineers (IEEE) later standardized a token ring LAN system as IEEE 802.5.
Token ring LAN speeds of 4 Mbit/s and 16 Mbit/s were standardized by the IEEE 802.5 working group. An increase to 100 Mbit/s was standardized and marketed during the wane of token ring's existence, while a 1000 Mbit/s speed was actually approved in 2001, but no products were ever brought to market. When token ring LANs were first introduced at 4 Mbit/s, there were widely circulated claims that they were superior to Ethernet,[3] but these claims were fiercely debated.
With the development of switched Ethernet and faster variants of Ethernet, token ring architectures lagged behind Ethernet, and the higher sales of Ethernet allowed economies of scale which drove down prices further, giving Ethernet a compelling price advantage.
Token ring networks have since declined in usage and standards activity has come to a standstill as 100 Mbit/s switched Ethernet has dominated the LAN/layer-2 networking market.
LAN interconnecting devices: Hubs, Switches, Bridges, Routers, Gateways
Hubs are used to build a LAN by connecting different computers in a star/hierarchical network topology, nowadays the most common type of LAN topology. A hub is a very simple (or dumb) device: when it receives bits of data sent from computer A to B, it does not check the destination; instead, it forwards that signal to all other computers (B, C, D, ...) within the network. B will then pick it up while the other nodes discard it. This means the bandwidth of the network is shared by all traffic.
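The flood-to-all behaviour is easy to model. In this sketch (port names and frame layout are invented for illustration), the hub never looks at the destination address; filtering is left to the stations:

```python
def hub_forward(ports, src_port, frame):
    """A hub repeats an incoming signal to every port except the one it
    arrived on; it never inspects the destination address."""
    return {p: frame for p in ports if p != src_port}

delivered = hub_forward(["A", "B", "C", "D"], "A", {"dst": "B", "data": "hi"})
# B keeps the frame; C and D also receive it and simply discard it,
# which is why all stations on a hub share one collision domain.
assert set(delivered) == {"B", "C", "D"}
```

A switch, by contrast, would consult a learned address table and deliver the frame to port B only.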
Routers use NAT (Network Address Translation) in conjunction with IP Masquerading to provide
the internet to multiple nodes in the LAN under a single IP address.
Nowadays, routers come with hub or switch technology to connect computers directly.
OSI: Routers work on the network layer so they can filter data based on IP addresses. They have route tables to store network addresses and forward packets to the right port.
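The NAT/IP-masquerading idea mentioned above can be sketched as a translation table: many private hosts share one public address, and the router remembers which public port maps back to which private host. All addresses and ports here are made-up examples:

```python
import itertools

nat_table = {}                       # public port -> (private ip, private port)
_port_pool = itertools.count(40000)  # hypothetical pool of public source ports

def outbound(private_ip, private_port, public_ip="203.0.113.1"):
    """Rewrite an outgoing packet's source and record the mapping."""
    pub_port = next(_port_pool)
    nat_table[pub_port] = (private_ip, private_port)
    return (public_ip, pub_port)     # the source the Internet sees

def inbound(public_port):
    """Translate a reply back to the private host that owns the mapping."""
    return nat_table[public_port]

src = outbound("192.168.1.10", 5555)
assert src == ("203.0.113.1", 40000)
assert inbound(40000) == ("192.168.1.10", 5555)
```

This is why a whole LAN can reach the Internet under a single public IP address: the port number in the mapping disambiguates the returning traffic.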
Gateways are very intelligent devices, or they can be a computer running the appropriate software, that connect and translate data between networks with different protocols or architectures, so their work is much more complex than that of a normal router. For instance, a gateway can allow communication between TCP/IP clients and IPX/SPX or AppleTalk.
OSI: Gateways operate at the network layer and above, but most of them at the application layer.
P.S. The term Gateway is used to refer to routers in some articles so beware. In this case, the router has gateway software. And Default Gateway is used to refer to the node (e.g. router) connecting the LAN to the outside (e.g. internet).
Repeaters are simple devices that work at the physical layer of the OSI model. They regenerate signals (active hubs do this too).
There is an important rule to obey when using repeaters/hubs to extend a local network, called the 5-4-3 rule or the IEEE way. The rule states that in a single collision domain there should be no more than 5 segments and 4 repeaters between any two hosts in the network, and only 3 of the segments can be populated (contain user connections).
This rule ensures that a signal sent over the network will reach every part of it within an
acceptable length of time.
If the network is bigger, the collision domain can be divided into two parts or more using a
switch or a bridge.
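The 5-4-3 check is simple enough to express as a predicate, which also makes the limiting cases explicit:

```python
def obeys_5_4_3(segments, repeaters, populated_segments):
    """Check the 5-4-3 rule for one collision domain: at most 5 segments
    and 4 repeaters between any two hosts, and at most 3 of those
    segments populated with user connections."""
    return segments <= 5 and repeaters <= 4 and populated_segments <= 3

assert obeys_5_4_3(5, 4, 3)        # the limiting case is still allowed
assert not obeys_5_4_3(6, 4, 3)    # too many segments: split the collision
                                   # domain with a switch or bridge instead
```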
Introduction of WANs
A WAN is a data communications network that covers a relatively broad geographic area and that
often uses transmission facilities provided by common carriers, such as telephone companies.
WAN technologies generally function at the lower three layers of the OSI reference model: the
physical layer,the data link layer, and the network layer.
A point-to-point link provides a single, pre-established WAN communications path from the customer premises through a carrier network, such as a telephone company, to a remote network. Point-to-point lines are usually leased from a carrier and thus are often called leased lines. For a point-to-point line, the carrier allocates pairs of wire and facility hardware to your line only. These circuits are generally priced based on bandwidth required and distance between the two connected points. Point-to-point links are generally more expensive than shared services such as Frame Relay.
Switched circuits allow data connections that can be initiated when needed and terminated when
communication is complete. This works much like a normal telephone line works for voice
communication. Integrated Services Digital Network (ISDN) is a good example of circuit switching. When a router has data for a remote site, the switched circuit is initiated with the circuit number of the remote network. In the case of ISDN circuits, the device actually places a call to the telephone number of the remote ISDN circuit. When the two networks are connected and authenticated, they can transfer data. When the data transmission is complete, the call can be terminated.
Packet switching is a WAN technology in which users share common carrier resources. Because this allows the carrier to make more efficient use of its infrastructure, the cost to the customer is generally much lower than with point-to-point lines. In a packet-switching setup, networks have connections into the carrier's network, and many customers share the carrier's network. The carrier can then create virtual circuits between customers' sites by which packets of data are delivered from one to the other through the network. The section of the carrier's network that is shared is often referred to as a cloud. Some examples of packet-switching networks include Asynchronous Transfer Mode (ATM), Frame Relay, Switched Multimegabit Data Services (SMDS), and X.25.
Routing is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network (circuit switching), electronic data networks (such as the Internet), and transportation networks. These notes are concerned primarily with routing in electronic data networks using packet-switching technology.
In packet switching networks, routing directs packet forwarding, the transit of logically
addressed packets from their source toward their ultimate destination through intermediate nodes, typically hardware devices called routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing. Most routing algorithms use only one network path at a time, but multipath routing techniques enable the use of multiple alternative paths.
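The table-driven forwarding described above can be sketched with a longest-prefix match, which is how IP routers resolve overlapping routes (the prefixes and interface names here are made-up examples):

```python
import ipaddress

# A tiny routing table: destination prefix -> outgoing interface.
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"):   "eth0",   # default route
}

def lookup(dst):
    """Forwarding decision: among all matching prefixes, the most
    specific (longest) one wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

assert lookup("10.1.2.3") == "eth2"   # /16 beats /8
assert lookup("10.9.9.9") == "eth1"
assert lookup("8.8.8.8")  == "eth0"   # falls through to the default route
```

Real routers keep such tables in specialized memory and populate them via routing protocols rather than by hand, but the lookup logic is the same.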
Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Because structured addresses allow a single routing-table entry to represent the route to a group of devices, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging) in large networks, and has become the dominant form of addressing on the Internet, though bridging is still widely used within localized environments.
Congestion Control
In data networking and queueing theory, network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either only to a small increase in network throughput, or to an actual reduction in network throughput.
A virtual circuit is a logical circuit created within a shared network between two network devices. Two types of virtual circuits exist: switched virtual circuits (SVCs) and permanent virtual circuits (PVCs). SVCs are virtual circuits that are dynamically established on demand and terminated when transmission is complete. Communication over an SVC consists of three phases: circuit establishment, data transfer, and circuit termination. The establishment phase involves creating the virtual circuit between the source and destination devices. Data transfer involves transmitting data between the devices over the virtual circuit, and the circuit termination phase involves tearing down the virtual circuit between the source and destination devices. SVCs are used in situations in which data transmission between devices is sporadic, largely because SVCs increase bandwidth used due to the circuit establishment and termination phases, but they decrease the cost associated with constant virtual circuit availability.
A PVC is a permanently established virtual circuit that consists of one mode: data transfer. PVCs are used in situations in which data transfer between devices is constant. PVCs decrease the bandwidth use associated with the establishment and termination of virtual circuits, but they increase costs due to constant virtual circuit availability. PVCs are generally configured by the service provider when an order is placed for service.
Dialup services offer cost-effective methods for connectivity across WANs. Two popular dialup implementations are dial-on-demand routing (DDR) and dial backup. DDR is a technique whereby a router can dynamically initiate a call on a switched circuit when it needs to send data. In a DDR setup, the router is configured to initiate the call when certain criteria are met, such as a particular type of network traffic needing to be transmitted. When the connection is made, traffic passes over the line. The router configuration specifies an idle timer that tells the router to drop the connection when the circuit has remained idle for a certain period.
Dial backup is another way of configuring DDR. However, in dial backup, the switched circuit is used to provide backup service for another type of circuit, such as point-to-point or packet switching. The router is configured so that when a failure is detected on the primary circuit, the dial backup line is initiated. The dial backup line then supports the WAN connection until the primary circuit is restored. When this occurs, the dial backup connection is terminated.
WANs use numerous types of devices that are specific to WAN environments. WAN switches, access servers, modems, CSU/DSUs, and ISDN terminal adapters are discussed in the following sections. Other devices found in WAN environments that are used in WAN implementations include routers, ATM switches, and multiplexers.
Distributed Queue Dual Bus (DQDB)
In telecommunication, a distributed-queue dual-bus network (DQDB) is a distributed multi- access network that (a) supports integrated communications using a dual bus and distributed queuing (b) provides access to local or metropolitan area networks
(c) supports connectionless data transfer, connection-oriented data transfer, and isochronous communications, such as voice communications.
IEEE 802.6 is an example of a network providing DQDB access methods.
DQDB concept of operation
The DQDB Medium Access Control (MAC) algorithm is generally credited to Robert Newman who developed this algorithm in his PhD thesis in the 1980s at the University of Western Australia. To appreciate the innovative value of the DQDB MAC algorithm, it must be seen
against the background of LAN protocols at that time, which were based on broadcast (such as Ethernet IEEE 802.3) or a ring (like token ring IEEE 802.5 and FDDI). The DQDB may be thought of as two token rings, one carrying data in each direction around the ring. The ring is broken between two of the nodes in the ring. (An advantage of this is that if the ring breaks somewhere else, the broken link can be closed to form a ring with only one break again. This gives reliability, which is important in Metropolitan Area Networks (MANs), where repairs may take longer than in a LAN because the damage may be inaccessible.)
The DQDB standard IEEE 802.6 was developed while ATM (Broadband ISDN) was still in early development, but there was strong interaction between the two standards. ATM cells and DQDB frames were harmonized. They both settled on essentially a 48-byte data frame with a 5-byte header. In the DQDB algorithm, a distributed queue was implemented by communicating queue state information via the header. Each node in a DQDB network maintains a pair of state variables which represent its position in the distributed queue and the size of the queue. The headers on the reverse bus communicated requests to be inserted in the distributed queue so that upstream nodes would know that they should allow DQDB cells to pass unused on the forward bus. The algorithm was remarkable for its extreme simplicity.
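The distributed queue can be sketched with the two counters each node keeps per bus, conventionally called RQ (request) and CD (countdown). The event interface below is invented for illustration; only the counter logic follows the algorithm described above:

```python
class DqdbNode:
    def __init__(self):
        self.rq = 0      # requests seen from downstream nodes (reverse bus)
        self.cd = None   # slots still owed to earlier requests; None = idle

    def see_request(self):
        """A request bit on the reverse bus: a downstream node wants a slot."""
        self.rq += 1

    def queue_frame(self):
        """We now have a segment to send: join the distributed queue
        behind all currently outstanding downstream requests."""
        self.cd, self.rq = self.rq, 0

    def see_empty_slot(self):
        """An empty slot passes on the forward bus.
        Returns True if this node uses the slot to transmit."""
        if self.cd is None:
            self.rq = max(self.rq - 1, 0)   # slot serves someone downstream
            return False
        if self.cd > 0:
            self.cd -= 1                    # let earlier requests go first
            return False
        self.cd = None
        return True                          # our turn: fill this slot

node = DqdbNode()
node.see_request(); node.see_request()   # two downstream requests pending
node.queue_frame()                       # we queue behind both of them
uses = [node.see_empty_slot() for _ in range(3)]
print(uses)  # [False, False, True] -- the third empty slot is ours
```

The simplicity is the point: two counters per node implement a fair, global FIFO queue without any central arbiter.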
Currently DQDB systems are being installed by many carriers in entire cities, with lengths that reach up to 160 km (100 miles) at the speed of a DS3 line (44.736 Mbit/s).[5] Other implementations use optical fiber for lengths of up to 100 km and speeds around 150 Mbit/s.
Synchronous Digital Hierarchy (SDH)
SDH (Synchronous Digital Hierarchy) is an international standard for high-speed telecommunication over optical/electrical networks which can transport digital signals in variable capacities. It is a synchronous system which intends to provide a more flexible, yet simple, network infrastructure. SDH (and its American variant, SONET) emerged from standards bodies around 1990. These two standards created a revolution in communication networks based on optical fibers, in both their cost and performance. Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers or light-emitting diodes (LEDs).
SONET and SDH, which are essentially the same, were originally designed to transport circuit-mode communications (e.g., DS1, DS3) from a variety of different sources, but they were primarily designed to support real-time, uncompressed, circuit-switched voice encoded in PCM format. The primary difficulty in doing this prior to SONET/SDH was that the synchronization sources of these various circuits were different. This meant that each circuit was actually operating at a slightly different rate and with different phase. SONET/SDH allowed for the simultaneous transport of many different circuits of differing origin within a single framing protocol. The frame overhead also supports management functions, such as monitoring transmission quality, detecting failures, managing alarms, data communication channels, service channels, etc. The STM frame is continuous and is transmitted in a serial fashion: byte-by-byte, row-by-row.
Asynchronous Transfer Mode (ATM)
Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing, and it encodes data into small, fixed-sized cells. This differs from networks such as the Internet or Ethernet LANs that use variable-sized packets or frames. ATM provides data link layer services that run over OSI Layer 1 physical links. ATM has functional similarity with both circuit-switched networking and small-packet switched networking. This makes it a good choice for a network that must handle both traditional high-speed data traffic (e.g., file transfers) and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins. ATM is a core protocol used over the SONET/SDH backbone of the Integrated Services Digital Network (ISDN).
Thus, the designers of ATM utilized small data cells to reduce jitter (delay variance, in this case) in the multiplexing of data streams. Reduction of jitter (and also end-to-end round-trip delays) is particularly important when carrying voice traffic, because the conversion of digitized voice into an analog audio signal is an inherently real-time process, and to do a good job, the codec that
does this needs an evenly spaced (in time) stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence or guess — and if the data is late, it is useless, because the time period when it should have been converted to a signal has already passed.
At the time of the design of ATM, 155 Mbit/s SDH (135 Mbit/s payload) was considered a fast optical network link, and many PDH links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the USA, and 2 to 34 Mbit/s in Europe.
At 155 Mbit/s, a typical full-length 1500-byte (12,000-bit) data packet would take 77.42 μs to transmit. On a lower-speed link, such as a 1.544 Mbit/s T1 link, a 1500-byte packet would take up to 7.8 milliseconds.
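These serialization delays, and the "factor of almost 30" jitter reduction claimed later for 53-byte cells, are straightforward to verify:

```python
def tx_time_s(bits, rate_bps):
    """Time to serialize `bits` onto a link of the given bit rate."""
    return bits / rate_bps

pkt_bits, cell_bits = 1500 * 8, 53 * 8   # full-size packet vs one ATM cell

t_sdh = tx_time_s(pkt_bits, 155e6)       # ~77.4 us at 155 Mbit/s
t_t1  = tx_time_s(pkt_bits, 1.544e6)     # ~7.8 ms on a T1
t_cell_t1 = tx_time_s(cell_bits, 1.544e6)  # ~0.27 ms for one cell on a T1

print(f"1500 B at 155 Mbit/s: {t_sdh*1e6:.2f} us")
print(f"1500 B at T1 speed:   {t_t1*1e3:.1f} ms")
# A voice cell never waits behind more than one cell in flight, so the
# worst-case wait shrinks by the packet-to-cell size ratio:
print(f"jitter reduction factor: {pkt_bits / cell_bits:.1f}")  # ~28.3
```

The 28.3 ratio is what the text rounds to "a factor of almost 30".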
A packet voice system can produce this in a number of ways:
■ Have a playback buffer between the network and the codec, one large enough to tide the codec over almost all the jitter in the data. This allows smoothing out the jitter, but the delay introduced by passage through the buffer would require echo cancellers even in local networks; this was considered too expensive at the time. Also, it would have increased the delay across the channel, and conversation is difficult over high-delay channels.
■ Build a system which can inherently provide low jitter (and minimal overall delay) to traffic which needs it.
■ Operate on a 1:1 user basis (i.e., a dedicated pipe).
The design of ATM aimed for a low-jitter network interface. However, to be able to provide short queueing delays but also carry large datagrams, it had to have cells. ATM broke up all packets, data, and voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be reassembled later.
The choice of 48 bytes was political rather than technical.[3] When the CCITT was standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice; parties from Europe wanted 32-byte payloads because the small size (and therefore short transmission times) simplifies voice applications with respect to echo cancellation. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length. With 32 bytes, France would have been able to implement an ATM-based voice network with calls from one end of France to the other requiring no echo cancellation. 48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information.
ATM multiplexed these 53-byte cells instead of packets. Doing so reduced the worst-case jitter due to cell contention by a factor of almost 30, minimizing the need for echo cancellers.
Frame Relay
Frame Relay is a standardized wide area network technology that specifies the physical and logical link layers of digital telecommunications channels using a packet-switching methodology. Originally designed for transport across Integrated Services Digital Network (ISDN) infrastructure, it may be used today in the context of many other network interfaces.
Network providers commonly implement Frame Relay for voice (VoFR) and data as an encapsulation technique, used between local area networks (LANs) over a wide area network (WAN). Each end-user gets a private line (or leased line) to a Frame Relay node. The Frame Relay network handles the transmission over a frequently changing path transparent to all end-users.
Proxy Server
A proxy server is a server that acts as an intermediary for requests from clients seeking resources from other servers. A proxy may alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server. In this case, it 'caches' responses from the remote server, and returns subsequent requests for the same content directly.
The proxy concept was invented in the early days of distributed systems as a way to simplify and control their complexity. Today, most proxies are web proxies, allowing access to content on the World Wide Web.
A proxy server has a large variety of potential purposes, including:
■ Keeping machines behind it anonymous, mainly for security.
■ Speeding up access to resources (using caching).
■ Applying access policy to network services or content, e.g. to block undesired sites.
■ Logging / auditing usage.
■ Scanning transmitted content for malware before delivery.
A proxy server that passes requests and responses unmodified is usually called a gateway or sometimes tunneling proxy.
A proxy server can be placed in the user's local computer or at various points between the user and the destination servers on the Internet.
A reverse proxy is (usually) an Internet-facing proxy used as a front-end to control and protect access to a server on a private network, commonly also performing tasks such as load-balancing, authentication, decryption or caching.
Forward proxies
A forward proxy taking requests from an internal network and forwarding them to the Internet.
Forward proxies are proxies where the client names the target server to connect to. Forward proxies are able to retrieve from a wide range of sources (in most cases, anywhere on the Internet).
The terms "forward proxy" and "forwarding proxy" are a general description of behaviour (forwarding traffic) and thus ambiguous. Except for the reverse proxy, the types of proxies described here are more specialized sub-types of the general forward proxy concept.
Open proxies
An open proxy forwarding requests from and to anywhere on the Internet.
An open proxy is a forwarding proxy server that is accessible by any Internet user. Gordon Lyon estimates there are "hundreds of thousands" of open proxies on the Internet. An anonymous open proxy allows users to conceal their IP address while browsing the Web or using other Internet services.
Reverse proxies
A reverse proxy taking requests from the Internet and forwarding them to servers in an internal network. Those making requests connect to the proxy and may not be aware of the internal network.
A reverse proxy is a proxy server that appears to clients to be an ordinary server. Requests are
forwarded to one or more origin servers which handle the request. The response is returned as if it came directly from the proxy server.
Reverse proxies are installed in the neighborhood of one or more web servers. All traffic coming
from the Internet and with a destination of one of the web servers goes through the proxy server. The use of "reverse" originates in its counterpart "forward proxy" since the reverse proxy sits closer to the web server and serves only a restricted set of websites.
There are several reasons for installing reverse proxy servers:
The client–server characteristic describes the relationship of cooperating programs in an
application. The server component provides a function or service to one or many clients, which initiate requests for such services.
Functions such as email exchange, web access and database access, are built on the client–server
model. Users accessing banking services from their computer use a web browser client to send a request to a web server at a bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve the account information. The balance is returned to the bank database client, which in turn serves it back to the web browser client displaying the results to the user. The client–server model has become one of the central ideas of network computing. Many business applications being written today use the client–server model. So do the Internet's main application protocols, such as HTTP, SMTP, Telnet, and DNS.
A server may be challenged beyond its capabilities, in which case a single server can become a bottleneck. However, servers may be cloned and networked to fulfill all known capacity and performance requirements. Limitations include network load, network address volume, and transaction recovery time.
Aspects of comparison for other architectural concepts today include cloud computing as well, each with its own design-decision considerations.
Windows NT/ 2000
Microsoft Windows is a series of operating systems produced by Microsoft.
Microsoft introduced an operating environment named Windows on November 20, 1985 as an
add-on to MS-DOS in response to the growing interest in graphical user interfaces (GUIs). [2] Microsoft Windows came to dominate the world's personal computer market, overtaking Mac OS, which had been introduced in 1984. As of October 2009, Windows had approximately 90% of the market share of the client operating systems for usage on the Internet.[3][4][5]
The most recent client version of Windows is Windows 7; the most recent server version is Windows Server 2008 R2; the most recent mobile version is Windows Phone 7.
Windows NT family
The NT family of Windows systems was fashioned and marketed for higher reliability business
use. The first release was NT 3.1 (1993), numbered "3.1" to match the consumer Windows version, which was followed by NT 3.5 (1994), NT 3.51 (1995), NT 4.0 (1996), and Windows 2000, which is the last NT-based Windows release that does not include Microsoft Product Activation. Windows NT 4.0 was the first in this line to implement the "Windows 95" user interface (and the first to include Windows 95’s built-in 32-bit runtimes).
Microsoft then moved to combine their consumer and business operating systems with Windows XP, released in August 2001. It came in both home and professional versions (and later niche-market versions for tablet PCs and media centers); they also diverged release schedules for server operating systems. Windows Server 2003, released a year and a half after Windows XP, brought Windows Server up to date with Windows XP. After a lengthy development process, Windows Vista was released toward the end of 2006, and its server counterpart, Windows Server 2008, was released in early 2008. On July 22, 2009, Windows 7 and Windows Server 2008 R2 were released as RTM (release to manufacturing). Windows 7 was released on October 22, 2009.