IP for 3G (Part 3)

…tions as Internet standards. Appeals against decisions made by the IESG can
be made to the IAB, the Internet Architecture Board. This technical advisory
body aims to maintain a cohesive picture of the Internet architecture.
Finally, IANA, the Internet Assigned Numbers Authority, has responsibility
for the assignment of unique parameter values (e.g. port numbers). The ISOC is
responsible only for the development of Internet networking standards.
Separate organisations exist for the development of many other aspects of
the ‘Internet’ as we know it today; for example, Web development takes
place in a completely separate organisation. There remains a clear distinc-
tion between the development of the network and the applications and
services that use the network.
Within this overall framework, the main standardisation work occurs
within the IETF and its working groups. This body is significantly different
from conventional standards bodies such as the ITU, International Telecom-
munication Union, in which governments and the private sector co-ordi-
nate global telecommunications networks and services, or ANSI, the
American National Standards Institute, which again involves both
public-sector bodies and private-sector companies. The private sector in these organisa-
tions is often accused of promoting its own patented technology solutions
to any particular problem, whilst the use of patented technology is avoided
within the IETF. Instead, the IETF working groups and meetings are open to
any person who has anything to contribute to the debate. This does not of
course prevent groups of people with similar interest all attending. Busi-
nesses have used this route to ensure that their favourite technology is given
a strong (loud) voice.
The work of the IETF and the drafting of standards are devolved to specific
working groups. Each working group belongs to one of nine functional
areas, ranging from Applications to Sub-IP. These working groups,
which focus on one specific topic, are formed when there is a sufficient
weight of interest in a particular area.
Figure 3.2 The organisation of the Internet Society.
At any one time, there may be in the
order of 150 working groups. Anybody can make a written contribution to
the work of a group; such a contribution is known as an Internet Draft. Once
a draft has been submitted, comments may be made on the e-mail list, and if
all goes well, the draft may be formally considered at the next IETF meeting.
These IETF meetings are attended by upwards of 2000 individual delegates.
Within the meeting, many parallel sessions are held by each of the working
groups. The meetings also provide a time for ‘BOF’, Birds of a Feather,
sessions where people interested in working on a specific task can see if
there is sufficient interest to generate a new working group. Any Internet
Draft has a lifetime of 6 months, after which it is updated and re-issued
following e-mail discussion, adopted, or, most likely, dropped. Adopted
drafts become RFCs – Request For Comments – for example, IP itself is
described in RFC 791. Working groups are disbanded once they have
completed the work of their original charter.
Within the development of Internet standards, the working groups
generally aim to find a consensus solution based on the technical quality
of the proposal. Where consensus cannot be reached, different working
groups may be formed that each look at different solutions. Often, this
leads to two or more different solutions, each becoming standard. These
will be incompatible solutions to the same problem. In this situation, the
market will determine which is its preferred solution. This avoids the
problem, often seen in the telecommunications environment, where a
single, compromise, standard is developed that has so many optional
components to cover the interests of different parties that different imple-
mentations of the standard do not work together. Indeed, the requirement
for simple protocol definitions that, by avoiding compromise and
complexity, lead to good implementations is a very important focus in
protocol definition. To achieve full standard status, there should be at
least two independent, working, compatible implementations of the
proposed standard. Another indication of how important actual implemen-
tations are to the Internet standardisation process can currently be seen
in the QoS community. The Integrated Service Architecture, as described
in the QoS chapter, has three service definitions, a guaranteed service, a
controlled load service, and a best effort service. Over time, it has become
clear that implementations are not accurate to the service definitions.
Therefore, there is a proposal to produce an informational RFC that
provides service definitions in line with the actual implementations, thus
promoting a pragmatic approach to inter-operability.
The IP standardisation process is very dynamic – it has a wide range of
contributors, and the debate at meetings and on e-mail lists can be very
heated. The nature of the work is such that only those who are really interested
in a topic become involved, and they are only listened to if they are deemed to
be making sense. It has often been suggested that this dynamic process is one
of the reasons that IP has been so successful over the past few years.
3.4 IP Design Principles
In following IETF e-mail debates, it is useful to understand some of the
underlying philosophy and design principles that are usually strongly
adhered to by those working on Internet development. However, it is
worth remembering that RFC 1958, 'Architectural Principles of the Inter-
net', does state that "the principle of constant change is perhaps the only
principle of the Internet that should survive indefinitely" and, further, that
"engineering feed-back from real implementations is more important than
any architectural principles".
Two of these key principles, layering and the end-to-end principle, have
already been mentioned in the introductory chapter as part of the discussion
of the engineering benefits of ‘IP for 3G’. However, this section begins with
what is probably the more fundamental principle: connectivity.
3.4.1 Connectivity
Providing connectivity is the key goal of the Internet. It is believed that
focusing on this, rather than on trying to guess what the connectivity
might be used for, has been behind the exponential growth of the Internet.
Since the Internet concentrates on connectivity, it has supported the devel-
opment not just of a single service like telephony but of a whole host of
applications all using the same connectivity. The key to this connectivity is
the inter-networking layer (hence the name 'Internet', from
'inter-networking') – the Internet Protocol provides one protocol that
allows for seamless operation over a whole range of different networks.
Indeed, the method of carrying IP packets has been defined for each of
the carriers illustrated in Figure 3.3. Further details can be found in
RFC 2549, 'IP over Avian Carriers with Quality of Service'.
Figure 3.3 Possible carriers of IP packets – satellite, radio, telephone wires, birds.
Each of these networks can carry IP data packets. IP packets, independent
of the physical network type, have the same common format and common
addressing scheme. Thus, it is easy to take a packet from one type of network
(satellite) and send it on over another network (such as a telephone network).
A useful analogy is the post network. Provided the post is put into an envel-
ope, the correct stamp added, and an address specified, the post will be
delivered by walking to the post office, then by van to the sorting office, and
possibly by train or plane towards its final destination. This only works
because everyone understands the rules (the posting protocol) that apply.
The carrier is unimportant. However, if, by mistake, an IP address is put on
the envelope, there is no chance of correct delivery. This would require a
translator (referred to elsewhere in this book as a ‘media gateway’) to trans-
late the IP address to the postal address.
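
To make the 'common format' concrete, here is a minimal sketch in Python
that packs the fixed 20-byte IPv4 header defined in RFC 791. The field
values and addresses are purely illustrative, and the header checksum
computation is omitted.

```python
import struct
import socket

def build_ipv4_header(src_ip: str, dst_ip: str, payload_len: int) -> bytes:
    """Pack a minimal 20-byte IPv4 header (RFC 791); checksum left at 0."""
    version_ihl = (4 << 4) | 5        # version 4, header length 5 x 32-bit words
    tos = 0                           # type of service
    total_length = 20 + payload_len   # header plus payload, in bytes
    identification = 0
    flags_fragment = 0
    ttl = 64                          # hop limit
    protocol = 17                     # 17 = UDP, 6 = TCP
    checksum = 0                      # normally computed over the header
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, tos, total_length, identification, flags_fragment,
        ttl, protocol, checksum,
        socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
    )

header = build_ipv4_header("192.0.2.1", "198.51.100.7", payload_len=100)
print(len(header))  # 20 -- the same format whatever network carries it
```

Whether the bytes then travel by satellite, radio, or telephone wire is
invisible at this layer, which is exactly the point of the common format.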
Connectivity, clearly a benefit to users, is also beneficial to the network
operators. Those that provide Internet connectivity immediately ensure that
their users can reach users world-wide, regardless of local network provi-
ders. To achieve this connectivity, the different networks need to be inter-
connected. They can achieve this either through peer–peer relationships
with specific carriers, or through connection to one of the (usually non-
profit) Internet exchanges. These exchanges exist around the world and
provide the physical connectivity between different types of network and
different network suppliers (the ISPs, Internet Service Providers). An example
of an Internet Exchange is LINX, the London Internet Exchange. This
exchange is significant because most transatlantic cables terminate in the
UK, and separate submarine cables then connect the UK, and hence the US,
to the rest of Europe. Thus, it is not surprising that LINX statistics show that
45% of the total Internet routing table is available by peering at LINX. A key
difference between LINX and, for example, the telephone systems that inter-
connect the UK and US, is its simplicity. The IP protocol ensures that inter-
working will occur. The exchange could be a simple piece of Ethernet cable
to which each operator attaches a standard router. The IP routing protocols
(discussed later) will then ensure that hosts on either network can commu-
nicate.
The focus on connectivity also has an impact on how protocol implemen-
tations are written. A good protocol implementation is one that works well
with other protocol implementations, not one that adheres rigorously to the
standards (since any natural language is open to ambiguity, two accurate
standard implementations may not actually inter-work). Throughout the
Internet's development, the focus has always been on
producing a system that works. Analysis, models, and optimisations are all
considered a lower priority. This connectivity principle carries over to the
wireless environment: applying the IP protocols invariably produces a system
that is less optimised, specifically less bandwidth-efficient, than current 2G
wireless systems. But a system may also be produced that gives wireless
users immediate access to the full connec-
tivity of the Internet, using standard programs and applications, whilst leav-
ing much scope for innovative, subIP development of the wireless transmis-
sion systems. Further, as wireless systems become broadband – like the
Hiperlan system, for example (Hiperlan and other wireless LAN technologies
operate in unregulated spectrum) – such efficiency concerns will become
less significant.
Connectivity was one of the key drivers for the original DoD network. The
DoD wanted a network that would provide connectivity, even if large parts
of the network were destroyed by enemy actions. This, in turn, led directly to
the connectionless packet network seen today, rather than a circuit network
such as that used in 2G mobile systems.
Circuit switched networks, illustrated in Figure 3.4, operate by the user
first requesting that a path be set up through the network to the destination
– dialling the telephone number. This message is propagated through the
network and at each switching point, information (state) is stored about
the request, and resources are reserved for use by the user. Only once the
path has been established can data be sent. This guarantees that data will
reach the destination. All the data to the destination will follow the same
path, and so will arrive in the order sent. In such a network, it is easy to
ensure that the delays data experience through the network are
constrained, as the resource reservation means that there is no possibility
of congestion occurring except at call set-up time (when a busy tone is
returned to the calling party). However, there is often a signifi-
cant time delay before data can be sent – it can easily take 10 s to
connect an international, or mobile, call. Further, this type of network
may be used inefficiently, as a full circuit's worth of resources is reserved
irrespective of whether it is used. This is the type of network used in
standard telephony and 2G mobile systems.
Figure 3.4 Circuit switched communications.
In a connectionless network (Figure 3.5), there is no need to establish a
path for the data through the network before data transmission. There is no
state information stored within the network about particular communica-
tions. Instead, each packet of data carries the destination address and can
be routed to that destination independently of the other packets that might
make up the transmission. There are no guarantees that any packet will reach
the destination, as it is not known whether the destination can be reached
when the data are sent. There is no guarantee that all data will follow the
same route to the destination, so there is no guarantee that the data will
arrive in the order in which they were sent. There is no guarantee that data
will not suffer long delays due to congestion. Whilst such a network may
seem to be much worse than the guaranteed network described above, its
original advantage from the DoD point of view was that such a network
could be made highly resilient. Should any node be destroyed, packets
would still be able to find alternative routes through the network. No state
information about the data transmission could be lost, as all the required
information is carried with each data packet.
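
A minimal sketch of connectionless sending, using Python's standard socket
API: no path is set up, each datagram carries its own destination address,
and nothing is guaranteed about delivery or ordering. The address shown is
a hypothetical receiver.

```python
import socket

# Connectionless send: no call setup, no state stored in the network.
# Each datagram independently carries its destination address.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
destination = ("198.51.100.7", 5000)  # hypothetical receiver

for seq in range(3):
    payload = f"packet {seq}".encode()
    sock.sendto(payload, destination)  # may be lost, reordered, or duplicated

sock.close()
```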
Another advantage of a connectionless network is that it is better suited to
the delivery of small messages; in a circuit-switched, connection-oriented
network, the data and time needed to establish a path would be significant
compared with the amount of useful data. Short messages,
such as data acknowledgements, are very common in the Internet. Indeed,
measurements suggest that half the packets on the Internet are no more than
100 bytes long (although more than half the total data transmitted comes in
large packets). Similarly, once a circuit has been established, sending small,
irregular data messages would be highly inefficient – wasteful of bandwidth,
as, unlike the packet network, other data could not access the unused
resources.
Although a connectionless network does not guarantee that all packets are
delivered without errors and in the correct order, it is a relatively simple task
for the end hosts to achieve these goals without any network functionality.
Figure 3.5 Packet switched network.
Indeed, it appears that the only functionality that is difficult to achieve with-
out some level of network functionality is that of delivering packets through
the network with a bounded delay. This functionality is not significant for
computer communications, or even for information download services, but
is essential if user–user interactive services (such as telephony) are to be
successfully transmitted over the Internet. As anyone with experience of
satellite communications will know, large delays in speech make it very
difficult to hold a conversation.
In general, to enable applications to maintain connectivity in the presence
of partial network failures, end-to-end protocols must not rely on state
information being held within the network. Thus,
services such as QoS that typically introduce state within the network need
to be carefully designed to ensure that minimal state is held within the
network, that service disruption on failure is minimal, and that, where
possible, the network is self-healing.
3.4.2 The End-to-end Principle
The second major design principle is the end-to-end principle. This is really a
statement that only the end systems can correctly perform functions that are
required from end-to-end, such as security and reliability, and therefore,
these functions should be left to the end systems. End systems are the
hosts that are actually communicating, such as a PC or mobile phone. Figure
3.6 illustrates the difference between the Internet’s end-to-end approach and
the approach of traditional telecommunication systems such as 2G mobile
systems. This end-to-end approach removes much of the complexity from
the network, and prevents unnecessary processing, as the network does not
need to provide functions that the terminal will need to perform for itself.
This principle does not prevent a communications system from providing,
as an enhancement, an incomplete version of a specific function (for
example, local error recovery over a lossy link).
As an example, we can consider the handling of corrupted packets.
Figure 3.6 Processing complexity within a telecommunications network, and distributed
to the end terminals in an Internet network.
During the transmission of data from one application to another, it is possible
that errors could occur. In many cases, these errors will need to be corrected
for the application to proceed correctly. It would be possible for the network
to ensure that corrupted packets were not delivered to the terminal by
running a protocol across each segment of the network that provided local
error correction. However, this is a slow process, and with modern and
reliable networks, most hops will have no errors to correct. The slowness
of the procedure will even cause problems to certain types of application,
such as voice, which prefer rapid data delivery and can tolerate a certain
level of data corruption. If accurate data delivery is important, despite the
network error correction, the application will still need to run an end-to-end
error correction protocol like TCP. This is because errors could still occur in
the data either in an untrusted part of the network or as it is handled on the
end terminals between the application sending/receiving the data and the
terminal transmitting/delivering the data. Thus, the use of hop-by-hop error
correction is not sufficient for many applications’ requirements, but leads to
an increasingly complex network and slower transmission.
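
The following toy sketch (not TCP itself, just a stop-and-wait scheme over
UDP sockets) shows the end-to-end idea: the sender keeps retransmitting
until the receiving end, not any intermediate hop, acknowledges the sequence
number, so a loss anywhere along the path – or on the end hosts – is
recovered. All names and parameters here are illustrative.

```python
import socket

def send_reliably(sock: socket.socket, destination, payload: bytes, seq: int,
                  timeout: float = 0.5, retries: int = 5) -> None:
    """Toy stop-and-wait: retransmit until the peer acknowledges seq."""
    sock.settimeout(timeout)
    message = seq.to_bytes(4, "big") + payload
    for _ in range(retries):
        sock.sendto(message, destination)
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return  # end-to-end confirmation: the data really arrived
        except socket.timeout:
            continue  # lost data or lost ACK -- retransmit either way
    raise ConnectionError(f"no acknowledgement for segment {seq}")
```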
The assumption, used above, of accurate transmission is not necessarily
valid in wireless networks. Here, local error recovery over the wireless hop
may still be needed. Indeed, in this situation, a local error recovery scheme
might provide additional efficiency by preventing excess TCP re-transmis-
sions across the whole network. The wireless network need only provide
basic error recovery mechanisms to supplement any that might be used by
the end terminals. However, practice has shown that this can be very
difficult to implement well. Inefficiencies often occur as the two error-
correction schemes (TCP and the local mechanism) may interact in unpre-
dictable or unfortunate ways. For example, the long time delays on wireless
networks, which become even worse if good error correction techniques
are used, adversely affect TCP throughput. This exemplifies the problems
that can be caused if any piece of functionality is performed more than
once.
Other functions that are the responsibility of the end terminals include the
ordering of data packets (by giving them sequence numbers) and the
scheduling of data delivery to the application. One of the most important func-
tions that should be provided by the end terminals is that of security. For
example, if two end points want to hide their data from other users, the most
efficient and secure way to do this is to run a protocol between them. One
such protocol is IPsec, which encrypts the packet payload so that it cannot
be ‘opened’ by any of the routers, or indeed anyone pretending to be a
router. This exemplifies another general principle, that the network cannot
assume that it can have any knowledge of the protocols being used end to
end, or of the nature of the data being transmitted. The network can therefore
not use such information to give an ‘improved’ service to users. This can
affect, for example, how compression might be used to give more efficient
use of bandwidth over a low-bandwidth wireless link.
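
IPsec itself is implemented inside the operating system's network stack, so
as a stand-in this sketch uses the Fernet recipe from the third-party
cryptography package to illustrate the end-to-end property: only the two
ends hold the key, so routers forward the payload as opaque bytes and
cannot 'open' it.

```python
from cryptography.fernet import Fernet

# The key is shared only by the two communicating ends
# (IPsec would negotiate keys; here we simply generate one).
key = Fernet.generate_key()
sender, receiver = Fernet(key), Fernet(key)

ciphertext = sender.encrypt(b"meet at 10:00")
# Routers forward the ciphertext as an opaque payload; they cannot read it,
# and any tampering is detected when the receiver decrypts.
print(receiver.decrypt(ciphertext))  # b'meet at 10:00'
```

This also illustrates why the network cannot rely on inspecting end-to-end
data: to a router, the payload above is indistinguishable from noise.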
This end-to-end principle is often reduced to the concept of the ‘stupid’
network, as opposed to the telecommunications concept of an ‘intelligent
network’. The end-to-end principle means that the basic network deals
only with IP packets and is independent of the transport layer protocol –
allowing a much greater flexibility. This principle does assume that hosts
have sufficient capabilities to perform these functions. This can translate
into a requirement for a certain level of processing and memory capability
for the host, which may in turn impact upon the weight and battery
requirements of a mobile node. However, technology advances over the
last few years have made this a much less significant issue than in the
past.
3.4.3 Layering and Modularity
One of the key design principles is that, in order to be readily implementa-
ble, solutions should be simple and easy to understand. One way to achieve
this is through layering. This is a structured way of dividing the functionality
in order to remove or hide complexity. Each layer offers specific services to
upper layers, whilst hiding the implementation detail from the higher layers.
Ideally, there should be a clean interface between each layer. This simplifies
programming and makes it easier to change any individual layer implemen-
tation. For communications, a protocol exists that allows a specific layer on
one machine to communicate to the peer layer on another machine. Each
protocol belongs to one layer. Thus, the IP layer on one machine commu-
nicates to the peer IP layer on another machine to provide a packet delivery
service. This is used by the upper transport layer in order to provide reliable
packet delivery by adding the error recovery functions. Extending this
concept in the orthogonal direction, we get the concept of modularity.
Any protocol performs one well-defined function (at a specific layer).
These modular protocols can then be reused. Ideally protocols should be
reused wherever possible, and functionality should not be duplicated. The
problems of functionality duplication were indicated in the previous section
when interactions occur between similar functionality provided at different
layers. Avoiding duplication also makes it easier for users and programmers
to understand the system. The layered model of the Internet shown in Figure
3.7 is basically a representation of the current state of the network – it is a
model that is designed to describe the solution. The next few sections look
briefly at the role of each of the layers.
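
As a rough sketch of this layering, the Python functions below show each
layer adding its own header and calling only the layer beneath it. The
header contents are stand-ins, not real protocol encodings.

```python
def transport_send(data: bytes, src_port: int, dst_port: int) -> bytes:
    # Transport layer: adds ports (and, for TCP, sequencing and recovery),
    # then hands the segment down without knowing how it will be carried.
    header = src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big")
    return ip_send(header + data)

def ip_send(segment: bytes) -> bytes:
    # IP layer: adds global addressing; it knows nothing of the ports above
    # or the cables below. b"\x45" stands in for a real 20-byte IPv4 header.
    return link_send(b"\x45" + segment)

def link_send(packet: bytes) -> bytes:
    # Link layer: frames the packet for one hop (e.g. Ethernet MAC addresses).
    return b"\xaa\xbb" + packet

frame = transport_send(b"hello", 1024, 80)  # each layer wrapped the last
```

Because each function sees only the layer directly below, any one layer's
implementation can be swapped without disturbing the others – the clean
interface the text describes.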
Physical Layer
This is the layer at which physical bits are transferred around the world. The
physical medium could be an optical fibre using light pulses, or a cable
where a certain voltage would indicate a 0 or a 1 bit.
Link Layer
This layer puts the IP packets on to the physical media. Ethernet is one
example of a link layer. This enables computers sharing a physical cable
to deliver frames across the cable. Ethernet essentially manages the access
on to the physical media (it is responsible for Media Access Control, MAC).
All Ethernet modules will listen to the cable to ensure that they only transmit
packets when nobody else is transmitting. Not all packets entering an Ether-
net module will go to the IP module on a computer. For example, some
packets may go to the ARP, Address Resolution Protocol, module that main-
tains a mapping between IP addresses and Ethernet addresses. IP addresses
may change regularly, for example when a computer is moved to a different
building, whilst the Ethernet address is hardwired into the Ethernet card at
manufacture.
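
A toy illustration of the ARP module's job, with the broadcast step stubbed
out and the cached MAC address invented. The point is the two lifetimes:
the IP-to-Ethernet mapping can change, while the card's address cannot.

```python
# Toy ARP-style cache: maps an IP address to an Ethernet (MAC) address.
arp_cache: dict[str, str] = {}

def resolve(ip: str) -> str:
    """Return the MAC address to use for a given IP on the local cable."""
    if ip not in arp_cache:
        # A real ARP module would broadcast "who has <ip>?" on the cable
        # and cache the reply; here the answer is simply stubbed in.
        arp_cache[ip] = "00:1a:2b:3c:4d:5e"  # hypothetical reply
    return arp_cache[ip]

print(resolve("192.0.2.1"))  # this entry may change; the card's MAC does not
```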
IP Layer
This layer is responsible for routing packets to their destination. This may
mean choosing the correct output port, such as the local Ethernet, or, for
data that have reached the destination computer, choosing a local 'port'
such as that representing the TCP or UDP transport layer module. It makes
no guarantees that the data will be delivered correctly, in order, or even at all.
It is even possible that duplicate packets are transmitted. It is this layer that is
responsible for the inter-connectivity of the Internet.
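
A sketch of the per-packet forwarding decision using Python's ipaddress
module: among the table entries whose prefix contains the destination, the
most specific one wins (longest-prefix match). The table itself is invented
for illustration.

```python
import ipaddress

# Illustrative forwarding table: prefix -> output port.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "ppp0",       # default route
    ipaddress.ip_network("192.0.2.0/24"): "eth0",    # local Ethernet
    ipaddress.ip_network("192.0.2.128/25"): "eth1",  # more specific subnet
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("192.0.2.200"))  # eth1 -- the /25 beats the /24
```

Each packet is looked up independently, which is why consecutive packets
may take different routes when the table changes.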
Transport Layer
This layer improves upon the IP layer by adding commonly required func-
tionality. It is separate from the IP layer as not all applications require the
same functionality. Key protocols at this layer are TCP, the Transmission
Control Protocol, and UDP, the User Datagram Protocol.
Figure 3.7 An example of an IP protocol stack on a computer. Specific protocols provide
specific functionality in any particular layer. The IP layer provides the connectivity across
many different network types.
