ATM was intended to provide a single unified networking standard that could support both synchronous channel networking (PDH, SDH) and packet-based networking (IP, Frame relay, etc), whilst supporting multiple levels of quality of service for packet traffic.
ATM sought to resolve the conflict between circuit-switched networks and packet-switched networks by mapping both bitstreams and packet-streams onto a stream of small fixed-size 'cells' tagged with virtual circuit identifiers. The cells are typically sent on demand within a synchronous time-slot pattern in a synchronous bit-stream: what is asynchronous here is the sending of the cells, not the low-level bitstream that carries them.
In its original conception, ATM was to be the enabling technology of the 'Broadband Integrated Services Digital Network' (B-ISDN) that would replace the existing PSTN. The full suite of ATM standards provides definitions for layer 1 (physical connections), layer 2 (data link layer) and layer 3 (network layer) of the classical OSI seven-layer networking model. The ATM standards drew on concepts from the telecommunications community rather than the computer networking community. For this reason, extensive provision was made for integrating most existing telco technologies and conventions into ATM.
As a result, ATM became a highly complex technology, with features intended for applications ranging from global telco networks to private local area computer networks.
Numerous telcos have implemented wide-area ATM networks, and many ADSL implementations utilise ATM. However, ATM has failed to gain wide use as a LAN technology, and its great complexity has held back its full deployment as the single integrating network technology in the way its inventors originally intended.
(Many people, particularly in the Internet protocol-design community, considered this vision chimerical in any case. Their argument went something like this: we know that there will always be both brand-new and obsolescent link-layer technologies, particularly in the LAN area, and it is fair to assume that not all of them will fit neatly into the SDH model that ATM was designed for. Therefore, some sort of protocol is needed to provide a unifying layer over both ATM and non-ATM link layers, and ATM itself cannot fill that role. Conveniently, we have this protocol called "IP" which already does that. Ergo, there is no point in implementing ATM at the network layer.)
Most of the good ideas from ATM migrated into MPLS, a generic layer 2 packet switching protocol. ATM remains useful and widely deployed as a multiplexing layer in DSL networks, where its compromises fit the needs of the application well.
ATM will probably also survive for some time in higher-speed interconnects where carriers have already committed themselves to existing ATM deployments as a way of combining PDH/SDH traffic and packet-switched traffic into a single infrastructure.
The motivation for the use of small data cells was the reduction of jitter in the multiplexing of data streams.
At the time ATM was designed, 155 Mbits/s SDH (135 Mbits/s payload) was considered a fast optical network link, and many PDH links in the digital network were considerably slower, ranging from 1.544 Mbits/s to 45 Mbits/s in the USA (2 Mbits/s to 34 Mbits/s in Europe).
At this rate, a typical full-length 1500 byte (12000 bit) data packet would take 89 µs to transmit. On a lower-speed link, such as a 1.544 Mbits/s T1 link, the same packet would take 7.8 milliseconds.
Now consider a speech signal reduced to packets and forced to share a link with bursty data traffic. No matter how small the speech packets could be made, they would always encounter full-size data packets, and under normal queuing conditions might experience worst-case queuing delays of several times the 7.8 ms figure, in addition to the packetisation delay of the shorter speech packet. This was clearly unacceptable for speech traffic: even if the jitter were buffered out, the delay involved would be such that echo cancellers would be required even in local networks, which was considered too expensive at the time.
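To put numbers on the queuing problem, here is a short Python sketch (illustrative arithmetic only) comparing serialization delays at the link rates discussed above:

```python
# Illustrative arithmetic only: serialization delay of a full-size data
# packet vs. a single 53-byte ATM cell, at the link rates discussed above.

def serialization_delay(size_bytes: int, rate_bps: float) -> float:
    """Seconds needed to clock `size_bytes` onto a link running at `rate_bps`."""
    return size_bytes * 8 / rate_bps

T1_RATE = 1.544e6        # T1 line rate, bits/s
SDH_PAYLOAD = 135e6      # usable payload rate of a 155 Mbits/s SDH link, bits/s

packet_t1 = serialization_delay(1500, T1_RATE)       # ~7.8 ms
cell_t1 = serialization_delay(53, T1_RATE)           # ~0.27 ms
packet_sdh = serialization_delay(1500, SDH_PAYLOAD)  # ~89 us

print(f"1500-byte packet on T1:  {packet_t1 * 1e3:.2f} ms")
print(f"53-byte cell on T1:      {cell_t1 * 1e3:.2f} ms")
print(f"1500-byte packet on SDH: {packet_sdh * 1e6:.0f} us")
# A voice cell queued behind one full-size packet waits ~28x longer than
# one queued behind another cell.
print(f"worst-case wait ratio:   {packet_t1 / cell_t1:.1f}")
```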
The ATM solution was to break all packets, data streams and voice streams up into 48-byte chunks, adding a 5-byte routing header to each one so that they could be reassembled later, and to multiplex these 53-byte cells instead of packets. Doing so reduced the worst-case queuing jitter by a factor of almost 30, removing the need for echo cancellers. The rules for segmenting and reassembling packets and streams into cells are known as ATM Adaptation Layers: the two most important are AAL 1, used for streams, and AAL 5, used for most types of packets. Which AAL is in use for a given cell is not encoded in the cell: instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis.
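The segmentation step can be sketched as follows. This is a deliberately simplified illustration: real AALs carry length and CRC information (e.g. in the AAL 5 trailer) rather than relying on bare zero-padding and an out-of-band length as done here.

```python
# A deliberately simplified sketch of segmentation and reassembly: split a
# packet into 48-byte payloads and prepend a 5-byte header to each. Real
# AALs carry length and CRC information (e.g. in the AAL 5 trailer) instead
# of the bare zero-padding used here.

CELL_PAYLOAD = 48
HEADER = b"\x00" * 5   # placeholder 5-byte header (real headers are bit-packed)

def segment(packet: bytes) -> list[bytes]:
    """Split `packet` into 53-byte cells, zero-padding the final payload."""
    pad = (-len(packet)) % CELL_PAYLOAD
    padded = packet + b"\x00" * pad
    return [HEADER + padded[i:i + CELL_PAYLOAD]
            for i in range(0, len(padded), CELL_PAYLOAD)]

def reassemble(cells: list[bytes], original_length: int) -> bytes:
    """Strip headers, concatenate payloads, and trim the padding."""
    return b"".join(cell[5:] for cell in cells)[:original_length]

data = bytes(range(256)) * 5 + b"tail"   # a 1284-byte example packet
cells = segment(data)
assert len(cells) == 27                  # ceil(1284 / 48) cells
assert reassemble(cells, len(data)) == data
```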
Since then, networks have become much faster. Now (2001) a 1500 byte (12000 bit) full-size Ethernet packet will take only 1.2 µs to transmit on a 10 Gbits/s optical network, removing the need for small cells to reduce jitter, and some consider that this removes the need for ATM in the network backbone.
On slow links (2 Mbit/s and below) ATM still makes sense, and this is why so many ADSL systems use ATM as an intermediate layer between the physical link layer and a Layer 2 protocol like PPP or Ethernet.
ATM is a channel-based transport layer. This is encompassed in the concept of Virtual Paths (VPs) and Virtual Circuits (VCs). Every ATM cell has an 8-bit Virtual Path Identifier (VPI) and a 16-bit Virtual Circuit Identifier (VCI) pair defined in its header. As cells traverse an ATM network, switching is achieved by changing the VPI/VCI values. Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is (unlike IP, where any given packet could reach its destination by a different route from the preceding and following packets).
The use of virtual circuits also has the advantage of being able to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, n*64 channels, IP, SNA etc.) to share a common ATM connection without interfering with one another.
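The hop-by-hop VPI/VCI swapping described above can be sketched as a simple lookup table; the table entries here are purely illustrative:

```python
# Sketch of hop-by-hop label swapping in an ATM switch: an incoming
# (port, VPI, VCI) triple is looked up and rewritten to an outgoing one.
# Table entries are purely illustrative, not from any real network.

switch_table = {
    # (in_port, vpi, vci): (out_port, new_vpi, new_vci)
    (1, 0, 100): (3, 5, 42),
    (2, 1, 200): (3, 5, 43),
}

def forward(in_port: int, vpi: int, vci: int) -> tuple[int, int, int]:
    """Return (out_port, vpi, vci) for a cell, rewriting its labels."""
    return switch_table[(in_port, vpi, vci)]

# Every cell of a given connection hits the same table entry, so all of
# them take the same path -- unlike datagram forwarding in IP.
assert forward(1, 0, 100) == (3, 5, 42)
```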
Another key ATM concept is that of the traffic contract. When an ATM circuit is set up each switch is informed of the traffic class of the connection.
ATM traffic contracts form part of the mechanism by which "Quality of Service" (QoS) is ensured. There are three basic types (and several variants), each of which has a set of parameters describing the connection.
Traffic contracts are usually maintained by the use of "Shaping", a combination of queuing and marking of cells, and enforced by "Policing".
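Policing against a traffic contract is commonly described in terms of the Generic Cell Rate Algorithm (GCRA); a minimal sketch of its "virtual scheduling" form follows, with illustrative parameters:

```python
# A minimal sketch of the Generic Cell Rate Algorithm (GCRA) in its
# "virtual scheduling" form, the usual model for ATM policing. T is the
# nominal cell inter-arrival time (1/rate) and tau the tolerance; the
# parameter values below are illustrative.

class Gcra:
    def __init__(self, increment: float, tolerance: float):
        self.T = increment      # expected spacing between conforming cells
        self.tau = tolerance    # allowed "clumping" of cells in time
        self.tat = 0.0          # theoretical arrival time of the next cell

    def conforms(self, arrival: float) -> bool:
        """True if a cell arriving at time `arrival` conforms to the contract."""
        if arrival < self.tat - self.tau:
            return False                            # too early: non-conforming
        self.tat = max(arrival, self.tat) + self.T  # reschedule the next cell
        return True

police = Gcra(increment=1.0, tolerance=0.5)  # at most ~1 cell per time unit
print([police.conforms(t) for t in (0.0, 0.6, 2.0, 2.1, 2.2)])
# -> [True, True, True, False, False]: the last two cells arrive too soon,
# so the network could drop them or set their CLP bit.
```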
An ATM cell consists of a 5 byte header and a 48 byte payload. The payload size of 48 bytes was a compromise between the needs of voice telephony and packet networks, obtained by a simple averaging of the US proposal of 64 bytes and European proposal of 32, said by some to be motivated by a European desire not to need echo-cancellers on national trunks.
ATM defines two different cell formats: NNI (Network-network interface) and UNI (User-network interface). Most ATM links use UNI cell format.
ASCII diagram of a UNI ATM cell

GFC = Generic Flow Control (4 bits)
VPI = Virtual Path Identifier (8 bits)
VCI = Virtual Circuit Identifier (16 bits)
PT  = Payload Type (3 bits)
CLP = Cell Loss Priority (1 bit)
HEC = Header Error Control (checksum of header only)
8 4 0
- - - - - - - - - - - - - -
| GFC | VPI |
- - - - - - - - - - - - - -
| VPI | VCI |
- - - - - - - - - - - - - -
| VCI |
- - - - - - - - - - - - - -
| VCI | PT |CLP|
- - - - - - - - - - - - - -
| HEC |
- - - - - - - - - - - - - -
| |
| 48 bytes of payload |
| |
...
| |
| |
- - - - - - - - - - - - - -
In a UNI cell the GFC field is reserved for an (as yet undefined) local flow control/submultiplexing system between network and user. All four GFC bits must be zero by default.
The NNI cell format is almost identical to the UNI format, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2^12 (4096) VPs of up to almost 2^16 (65536) VCs each (in practice some of the VP and VC numbers are reserved).
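A sketch of how the UNI header fields pack into the 5 header bytes, including the HEC byte (an 8-bit CRC with generator x^8 + x^2 + x + 1 over the first four header bytes, XORed with 0x55, per ITU-T I.432); field values are illustrative:

```python
# Sketch of how the four UNI header bytes are packed and the HEC computed.
# The HEC is an 8-bit CRC (generator x^8 + x^2 + x + 1) over the first four
# header bytes, XORed with 0x55, per ITU-T I.432. Field values below are
# illustrative.

def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 with the given generator polynomial, zero initial value."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
    """Pack a 5-byte UNI cell header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8)."""
    head = bytes([
        (gfc << 4) | (vpi >> 4),                # GFC | VPI bits 7..4
        ((vpi & 0x0F) << 4) | (vci >> 12),      # VPI bits 3..0 | VCI bits 15..12
        (vci >> 4) & 0xFF,                      # VCI bits 11..4
        ((vci & 0x0F) << 4) | (pt << 1) | clp,  # VCI bits 3..0 | PT | CLP
    ])
    return head + bytes([crc8(head) ^ 0x55])    # append the HEC byte

header = uni_header(gfc=0, vpi=5, vci=42, pt=0, clp=0)
assert len(header) == 5
```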
Most traffic classes also introduce the concept of Cell Delay Variation Tolerance (CDVT), which defines the acceptable "clumping" of cells in time.

Traffic Shaping

Traffic shaping is usually done at the entry point to an ATM network, and attempts to ensure that the cell flow will meet its traffic contract.

Traffic Policing

To maintain network performance, it is possible to police virtual circuits against their traffic contracts. If a circuit exceeds its traffic contract, the network can either drop the cells or mark the Cell Loss Priority (CLP) bit, identifying the cell as discardable further down the line. Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic, as discarding a single cell invalidates the whole packet anyway. As a result, schemes such as Partial Packet Discard (PPD) and Early Packet Discard (EPD) have been created that discard a whole series of cells until the next frame starts. This reduces the number of redundant cells in the network, saving bandwidth for full frames. EPD and PPD work with AAL 5 connections, as they use the end-of-frame bit to detect the end of packets.

Structure of an ATM Cell

Fields not mentioned above:

The PT field is used to designate various special kinds of cells for Operation and Management (OAM) purposes, and to delineate packet boundaries in some AALs.

ATM-specific signalling and routing protocols

Q.2931, etc.

To be written...

Layer 3 networking over ATM switched virtual circuits

NSAP addressing, ATMARP, IP over ATM SVCs, etc...

To be written...

See also:

External links:

Introduction
ATM Concepts
Why cells?
Why virtual circuits?
Using cells and virtual circuits for traffic engineering