InfiniBand uses a bidirectional serial bus for low cost and low latency. Nevertheless it is very fast, carrying 10 Gbit/s in each direction. InfiniBand uses a switched fabric topology, so several devices can share the network at the same time (as opposed to a bus topology). Data is transmitted in packets that, taken together, form a message. A message can be a remote direct memory access (RDMA) read or write operation, a channel send or receive, a transaction-based operation (that can be reversed), or a multicast transmission.
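As a concrete sketch of how such a message is posted in practice, the fragment below builds a single RDMA-write work request using the libibverbs API, the usual user-space interface to InfiniBand channel adapters. The queue pair, memory registration, remote address, and rkey are assumed to have been set up and exchanged beforehand; post_rdma_write and its parameters are illustrative names, not part of any standard.

```c
#include <stdint.h>
#include <infiniband/verbs.h>

/* Post one RDMA-write message on an already-connected queue pair.
 * The adapter segments the message into packets on the wire.
 * qp, mr, remote_addr and rkey come from earlier (assumed) setup. */
static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                           void *local_buf, uint32_t len,
                           uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* local source buffer        */
        .length = len,
        .lkey   = mr->lkey,               /* key returned by ibv_reg_mr */
    };

    struct ibv_send_wr wr = {
        .sg_list             = &sge,
        .num_sge             = 1,
        .opcode              = IBV_WR_RDMA_WRITE, /* direct write into remote memory */
        .send_flags          = IBV_SEND_SIGNALED, /* request a completion entry      */
        .wr.rdma.remote_addr = remote_addr,       /* target address on the peer      */
        .wr.rdma.rkey        = rkey,              /* peer's memory-registration key  */
    };

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

A channel send would differ only in the opcode (IBV_WR_SEND) and in not supplying a remote address, since the receiving side decides where the incoming data lands.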
As in the channel model used by most mainframes, all transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service.
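A minimal sketch of how software sees these adapters: the program below simply enumerates the host channel adapters visible on a machine through libibverbs.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devices = ibv_get_device_list(&num);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    /* Each entry corresponds to one HCA installed in the host. */
    for (int i = 0; i < num; i++)
        printf("HCA %d: %s\n", i, ibv_get_device_name(devices[i]));

    ibv_free_device_list(devices);
    return 0;
}
```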
The primary aim of InfiniBand appears to be to connect CPUs and their high-speed devices into clusters for "back-office" applications. In this role it will replace PCI, Fibre Channel, and various machine-interconnect networks such as Ethernet; instead, all of the CPUs and peripherals will be connected into a single InfiniBand fabric. This has a number of advantages in addition to greater speed, not the least of which is that normally "hidden" devices like PCI cards can be used by any device on the fabric. In theory, this should make the construction of clusters much easier and potentially less expensive, because more devices can be shared.