InfiniBand is a switched fabric communications link used in high-performance computing and enterprise data centers.
Its features include high throughput, low latency, quality of service, and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high-performance I/O nodes such as storage devices.
Like Fibre Channel, PCI Express, Serial ATA, and many other modern interconnects, InfiniBand offers point-to-point bidirectional serial links intended for the connection of processors with high-speed peripherals such as disks. In addition to these point-to-point capabilities, InfiniBand also offers multicast operations. It supports several signaling rates and, as with PCI Express, links can be bonded together for additional throughput.
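As a concrete illustration of how link width (lane bonding) and per-lane speed are exposed to software, the sketch below uses libibverbs, the standard user-space verbs library for InfiniBand on Linux, to enumerate local adapters and print the active width and speed of port 1. The choice of port number and the output format are illustrative assumptions, not part of the InfiniBand specification.

```c
/* Minimal sketch: list local InfiniBand devices and report the active
 * link width and per-lane speed of port 1.
 * Compile with: gcc ib_ports.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(dev_list[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* active_width is a bitmask: 1 = 1X, 2 = 4X, 4 = 8X, 8 = 12X */
            /* active_speed encodes the per-lane signaling rate (SDR, DDR, ...) */
            printf("%s: width=0x%x speed=0x%x state=%d\n",
                   ibv_get_device_name(dev_list[i]),
                   port.active_width, port.active_speed, port.state);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(dev_list);
    return 0;
}
```

The reported width (1X, 4X, 12X) and per-lane speed correspond to the rows and columns of the throughput table below.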
Effective theoretical throughput in different configurations (the actual data rate, not the signaling rate)
|     | SDR | DDR | QDR | FDR | EDR | HDR | NDR |
|-----|-----|-----|-----|-----|-----|-----|-----|
| 1X  | 2 Gbit/s | 4 Gbit/s | 8 Gbit/s | 14 Gbit/s | 25 Gbit/s | 50 Gbit/s | 100 Gbit/s |
| 4X  | 8 Gbit/s | 16 Gbit/s | 32 Gbit/s | 56 Gbit/s | 100 Gbit/s | 200 Gbit/s | 400 Gbit/s |
| 12X | 24 Gbit/s | 48 Gbit/s | 96 Gbit/s | 168 Gbit/s | 300 Gbit/s | 600 Gbit/s | 1200 Gbit/s |
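The figures in the table follow from simple arithmetic: the per-lane signaling rate is reduced by the line-encoding overhead (8b/10b for SDR, DDR, and QDR; 64b/66b for FDR and EDR), and the result is multiplied by the number of bonded lanes (1, 4, or 12). The sketch below is a rough illustration of that calculation rather than a normative one: FDR works out to about 13.6 Gbit/s per lane, commonly rounded to 14 as in the table, and HDR and NDR use PAM4 signaling and forward error correction whose overhead is not captured by a single encoding ratio, so their published effective per-lane rates (50 and 100 Gbit/s) are used directly.

```c
/* Sketch: derive effective InfiniBand throughput per speed grade and lane count. */
#include <stdio.h>

struct speed_grade {
    const char *name;
    double per_lane_gbps;   /* effective data rate per lane, Gbit/s */
};

int main(void)
{
    const struct speed_grade grades[] = {
        { "SDR",  2.5     * 8.0 / 10.0 },   /* 2.5 Gbaud,   8b/10b  ->  2      */
        { "DDR",  5.0     * 8.0 / 10.0 },   /* 5.0 Gbaud,   8b/10b  ->  4      */
        { "QDR", 10.0     * 8.0 / 10.0 },   /* 10 Gbaud,    8b/10b  ->  8      */
        { "FDR", 14.0625  * 64.0 / 66.0 },  /* 14.06 Gbaud, 64b/66b -> ~13.64  */
        { "EDR", 25.78125 * 64.0 / 66.0 },  /* 25.78 Gbaud, 64b/66b -> 25      */
        { "HDR",  50.0 },                   /* published effective per-lane rate */
        { "NDR", 100.0 },                   /* published effective per-lane rate */
    };
    const int widths[] = { 1, 4, 12 };      /* 1X, 4X, 12X link widths */

    for (size_t w = 0; w < sizeof widths / sizeof widths[0]; w++) {
        printf("%2dX:", widths[w]);
        for (size_t g = 0; g < sizeof grades / sizeof grades[0]; g++)
            printf("  %s %.4g Gbit/s", grades[g].name,
                   grades[g].per_lane_gbps * widths[w]);
        printf("\n");
    }
    return 0;
}
```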
Parallel file systems and storage platforms commonly deployed over InfiniBand include Lustre, IBM GPFS, HP Ibrix FS, and DDN SFA10000. Typical deployment scenarios include HPC, the HPC Starter Pack for Production and Post-production, cloud computing, and virtualization.