PCI Express* Architecture
Still Pushing the Limits of I/O Performance
PCI Express* (PCIe*) Architecture again leaps beyond I/O performance boundaries with PCI Express* 3.0. PCIe* 3.0 doubles the maximum data rate of its predecessor, PCIe* 2.0, with transfer rates up to 8 GT/s, yet it maintains backwards compatibility with previous generations. This leap in transfer speed delivers greater performance to developers of PC interconnects, graphics adapters, and chip-level communications, among other applications of this ubiquitous technology.
What is PCI Express*?
PCI Express* (PCIe*) is a standards-based, point-to-point, serial interconnect used throughout the computing and embedded devices industries. Introduced in 2004, PCIe* is managed by the PCI-SIG. PCIe* is capable of the following:
- Scalable, simultaneous, bi-directional transfers using one to 32 lanes of differential-pair interconnects
- Grouping lanes to achieve high transfer rates, such as with graphics adapters
- Up to 32 GB/s of bi-directional bandwidth on an x16 connector with PCI Express* 3.0
- Low-overhead, low-latency data transfers
- Both host-directed and peer-to-peer transfers
- Emulation of network environments by sending data between two points without host-chip routing
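The bandwidth figures above follow from simple arithmetic: raw transfer rate × encoding efficiency × lane count. A minimal sketch (the function name and structure are illustrative, not part of any PCIe* API) showing how the "up to 32 GB/s" figure for an x16 PCIe* 3.0 link is derived:

```python
def pcie_bandwidth_gbps(gt_per_s, payload_bits, encoded_bits, lanes):
    """Effective payload bandwidth in GB/s, per direction.

    gt_per_s: raw transfer rate in gigatransfers per second
    payload_bits / encoded_bits: line-encoding ratio
    lanes: link width (x1 .. x32)
    """
    return gt_per_s * (payload_bits / encoded_bits) * lanes / 8  # 8 bits per byte

# PCIe* 3.0: 8 GT/s with 128b/130b encoding
gen3_x16 = pcie_bandwidth_gbps(8, 128, 130, 16)
print(f"PCIe 3.0 x16, per direction:   {gen3_x16:.2f} GB/s")      # ~15.75 GB/s
print(f"PCIe 3.0 x16, bi-directional:  {2 * gen3_x16:.2f} GB/s")  # ~31.5 GB/s, i.e. 'up to 32 GB/s'

# PCIe* 2.0: 5 GT/s with 8b/10b encoding -- half the effective rate per lane
gen2_x16 = pcie_bandwidth_gbps(5, 8, 10, 16)
print(f"PCIe 2.0 x16, per direction:   {gen2_x16:.2f} GB/s")      # 8.00 GB/s
```

The jump from 8b/10b to 128b/130b encoding is why PCIe* 3.0 doubles effective bandwidth even though the raw rate rises only from 5 GT/s to 8 GT/s.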
Why a Serial Interconnect?
PCI Express* (PCIe*) is the interconnect of choice because of its low cost, high performance, and flexibility. Maintaining software compatibility with the previous PCI* interconnect, PCIe* enables many benefits not possible with PCI, including:
- Scalable performance by grouping lanes together (one to 32)
- Lower cost, simpler implementations with its low pin counts
- Improved power management capabilities
- Ubiquity and flexibility – it is used across a wide range of applications
How PCI Express* Works
A PCI Express* (PCIe*) ‘link’ comprises one to 32 lanes. Links are expressed as x1, x2, x4, x8, x16, etc. The link is negotiated and configured on power up. More lanes deliver faster transfer rates; most graphics adapters use at least 16 lanes in today’s PCs. The clock is embedded in the data stream, allowing excellent frequency scaling for scalable performance.
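On Linux, the width and speed that each link negotiated at power up can be read from the standard sysfs attributes `current_link_width` and `current_link_speed`. A minimal sketch (the helper names are illustrative; devices without an active link are simply skipped):

```python
import glob
import os

def parse_link_speed(text):
    """Parse a sysfs link-speed string such as '8.0 GT/s' into a float."""
    return float(text.strip().split()[0])

def pcie_links():
    """Yield (device, lane_count, speed_gt_s) for PCIe devices reporting link status."""
    for dev in glob.glob("/sys/bus/pci/devices/*"):
        width_path = os.path.join(dev, "current_link_width")
        speed_path = os.path.join(dev, "current_link_speed")
        if not (os.path.exists(width_path) and os.path.exists(speed_path)):
            continue
        try:
            with open(width_path) as w, open(speed_path) as s:
                yield os.path.basename(dev), int(w.read()), parse_link_speed(s.read())
        except (OSError, ValueError):
            continue  # link down or attribute unreadable

for name, lanes, speed in pcie_links():
    print(f"{name}: x{lanes} at {speed} GT/s")
```

A PCIe* 3.0 device that trained at full width would report `x16 at 8.0 GT/s`; a narrower or slower result usually means the link negotiated down to match the other end.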
The Holding Power of PCI Express*
PCIe* 3.0 continues to scale with the demands of computing applications and the delivery of higher-performance processors. It remains central in both systems and devices, including servers, desktops, laptops, embedded solutions, add-on cards, and chipsets. Its low latency makes it ideal as an interconnect throughout the clustered systems that make up the internet cloud.
Designing with PCI Express*
Intel works with industry leaders to ensure the PCI Express* standard is based on a robust specification, maintaining compatibility across a multitude of products for years to come. Intel offers extensive resources to developers working with PCI Express* designs. Find out more about how Intel can help you design, develop, and deploy your PCI Express* designs faster.