PCI Express

  • PCI Express, officially abbreviated as PCIe, is a high-speed serial computer expansion

  • bus standard designed to replace the older PCI, PCI-X, and AGP bus standards. PCIe has

  • numerous improvements over the aforementioned bus standards, including higher maximum system

  • bus throughput, lower I/O pin count and smaller physical footprint, better performance-scaling

  • for bus devices, a more detailed error detection and reporting mechanism, and native hot-plug

  • functionality. More recent revisions of the PCIe standard support hardware I/O virtualization.

  • The PCI Express electrical interface is also used in a variety of other standards, most

  • notably in ExpressCard, which is a laptop expansion card interface, and in SATA Express, which

  • is a computer storage interface. Format specifications are maintained and developed

  • by the PCI-SIG, a group of more than 900 companies that also maintain the conventional PCI specifications.

  • PCIe 3.0 is the latest standard for expansion cards that is in production and available

  • on mainstream personal computers.

  • Architecture

  • Conceptually, the PCIe bus is like a high-speed serial replacement of the older PCI/PCI-X

  • bus, an interconnect bus using shared address/data lines.

  • A key difference between the PCIe bus and the older PCI is the bus topology. PCI uses a

  • shared parallel bus architecture, where the PCI host and all devices share a common set

  • of address, data, and control lines. In contrast, PCIe is based on point-to-point topology, with

  • separate serial links connecting every device to the root complex. Due to its shared bus

  • topology, access to the older PCI bus is arbitrated, and limited to one master at a time, in a

  • single direction. Furthermore, the older PCI clocking scheme limits the bus clock to the

  • slowest peripheral on the bus. In contrast, a PCIe bus link supports full-duplex communication

  • between any two endpoints, with no inherent limitation on concurrent access across multiple

  • endpoints. In terms of bus protocol, PCIe communication

  • is encapsulated in packets. The work of packetizing and de-packetizing data and status-message

  • traffic is handled by the transaction layer of the PCIe port. Radical differences in electrical

  • signaling and bus protocol require the use of a different mechanical form factor and

  • expansion connectors; PCI slots and PCIe slots are not interchangeable. At the software level,

  • PCIe preserves backward compatibility with PCI; legacy PCI system software can detect

  • and configure newer PCIe devices without explicit support for the PCIe standard, though PCIe's

  • new features are inaccessible. The PCIe link between two devices can consist

  • of anywhere from 1 to 32 lanes. In a multi-lane link, the packet data is striped across lanes,

  • and peak data-throughput scales with the overall link width. The lane count is automatically

  • negotiated during device initialization, and can be restricted by either endpoint. For

  • example, a single-lane PCIe card can be inserted into a multi-lane slot, and the initialization

  • cycle auto-negotiates the highest mutually supported lane count. The link can also dynamically

  • down-configure itself to use fewer lanes, providing some measure of failure tolerance

  • in the presence of bad or unreliable lanes. The PCIe standard defines slots and connectors

  • for multiple widths: ×1, ×4, ×8, ×16, ×32. This allows the PCIe bus to serve both cost-sensitive

  • applications where high throughput is not needed and performance-critical applications

  • such as 3D graphics, networking, and enterprise storage.

  • As a point of reference, a PCI-X device and a PCIe device using four lanes at Gen1 speed have

  • roughly the same peak single-direction transfer rate: 1064 MB/s. The PCIe bus has the potential

  • to perform better than the PCI-X bus in cases where multiple devices are transferring data

  • simultaneously, or if communication with the PCIe peripheral is bidirectional.
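The comparison above is straightforward arithmetic. A minimal sketch in Python, assuming the common 64-bit, 133 MHz PCI-X configuration and PCIe Gen1's 2.5 GT/s signaling with 8b/10b encoding:

```python
def pcix_peak_mb_s(bus_width_bits=64, clock_hz=133e6):
    # PCI-X moves one bus-width word per clock across the parallel bus.
    return bus_width_bits / 8 * clock_hz / 1e6

def pcie_gen1_peak_mb_s(lanes=4):
    # Each Gen1 lane signals at 2.5 GT/s; 8b/10b leaves 8 usable bits per 10.
    return lanes * 2.5e9 * (8 / 10) / 8 / 1e6

print(pcix_peak_mb_s())       # ~1064 MB/s for 64-bit PCI-X at 133 MHz
print(pcie_gen1_peak_mb_s())  # 1000 MB/s for PCIe x4 Gen1 -- "roughly the same"
```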

  • Interconnect

  • PCIe devices communicate via a logical connection

  • called an interconnect or link. A link is a point-to-point communication channel between

  • two PCIe ports, allowing both of them to send and receive ordinary PCI requests and interrupts. At the

  • physical level, a link is composed of one or more lanes. Low-speed peripherals use a

  • single-lane link, while a graphics adapter typically uses a much wider 16-lane link.

  • Lane

  • A lane is composed of two differential signaling

  • pairs: one pair for receiving data, the other for transmitting. Thus, each lane is composed

  • of four wires or signal traces. Conceptually, each lane is used as a full-duplex byte stream,

  • transporting data packets in eight-bit 'byte' format, between endpoints of a link, in both

  • directions simultaneously. Physical PCIe slots may contain from one to thirty-two lanes,

  • in powers of two. Lane counts are written with an × prefix, with ×16 being the largest

  • size in common use.

  • Serial bus

  • The bonded serial format was chosen over a traditional parallel bus format due to the

  • latter's inherent limitations, including half-duplex operation, excess signal count, and an inherently

  • lower bandwidth due to timing skew. Timing skew results from separate electrical signals

  • within a parallel interface traveling down different-length conductors, on potentially

  • different printed circuit board layers, at possibly different signal velocities. Despite

  • being transmitted simultaneously as a single word, signals on a parallel interface experience

  • different travel times and arrive at their destinations at different moments. When the

  • interface clock rate is increased to the point where its inverse (the clock period) is shorter than the largest

  • possible time between signal arrivals, the signals no longer arrive with sufficient coincidence

  • to make recovery of the transmitted word possible. Since timing skew over a parallel bus can

  • amount to a few nanoseconds, the resulting bandwidth limitation is in the range of hundreds

  • of megahertz. A serial interface does not exhibit timing

  • skew because there is only one differential signal in each direction within each lane,

  • and there is no external clock signal since clocking information is embedded within the

  • serial signal. As such, typical bandwidth limitations on serial signals are in the multi-gigahertz

  • range. PCIe is just one example of a general trend away from parallel buses to serial interconnects.

  • Other examples include Serial ATA, USB, SAS, FireWire and RapidIO.
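The "hundreds of megahertz" bound above can be checked with back-of-the-envelope arithmetic. The sketch below is illustrative (the skew values are examples, not measurements of any particular bus): once the clock period drops below the worst-case skew, the bus can no longer latch a coherent word.

```python
def max_parallel_clock_hz(worst_case_skew_s):
    # The clock period must stay longer than the worst-case arrival spread,
    # so the usable clock rate is bounded near the inverse of the skew.
    return 1.0 / worst_case_skew_s

for skew_ns in (2, 5, 10):
    limit = max_parallel_clock_hz(skew_ns * 1e-9)
    print(f"{skew_ns} ns of skew -> clock limited to ~{limit / 1e6:.0f} MHz")
# A few nanoseconds of skew caps the bus in the hundreds-of-megahertz range.
```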

  • Multichannel serial design increases flexibility by allocating slow devices to fewer lanes

  • than fast devices.

  • Form factors

  • PCI Express

  • A PCIe card fits into a slot of its physical size or larger, but may not fit into a smaller

  • PCIe slot. Some slots use open-ended sockets to permit physically longer cards and negotiate

  • the best available electrical connection. The number of lanes actually connected to

  • a slot may also be less than the number supported by the physical slot size.

  • An example is a ×16 slot that runs at ×4. This slot will accept any ×1, ×2, ×4, ×8,

  • or ×16 card, but provides only ×4 speed. Its specification may read "×16 (×4 mode)"; the "×size

  • @ ×speed" notation is also common. The advantage is that such a slot can accommodate a larger

  • range of PCIe cards without requiring motherboard hardware to support the full transfer rate.
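The behavior of such a down-wired slot amounts to taking the highest width both sides support. A minimal sketch with illustrative names (the actual training handshake is defined by the specification and is not modeled here):

```python
def negotiate_link_width(card_widths, slot_widths):
    # Both ends advertise supported widths; the link trains to the highest
    # width common to both (e.g. a x16 card in a "x16 @ x4" slot gets x4).
    common = set(card_widths) & set(slot_widths)
    if not common:
        raise ValueError("no mutually supported link width")
    return max(common)

# x16 card plugged into a physical x16 slot wired with only four lanes:
print(negotiate_link_width({1, 2, 4, 8, 16}, {1, 2, 4}))  # -> 4
```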

  • Pinout

  • The following table identifies the conductors

  • on each side of the edge connector on a PCI Express card. The solder side of the printed

  • circuit board is the A side, and the component side is the B side. PRSNT1# and PRSNT2# pins

  • must be slightly shorter than the rest, to ensure that a hot-plugged card is fully inserted.

  • The WAKE# pin uses full voltage to wake the computer, but must be pulled high from the

  • standby power to indicate that the card is wake capable.

  • Power

  • All sizes of ×4 and ×8 PCI Express cards

  • are allowed a maximum power consumption of 25 W. All ×1 cards are initially 10 W;

  • full-height cards may configure themselves as 'high-power' to reach 25 W, while half-height

  • ×1 cards are fixed at 10 W. All sizes of ×16 cards are initially 25 W; like ×1 cards,

  • half-height cards are limited to this number while full-height cards may increase their

  • power after configuration. They can use up to 75 W, though the specification demands

  • that the higher-power configuration be used for graphics cards only, while cards of other

  • purposes are to remain at 25 W. Optional connectors add 75 W or 150 W power

  • for up to 300 W total. Some cards use two 8-pin connectors, but this has not yet been

  • standardized, so such cards must not carry the official PCI Express logo. This

  • configuration would allow 375 W total and will likely be standardized by PCI-SIG with

  • the PCI Express 4.0 standard. The 8-pin PCI Express connector could be mistaken for the

  • EPS12V connector, which is mainly used for powering SMP and multi-core systems.
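The power-budget arithmetic in this section is simple addition; a quick check of the totals quoted above:

```python
SLOT_HIGH_POWER = 75   # W from a x16 slot in the high-power configuration
PIN6 = 75              # W added by an optional 6-pin connector
PIN8 = 150             # W added by an optional 8-pin connector

print(SLOT_HIGH_POWER + PIN6 + PIN8)  # 300 W: the standardized maximum
print(SLOT_HIGH_POWER + 2 * PIN8)     # 375 W: two 8-pin connectors (not yet standardized)
```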

  • PCI Express Mini Card

  • PCI Express Mini Card, based on PCI Express, is a replacement for the Mini PCI form factor.

  • It was developed by the PCI-SIG. The host device supports both PCI Express and USB 2.0 connectivity,

  • and each card may use either standard. Most laptop computers built after 2005 are based

  • on PCI Express.

  • Physical dimensions

  • PCI Express Mini Cards are 30×50.95 mm. There is a 52-pin edge connector, consisting

  • of two staggered rows on a 0.8 mm pitch. Each row has eight contacts, a gap equivalent

  • to four contacts, then a further 18 contacts. A half-length card is also specified, at 30×26.8 mm.

  • Cards have a thickness of 1.0 mm.

  • Electrical interface

  • PCI Express Mini Card edge connectors provide multiple connections and buses:

  • PCIe ×1

  • USB 2.0

  • SMBus

  • Wires to diagnostics LEDs for wireless network status on the computer's chassis

  • SIM card for GSM and WCDMA applications

  • Future extension for another PCIe lane

  • 1.5 and 3.3 volt power

  • Mini PCI Express & mSATA

  • Despite sharing the mini-PCI Express form

  • factor, an mSATA slot is not necessarily electrically compatible with Mini PCI Express. For this

  • reason, only certain notebooks are compatible with mSATA drives. Most compatible systems

  • are based on Intel's Sandy Bridge processor architecture, using the Huron River platform.

  • But for a combined mSATA/mini-PCIe connector, the only prerequisite is a switch that selects

  • whether the slot operates as mSATA or mini-PCIe, which can be implemented on any platform.

  • Notebooks like Lenovo's T-Series, W-Series, and X-Series ThinkPads released in March and April

  • 2011 have support for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s,

  • and the Lenovo IdeaPad Y460/Y560 also support mSATA.

  • Some notebooks use a variant of the PCI Express Mini Card as an SSD. This variant uses the

  • reserved and several non-reserved pins to implement SATA and IDE interface passthrough,

  • keeping only USB, ground lines, and sometimes the core PCIe ×1 bus intact. This makes the

  • 'miniPCIe' flash and solid state drives sold for netbooks largely incompatible with true

  • PCI Express Mini implementations. Also, the typical Asus miniPCIe SSD is 71 mm

  • long, causing the Dell 51 mm model to often be referred to as half length. A true 51 mm

  • Mini PCIe SSD was announced in 2009, with two stacked PCB layers, which allows for higher

  • storage capacity. The announced design preserves the PCIe interface, making it compatible with

  • the standard mini PCIe slot. No working product has yet been developed.

  • Intel has numerous desktop boards with the PCIe ×1 Mini-Card slot, which typically do

  • not support mSATA SSDs. A list of desktop boards that natively support mSATA in the PCIe ×1

  • Mini-Card slot is provided on the Intel Support site.

  • PCI Express External Cabling

  • PCI Express External Cabling specifications

  • were released by the PCI-SIG in February 2007. Standard cables and connectors have been defined

  • for ×1, ×4, ×8, and ×16 link widths, with a transfer rate of 250 MB/s per lane. The

  • PCI-SIG also expects the standard to evolve to reach 500 MB/s, as in PCI Express 2.0.

  • The maximum cable length remains undetermined. An example of the uses of Cabled PCI Express

  • is a metal enclosure, containing a number of PCI slots and PCI-to-ePCIe adapter circuitry.

  • This device would not be possible had it not been for the ePCIe spec.

  • Derivative forms

  • There are several other expansion card types

  • derived from PCIe. These include:

  • Low-height card

  • ExpressCard: successor to the PC Card form factor

  • PCI Express ExpressModule: a hot-pluggable modular form factor defined for servers and workstations

  • XQD card: a PCI Express-based flash card standard by the CompactFlash Association

  • XMC: similar to the CMC/PMC form factor

  • AdvancedTCA: a complement to CompactPCI for larger applications; supports serial-based backplane topologies

  • AMC: a complement to the AdvancedTCA specification; supports processor and I/O modules on ATCA boards

  • FeaturePak: a tiny expansion card format for embedded and small-form-factor applications; it implements two ×1 PCIe links on a high-density connector along with USB, I2C, and up to 100 points of I/O

  • Universal IO: a variant from Super Micro Computer Inc designed for use in low-profile rack-mounted chassis. It has the connector bracket reversed, so it cannot fit in a normal PCI Express socket, but it is pin-compatible and may be inserted if the bracket is removed

  • Thunderbolt: a variant from Intel that combines DisplayPort and PCIe protocols in a form factor compatible with Mini DisplayPort

  • Serial Digital Video Out: some 9xx-series Intel chipsets allow adding an additional output for the integrated video into a PCIe slot

  • M.2

  • M-PCIe: brings PCIe 3.0 to mobile devices over the M-PHY physical layer

  • History and revisions

  • While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect),

  • and underwent a name change to 3GIO (for 3rd Generation I/O) before finally settling on its

  • PCI-SIG name, PCI Express. A technical working group

  • named the Arapaho Work Group drew up the standard. For initial drafts, the AWG consisted only

  • of Intel engineers; subsequently the AWG expanded to include industry partners.

  • PCI Express is a technology under constant development and improvement. As of 2013, the

  • PCI Express specification was at revision 3.x, with revision 4.0 in development.

  • PCI Express 1.0a

  • In 2003, PCI-SIG introduced PCIe 1.0a, with

  • a per-lane data rate of 250 MB/s and a transfer rate of 2.5 gigatransfers per second. Transfer

  • rate is expressed in transfers per second instead of bits per second because the number

  • of transfers includes the overhead bits, which do not provide additional throughput.

  • PCIe 1.x uses an 8b/10b encoding scheme that results in a 20 percent (2/10) overhead on the

  • raw bit rate. It uses a 2.5 GHz clock rate, therefore delivering an effective maximum data

  • rate of 250 MB/s (250,000,000 bytes per second) per lane.
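The per-lane figure follows directly from the raw rate and the encoding efficiency. The sketch below also uses the PCIe 2.0 and 3.0 figures discussed later in this section:

```python
# Per-lane data rate = raw transfer rate x encoding efficiency / 8 bits per byte.
GENERATIONS = {
    "PCIe 1.x": (2.5e9, 8 / 10),     # 2.5 GT/s with 8b/10b encoding
    "PCIe 2.0": (5.0e9, 8 / 10),     # 5 GT/s with 8b/10b encoding
    "PCIe 3.0": (8.0e9, 128 / 130),  # 8 GT/s with 128b/130b encoding
}

for name, (raw, efficiency) in GENERATIONS.items():
    print(f"{name}: {raw * efficiency / 8 / 1e6:.0f} MB/s per lane")
# PCIe 1.x: 250, PCIe 2.0: 500, PCIe 3.0: ~985 MB/s per lane
```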

  • PCI Express 1.1

  • In 2005, PCI-SIG introduced PCIe 1.1. This updated specification includes clarifications

  • and several improvements, but is fully compatible with PCI Express 1.0a. No changes were made

  • to the data rate.

  • PCI Express 2.0

  • PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January

  • 2007. The PCIe 2.0 standard doubles the transfer rate compared with PCIe 1.0 to 5 GT/s and

  • the per-lane throughput rises from 250 MB/s to 500 MB/s. This means a 32-lane PCIe connector

  • can support throughput up to 16 GB/s aggregate. PCIe 2.0 motherboard slots are fully backward

  • compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally backward compatible

  • with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall, graphics

  • cards or motherboards designed for v2.0 will work when the other component is v1.1 or v1.0a.

  • The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer

  • protocol and its software architecture. Intel's first PCIe 2.0 capable chipset was

  • the X38 and boards began to ship from various vendors as of October 21, 2007. AMD started

  • supporting PCIe 2.0 with its AMD 700 chipset series and Nvidia started with the MCP72.

  • All of Intel's prior chipsets, including the Intel P35 chipset, supported PCIe 1.1 or 1.0a.

  • Like 1.x, PCIe 2.0 uses an 8b/10b encoding scheme, therefore delivering, per lane, an

  • effective 4 Gbit/s maximum transfer rate from its 5 GT/s raw data rate.

  • PCI Express 2.1

  • PCI Express 2.1 supports a large proportion

  • of the management, support, and troubleshooting systems planned for full implementation in

  • PCI Express 3.0. However, the speed is the same as PCI Express 2.0. Unfortunately, the

  • increase in power from the slot breaks backward compatibility between PCI Express 2.1 cards

  • and some older motherboards with 1.0/1.0a slots, but for most motherboards with PCI Express 1.1

  • connectors, manufacturers provide BIOS updates to restore backward compatibility

  • with PCIe 2.1 cards.

  • PCI Express 3.x

  • PCI Express 3.0 Base specification revision

  • 3.0 was made available in November 2010, after multiple delays. In August 2007, PCI-SIG announced

  • that PCI Express 3.0 would carry a bit rate of 8 gigatransfers per second, and that it

  • would be backward compatible with existing PCI Express implementations. At that time,

  • it was also announced that the final specification for PCI Express 3.0 would be delayed until

  • 2011. New features for the PCI Express 3.0 specification include a number of optimizations

  • for enhanced signaling and data integrity, including transmitter and receiver equalization,

  • PLL improvements, clock data recovery, and channel enhancements for currently supported

  • topologies. Following a six-month technical analysis of

  • the feasibility of scaling the PCI Express interconnect bandwidth, PCI-SIG's analysis

  • found that 8 gigatransfers per second can be manufactured in mainstream silicon process

  • technology, and can be deployed with existing low-cost materials and infrastructure, while

  • maintaining full compatibility with the PCI Express protocol stack.

  • PCI Express 3.0 upgrades the encoding scheme to 128b/130b from the previous 8b/10b encoding,

  • reducing the overhead to approximately 1.54% (2/130), as opposed to the 20% overhead of PCI Express

  • 2.0. This is achieved by a technique called "scrambling" that applies a known binary polynomial

  • to a data stream in a feedback topology. Because the scrambling polynomial is known, the data

  • can be recovered by running it through a feedback topology using the inverse polynomial. PCI

  • Express 3.0's 8 GT/s bit rate effectively delivers 985 MB/s per lane, practically doubling

  • the lane bandwidth relative to PCI Express 2.0.
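The following toy scrambler illustrates the idea described above; the LFSR width, taps, and framing here are illustrative, not the polynomial the PCIe specification actually mandates. XORing data with a known pseudo-random sequence breaks up repeating patterns, and applying the same sequence again recovers the original:

```python
def lfsr_stream(seed, taps, nbytes):
    # Generate pseudo-random bytes from a 16-bit Fibonacci LFSR.
    state, out = seed, bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            bit = 0
            for t in taps:
                bit ^= (state >> t) & 1
            state = ((state << 1) | bit) & 0xFFFF
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def scramble(data, seed=0xACE1, taps=(15, 13, 12, 10)):
    # XOR with the keystream; running it twice restores the input.
    keystream = lfsr_stream(seed, taps, len(data))
    return bytes(d ^ k for d, k in zip(data, keystream))

payload = bytes(16)                    # a worst-case run of zeros
scrambled = scramble(payload)          # no long repeating pattern on the wire
assert scramble(scrambled) == payload  # the inverse is the same operation
```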

  • On November 18, 2010, the PCI Special Interest Group officially published the finalized PCI

  • Express 3.0 specification to its members to build devices based on this new version of

  • PCI Express. PCI Express 3.1 specification is scheduled

  • to be released in late 2013 or early 2014, consolidating various improvements to the

  • published PCI Express 3.0 specification in three areas – power management, performance

  • and functionality.

  • PCI Express 4.0

  • On November 29, 2011, PCI-SIG announced PCI Express 4.0 featuring 16 GT/s, still based

  • on copper technology. Additionally, active and idle power optimizations are to be investigated.

  • Final specifications are expected to be released in 2014 or 2015.

  • Extensions and future directions

  • Some vendors offer PCIe over fiber products,

  • but these generally find use only in specific cases where transparent PCIe bridging is preferable

  • to using a more mainstream standard that may require additional software to support it;

  • current implementations focus on distance rather than raw bandwidth and typically do

  • not implement a full ×16 link. Thunderbolt was developed by Intel as a general-purpose

  • high speed interface combining a ×4 PCIe link with DisplayPort and was originally intended

  • to be an all-fiber interface, but due to early difficulties in creating a consumer-friendly

  • fiber interconnect, most early implementations are hybrid copper-fiber systems. A notable

  • exception, the Sony VAIO Z VPC-Z2, uses a nonstandard USB port with an optical component

  • to connect to an outboard PCIe display adapter. Apple has been the primary driver of Thunderbolt

  • adoption through 2011, though several other vendors have announced new products and systems

  • featuring Thunderbolt.

  • The Mobile PCIe specification allows PCI Express

  • architecture to operate over the MIPI Alliance's M-PHY physical layer technology. Building

  • on top of already existing widespread adoption of M-PHY and its low-power design, Mobile

  • PCIe allows PCI Express to be used in tablets and smartphones.

  • A proposed extension called OCuLink, as a competitor to Thunderbolt, was reported in

  • the press in September 2013. It is "the cable version of PCI Express", but despite what

  • its name might suggest it is intended to be copper-based, and up to four lanes wide. Its

  • target launch date is mid-2014.

  • Hardware protocol summary

  • The PCIe link is built around dedicated pairs of unidirectional, serial, point-to-point connections

  • known as lanes. This is in sharp contrast to the earlier PCI connection, which is a

  • bus-based system where all the devices share the same bidirectional, 32-bit or 64-bit parallel

  • bus. PCI Express is a layered protocol, consisting

  • of a transaction layer, a data link layer, and a physical layer. The Data Link Layer

  • is subdivided to include a media access control sublayer. The Physical Layer is subdivided

  • into logical and electrical sublayers. The logical sublayer of the Physical Layer contains a physical

  • coding sublayer (PCS). The terms are borrowed from the IEEE 802 networking protocol model.

  • Physical layer

  • The PCIe Physical Layer specification is divided

  • into two sub-layers, corresponding to electrical and logical specifications. The logical sublayer

  • is sometimes further divided into a MAC sublayer and a PCS, although this division is not formally

  • part of the PCIe specification. A specification published by Intel, the PHY Interface for

  • PCI Express, defines the MAC/PCS functional partitioning and the interface between these

  • two sub-layers. The PIPE specification also identifies the physical media attachment layer,

  • which includes the serializer/deserializer and other analog circuitry; however, since

  • SerDes implementations vary greatly among ASIC vendors, PIPE does not specify an interface

  • between the PCS and PMA. At the electrical level, each lane consists

  • of two unidirectional LVDS or PCML pairs at 2.5 or 5.0 Gbit/s. Transmit and receive are separate

  • differential pairs, for a total of four data wires per lane.

  • A connection between any two PCIe devices is known as a link, and is built up from a

  • collection of one or more lanes. All devices must minimally support a single-lane (×1) link. Devices

  • may optionally support wider links composed of 2, 4, 8, 12, 16, or 32 lanes. This allows

  • for very good compatibility in two ways: A PCIe card physically fits in any slot that

  • is at least as large as it is; A slot of a large physical size can be wired

  • electrically with fewer lanes as long as it provides the ground connections required by

  • the larger physical slot size. In both cases, PCIe negotiates the highest

  • mutually supported number of lanes. Many graphics cards, motherboards and BIOS versions are

  • verified to support ×1, ×4, ×8 and ×16 connectivity on the same connection.

  • Even though the two would be signal-compatible, it is not usually possible to place a physically

  • larger PCIe card into a smaller slot – though if the PCIe slots are altered or a

  • riser is used most motherboards will allow this. Typically, this technique is used for

  • connecting multiple monitors to a single computer. The width of a PCIe connector is 8.8 mm,

  • while the height is 11.25 mm, and the length is variable. The fixed section of the connector

  • is 11.65 mm in length and contains two rows of 11 pins each, while the length of the other section

  • is variable depending on the number of lanes. The pins are spaced at 1 mm intervals, and

  • the thickness of the card going into the connector is 1.8 mm.

  • Data transmission

  • PCIe sends all control messages, including

  • interrupts, over the same links used for data. The serial protocol can never be blocked,

  • so latency is still comparable to conventional PCI, which has dedicated interrupt lines.

  • Data transmitted on multiple-lane links is interleaved, meaning that each successive

  • byte is sent down successive lanes. The PCIe specification refers to this interleaving

  • as data striping. While requiring significant hardware complexity to synchronize the incoming

  • striped data, striping can significantly reduce the latency of the nth byte on a link. Due

  • to padding requirements, striping may not necessarily reduce the latency of small data

  • packets on a link.
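A small sketch of the striping rule described above: byte i of a packet travels on lane i mod N, and the receiver re-interleaves the per-lane streams.

```python
def stripe(packet, num_lanes):
    # Deal successive bytes round-robin across the lanes of the link.
    lanes = [bytearray() for _ in range(num_lanes)]
    for i, b in enumerate(packet):
        lanes[i % num_lanes].append(b)
    return [bytes(lane) for lane in lanes]

def unstripe(lanes):
    # Receiver side: byte i came from lane i % N at position i // N.
    n = len(lanes)
    total = sum(len(lane) for lane in lanes)
    return bytes(lanes[i % n][i // n] for i in range(total))

data = bytes(range(12))
assert unstripe(stripe(data, 4)) == data  # round-trips on a x4 link
```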

  • As with other high data rate serial transmission protocols, the clock is embedded in the signal. At the physical level, PCI Express 2.0 utilizes

  • the 8b/10b encoding scheme to ensure that strings of consecutive ones or consecutive

  • zeros are limited in length. This coding is used to prevent the receiver from losing track

  • of where the bit edges are. In this coding scheme every eight payload bits of data are

  • replaced with 10 bits of transmit data, causing a 20% overhead in the electrical bandwidth.

  • To improve the available bandwidth, PCI Express version 3.0 employs 128b/130b encoding instead:

  • similar but with much lower overhead. Many other protocols use a different form

  • of encoding known as scrambling to embed clock information into data streams. The PCIe specification

  • also defines a scrambling algorithm, but it is used to reduce electromagnetic interference

  • by preventing repeating data patterns in the transmitted data stream.

  • Data link layer

  • The Data Link Layer performs three vital services

  • for the PCIe link:

  • sequence the transaction layer packets (TLPs) that are generated by the transaction layer,

  • ensure reliable delivery of TLPs between two endpoints via an acknowledgement protocol that explicitly requires replay of unacknowledged/bad TLPs,

  • initialize and manage flow control credits

  • On the transmit side, the data link layer generates an incrementing sequence number

  • for each outgoing TLP. It serves as a unique identification tag for each transmitted TLP,

  • and is inserted into the header of the outgoing TLP. A 32-bit cyclic redundancy check code,

  • known as the Link CRC (LCRC), is also appended to the end of each outgoing TLP.

  • On the receive side, the received TLP's LCRC and sequence number are both validated in

  • the link layer. If either the LCRC check fails, or the sequence-number is out of range, then

  • the bad TLP, as well as any TLPs received after the bad TLP, are considered invalid

  • and discarded. The receiver sends a negative acknowledgement message with the sequence-number

  • of the invalid TLP, requesting re-transmission of all TLPs forward of that sequence-number.

  • If the received TLP passes the LCRC check and has the correct sequence number, it is

  • treated as valid. The link receiver increments the sequence-number, and forwards the valid

  • TLP to the receiver's transaction layer. An ACK message is sent to the remote transmitter,

  • indicating the TLP was successfully received. If the transmitter receives a NAK message,

  • or no acknowledgement is received before a timeout period expires, the transmitter must

  • retransmit all TLPs that lack a positive acknowledgement. Barring a persistent malfunction of the device

  • or transmission medium, the link-layer presents a reliable connection to the transaction layer,

  • since the transmission protocol ensures delivery of TLPs over an unreliable medium.
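A simplified model of this ACK/NAK replay mechanism. The 12-bit sequence number matches the description above, but zlib's CRC-32 here merely stands in for the real LCRC, and DLLP framing and replay timers are omitted:

```python
import zlib

def lcrc(seq, tlp):
    # Stand-in for the real LCRC: CRC-32 over sequence number + TLP bytes.
    return zlib.crc32(seq.to_bytes(2, "big") + tlp)

class Transmitter:
    def __init__(self):
        self.next_seq = 0
        self.replay_buffer = {}            # seq -> TLP awaiting acknowledgement

    def send(self, tlp):
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % 4096   # 12-bit sequence number
        self.replay_buffer[seq] = tlp
        return seq, tlp, lcrc(seq, tlp)

    def on_ack(self, seq):
        self.replay_buffer.pop(seq, None)  # retire the acknowledged TLP

    def on_nak(self):
        # Replay every TLP still lacking a positive acknowledgement, in order.
        return [self.replay_buffer[s] for s in sorted(self.replay_buffer)]

class Receiver:
    def __init__(self):
        self.expected_seq = 0

    def receive(self, seq, tlp, crc):
        if seq != self.expected_seq or crc != lcrc(seq, tlp):
            return "NAK"                   # bad or out-of-order: request replay
        self.expected_seq = (self.expected_seq + 1) % 4096
        return "ACK"                       # forward TLP to the transaction layer
```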

  • In addition to sending and receiving TLPs generated by the transaction layer, the data-link

  • layer also generates and consumes DLLPs, data link layer packets. ACK and NAK signals are

  • communicated via DLLPs, as are flow control credit information and some power management

  • messages. In practice, the number of in-flight, unacknowledged

  • TLPs on the link is limited by two factors: the size of the transmitter's replay buffer,

  • and the flow control credits issued by the receiver to a transmitter. PCI Express requires

  • all receivers to issue a minimum number of credits, to guarantee a link allows sending

  • PCI configuration TLPs and message TLPs.

  • Transaction layer

  • PCI Express implements split transactions, allowing the link to carry other traffic while

  • the target device gathers data for the response. PCI Express uses credit-based flow control.

  • In this scheme, a device advertises an initial amount of credit for each received buffer

  • in its transaction layer. The device at the opposite end of the link, when sending transactions

  • to this device, counts the number of credits each TLP consumes from its account. The sending

  • device may only transmit a TLP when doing so does not make its consumed credit count

  • exceed its credit limit. When the receiving device finishes processing the TLP from its

  • buffer, it signals a return of credits to the sending device, which increases the credit

  • limit by the restored amount. The credit counters are modular counters, and the comparison of

  • consumed credits to credit limit requires modular arithmetic. The advantage of this

  • scheme is that the latency of credit return does not affect performance, provided that

  • the credit limit is not encountered. This assumption is generally met if each device

  • is designed with adequate buffer sizes.
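A minimal model of this credit accounting, using the modular comparison the text mentions (credit units and counter width here are illustrative):

```python
MOD = 1 << 8   # counters wrap; the comparison is done in modular arithmetic

class CreditGate:
    def __init__(self, initial_credits):
        self.credit_limit = initial_credits % MOD  # advertised by the receiver
        self.consumed = 0                          # tracked by the sender

    def can_send(self, cost):
        # Sending must not push consumed credits past the advertised limit.
        return (self.credit_limit - self.consumed - cost) % MOD < MOD // 2

    def send(self, cost):
        if not self.can_send(cost):
            raise RuntimeError("insufficient credits; wait for a credit return")
        self.consumed = (self.consumed + cost) % MOD

    def credits_returned(self, amount):
        # Receiver freed buffer space, raising the limit by that amount.
        self.credit_limit = (self.credit_limit + amount) % MOD

gate = CreditGate(initial_credits=8)
gate.send(6)
print(gate.can_send(4))   # False: only 2 credits remain
gate.credits_returned(4)  # receiver drained some TLPs from its buffer
print(gate.can_send(4))   # True: credit-return latency, not a hard stall
```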

  • PCIe 1.x is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical

  • signaling rate (2.5 gigabaud) divided by the encoding overhead (10 bits per byte). This means a sixteen-lane PCIe card would

  • then be theoretically capable of 16×250 MB/s = 4 GB/s in each direction. While this is

  • correct in terms of data bytes, more meaningful calculations are based on the usable data

  • payload rate, which depends on the profile of the traffic, which is a function of the

  • high-level application and intermediate protocol levels.

  • Like other high data rate serial interconnect systems, PCIe has a protocol and processing

  • overhead due to the additional transfer robustness. Long continuous unidirectional transfers can

  • approach >95% of PCIe's raw data rate. These transfers also benefit the most from an increased

  • number of lanes. But in more typical applications, the traffic profile is characterized as short

  • data packets with frequent enforced acknowledgements. This type of traffic reduces the efficiency

  • of the link, due to overhead from packet parsing and forced interrupts. Being a protocol for

  • devices connected to the same printed circuit board, it does not require the same tolerance

  • for transmission errors as a protocol for communication over longer distances, and thus,

  • this loss of efficiency is not particular to PCIe.

  • Applications

  • PCI Express operates in consumer, server, and industrial applications, as a motherboard-level

  • interconnect, a passive backplane interconnect and as an expansion card interface for add-in

  • boards. In virtually all modern PCs, from consumer

  • laptops and desktops to enterprise data servers, the PCIe bus serves as the primary motherboard-level

  • interconnect, connecting the host system processor with both integrated peripherals and add-on

  • peripherals. In most of these systems, the PCIe bus co-exists with one or more legacy

  • PCI buses, for backward compatibility with the large body of legacy PCI peripherals.

  • As of 2013 PCI Express has replaced AGP as the default interface for graphics cards on

  • new systems. Almost all models of graphics cards released since 2010 by AMD and Nvidia

  • use PCI Express. Nvidia uses the high-bandwidth data transfer of PCIe for its Scalable Link

  • Interface technology, which allows multiple graphics cards of the same chipset and model

  • number to run in tandem for increased performance. AMD has also developed a multi-GPU

  • system based on PCIe called CrossFire. AMD and Nvidia have released motherboard chipsets

  • that support as many as four PCIe ×16 slots, allowing tri-GPU and quad-GPU card configurations.

  • External GPUs

  • Theoretically, external PCIe could give a

  • notebook the graphics power of a desktop, by connecting a notebook with any PCIe desktop

  • video card; this is possible with an ExpressCard interface or a Thunderbolt interface. The ExpressCard

  • interface provides bit rates of 5 Gbit/s, whereas the Thunderbolt interface provides

  • bit rates of up to 10 Gbit/s. There are now card hubs that can connect to

  • a laptop through an ExpressCard slot, though they are currently rare, obscure, or unavailable

  • on the open market. These hubs can accept full-sized cards. Examples include MSI GUS,

  • Village Instrument's ViDock, the Asus XG Station, Bplus PE4H V3.2 adapter, as well as more improvised

  • DIY devices. In 2008, AMD announced the ATI XGP technology,

  • based on a proprietary cabling system that is compatible with PCIe ×8 signal transmissions.

  • This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks.

  • Fujitsu launched their AMILO GraphicBooster enclosure for XGP soon thereafter. Around

  • 2010 Acer launched the Dynavivid graphics dock for XGP.

  • Thunderbolt has given opportunity to new and faster products to connect with a PCIe card

  • externally. Magma has released the ExpressBox 3T, which can hold up to three PCIe cards.

  • MSI also released the Thunderbolt GUS II, a PCIe chassis dedicated for video cards.

  • Other products such as Sonnet’s Echo Express and mLogic’s mLink are Thunderbolt

  • PCIe chassis in a smaller form factor. However, all these products require the use of a Thunderbolt

  • port, making them incompatible with the vast majority of computers.

  • For the professional market, Nvidia has developed the Quadro Plex external PCIe family of GPUs

  • that can be used for advanced graphic applications. These video cards require a PCI Express ×8

  • or ×16 slot for the host-side card which connects to the Plex via a VHDCI carrying

  • 8 PCIe lanes.

  • Storage devices

  • The PCI Express protocol can be used as the data interface to flash memory devices, such as memory cards

  • and solid-state drives. XQD card is a memory card format utilizing

  • PCI Express, developed by the CompactFlash Association, with transfer rates of up to

  • 500 MB/s. Many high-performance, enterprise-class SSDs

  • are designed as PCI Express RAID controller cards with flash memory chips placed directly

  • on the circuit board, utilizing proprietary interfaces and custom drivers to communicate

  • with the operating system; this allows much higher transfer rates and IOPS when compared

  • to Serial ATA or SAS drives. For example, in 2011 OCZ and Marvell co-developed a native

  • PCI Express solid-state drive controller for a PCI Express 3.0 ×16 slot with maximum capacity

  • of 12 TB and a performance of up to 7.2 GB/s in sequential transfers and up to 2.52 million

  • IOPS in random transfers. SATA Express is an interface for connecting

  • SSDs, by providing multiple PCI Express lanes as a pure PCI Express connection to the attached

  • storage device. M.2 is a specification for internally mounted computer expansion cards

  • and associated connectors, which also uses multiple PCI Express lanes.

  • PCI Express storage devices can implement both the AHCI logical interface, for backward compatibility,

  • and the NVM Express logical interface, which provides much faster I/O operations by utilizing the

  • internal parallelism offered by such devices. Enterprise-class SSDs can also implement SCSI

  • over PCI Express.

  • Cluster interconnect

  • Certain data-center applications require the use of fiber-optic interconnects due to the

  • distance limitations inherent in copper cabling. Typically, a network-oriented standard such

  • as Ethernet or Fibre Channel suffices for these applications, but in some cases the

  • overhead introduced by routable protocols is undesirable and a lower-level interconnect,

  • such as InfiniBand, RapidIO, or NUMAlink is needed. Local-bus standards such as PCIe and

  • HyperTransport can in principle be used for this purpose, but as of 2012 no major vendors

  • offer systems in this vein.

  • Competing protocols

  • Several communications standards have emerged based on high bandwidth serial architectures.

  • These include InfiniBand, RapidIO, HyperTransport, QPI, StarFabric, and MIPI LLI. The differences

  • are based on the tradeoffs between flexibility and extensibility vs latency and overhead.

  • An example of such a tradeoff is adding complex header information to a transmitted packet

  • to allow for complex routing. The additional overhead reduces the effective bandwidth of

  • the interface and complicates bus discovery and initialization software. Also, making the

  • system hot-pluggable requires that software track network topology changes. Examples of

  • buses suited for this purpose are InfiniBand and StarFabric.

  • Another example is making the packets shorter to decrease latency. Smaller packets mean

  • packet headers consume a higher percentage of the packet, thus decreasing the effective

  • bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport.

  • PCI Express falls somewhere in the middle, targeted by design as a system interconnect

  • rather than a device interconnect or routed network protocol. Additionally, its design

  • goal of software transparency constrains the protocol and raises its latency somewhat.

  • Development tools

  • When developing or troubleshooting the PCI

  • Express bus, examination of hardware signals can be very important for finding problems.

  • Oscilloscopes, logic analyzers and bus analyzers are tools that collect, analyze, decode, and store

  • signals so people can view the high-speed waveforms at their leisure.

