Tuesday, November 4, 2008

8 Hints for Debugging and Validating High-Speed Buses

Who Should Read This Application Note?

Digital R&D engineers designing boards and subsystems with increasingly faster digital signals.

Hint 1: Coordinate Your Tools Effectively

Beyond the high-speed data transfer rates, system performance is improved by distributing computing intelligence into the I/O system. This complex architecture can be a challenge to debug, but it can be accomplished with a system-level approach and the proper tools.

Three debugging concepts to follow are:
  1. Obtain broad visibility into all parts of your system.
  2. Cross-correlate activities in different parts of your system.
  3. Use a variety of stimuli, both artificially generated and from a real-world environment, to test your system.
As you prepare for system-level validation, you must select the appropriate tools for your test
bench. Three basic types of test tools are most commonly needed for effective debugging and
validation of InfiniBand systems at the higher protocol layers:
  1. a protocol analyzer,
  2. a logic analyzer, and
  3. a traffic generator.
The picture below shows a typical system with the appropriate test equipment connected to the various links.





A protocol analyzer is optimized for protocol measurements, and focuses on providing a comprehensive view of information and data transfer on the link. A logic analyzer is most useful for looking at levels of the protocol up to the transport layer, and is optimized for providing cross-bus or multi-bus correlation of information. Traffic generators create InfiniBand traffic for system validation, providing a controlled rather than a real-world stimulus.

It is important to understand the strengths of your tools, to know which one to select for a
particular task, and to be able to use them efficiently. If you can coordinate your tools effectively,
tracing your really tough system debugging problems to their root cause can be fairly straightforward.

Hint 2: Use Frequency-Domain Instruments for Physical-Layer Testing of High-Speed Components

Traditional two-port VNA systems are designed for characterizing unbalanced (single-ended)
devices, and provide the test data in a format that may be of limited use in signal integrity
applications. New four-port (two-channel) Agilent physical layer test systems have been
developed specifically for differential devices, bringing the precision and accuracy of the VNA to the world of high-speed, balanced devices. These high-dynamic-range test systems let you see elusive EMI problems that previously might have gone undetected.

With a single set of connections, you can measure the single-ended, differential-mode, common-mode, and mode-conversion behavior of your DUT. When characterizing devices such as a pair of printed-circuit traces, you can analyze each trace by itself to measure total delay and skew, or analyze them together as a balanced pair.

The format of the data can be changed depending on what is most meaningful for a given type
of device, or for the type of information that is needed. These formats include time domain
(TDR/TDT), frequency domain (S-matrix), eye diagram, and RLCG extraction (transmission
line parameters).


Several additional capabilities are increasingly important as device speeds increase. These include de-embedding the effects of test fixtures and probes, simulating the effects of compensation networks on a DUT, translating the performance to an alternate reference impedance, and examining the effects of phase skew. These types of analysis are most conveniently done in the frequency domain, so working with frequency parameters is more natural. Frequency-domain measurements provide the type of information needed to perform physical-layer characterization of high-speed differential devices, and this methodology is becoming increasingly necessary. The VNA-based system’s combined characteristics of accuracy, dynamic range, comprehensiveness, and flexibility make it ideal for such applications.
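The single-ended-to-mixed-mode conversion behind a four-port physical layer test system is a simple linear transform. The sketch below shows the standard conversion, assuming single-ended ports 1 and 3 form differential port 1 and ports 2 and 4 form differential port 2 (port mappings vary between instruments, so treat this pairing as an assumption):

```python
import numpy as np

# Mixed-mode transform, assuming single-ended ports 1 and 3 form
# differential port 1 and ports 2 and 4 form differential port 2.
# Mode ordering of the result: [d1, d2, c1, c2].
T = (1 / np.sqrt(2)) * np.array([
    [1, 0, -1, 0],   # d1 = (a1 - a3) / sqrt(2)
    [0, 1, 0, -1],   # d2 = (a2 - a4) / sqrt(2)
    [1, 0, 1, 0],    # c1 = (a1 + a3) / sqrt(2)
    [0, 1, 0, 1],    # c2 = (a2 + a4) / sqrt(2)
])

def single_ended_to_mixed_mode(S):
    """Convert a 4x4 single-ended S-matrix to mixed-mode form."""
    return T @ S @ T.T   # T is orthogonal, so T.T is its inverse

# Example: two identical, uncoupled through lines -- an ideal balanced pair.
s = 0.9 * np.exp(-0.5j)              # arbitrary through response
S = np.zeros((4, 4), dtype=complex)
S[1, 0] = S[0, 1] = s                # line A: SE port 1 <-> 2
S[3, 2] = S[2, 3] = s                # line B: SE port 3 <-> 4
M = single_ended_to_mixed_mode(S)

print(round(abs(M[1, 0]), 6))   # Sdd21, differential through response: 0.9
print(round(abs(M[3, 0]), 6))   # Scd21, mode conversion: 0.0 for a balanced pair
```

A perfectly balanced pair shows no mode conversion; any asymmetry between the two lines shows up directly in the Scd/Sdc terms, which is why a single four-port connection can expose imbalance that single-ended measurements miss.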

Hint 3: Understand Sources of Error in High-Speed Systems

Higher clock speeds are the obvious technology trend, but the many related changes have an equal or greater impact on designs. Faster clock speeds require smaller voltage swings and shorter setup and hold times. Data-valid windows become orders of magnitude smaller. Jitter caused by noise, crosstalk, and intersymbol interference further reduces their size, creating errors. Because the noise margins are so small, noise and timing budgets can no longer
tolerate phenomena that were previously ignored. The picture below shows an eye diagram of a single data channel. The first data bit has a data-valid window with an amplitude of 700 mV and a
data-valid time of 1.5 ns. But a few bits later, a small amount of jitter has reduced the data-valid
window to 1 ns.
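The arithmetic behind the closing window is simple: peak-to-peak jitter subtracts directly from the nominal data-valid time. A small sketch using the figures quoted above:

```python
def data_valid_window(nominal_window_s, total_jitter_pp_s):
    """Data-valid window remaining after subtracting peak-to-peak jitter."""
    return nominal_window_s - total_jitter_pp_s

# Figures from the eye diagram described above: a 1.5 ns window with
# 0.5 ns of accumulated jitter (noise, crosstalk, ISI) closes to 1 ns.
window = data_valid_window(1.5e-9, 0.5e-9)
print(f"{window * 1e9:.1f} ns")
```

At multi-gigabit rates the nominal window is already a fraction of a nanosecond, so even tens of picoseconds of jitter consume a significant share of the budget.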

Today’s high-speed bus designer needs to carefully consider proper termination of signals and correct impedance matching. Higher frequencies result in shorter wavelengths, so board layout becomes critical. Traces need to be treated as transmission lines and even package leads need to be considered as potential antennas. Designers have to watch for breaks in the ground plane, match impedances to reduce reflections, and worry about trace separation, trace length, and even discontinuities in the FR4 material.
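The penalty for an impedance mismatch can be quantified with the voltage reflection coefficient; a minimal sketch (50-ohm system assumed for the example values):

```python
def reflection_coefficient(z_load, z0=50.0):
    """Voltage reflection coefficient at a termination on a line of impedance z0."""
    return (z_load - z0) / (z_load + z0)

# A matched 50-ohm termination reflects nothing; a 75-ohm load on a
# 50-ohm trace reflects 20% of the incident voltage back toward the driver.
print(reflection_coefficient(50.0))  # 0.0
print(reflection_coefficient(75.0))  # 0.2
```

Each reflection bounces between discontinuities and lands on top of later data bits, which is one of the mechanisms that narrows the data-valid window described above.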


Buses are no longer clocked with one or two signals; source-synchronous bus designs can have a dozen or even more clocks or strobes moving the data. DDR (double data rate) is a source-synchronous bus. Instead of one clock, it has 19—one for the control signals and 18 strobes (or
clocks) for the data groups. Each data group has its own strobe that is most likely skewed from
the other data groups.

So an increase in bus speeds requires many changes to design methods. These same changes
affect the way buses are tested and measured. One serious issue affecting testing is probing.
Whenever a measurement is made, the connection of the test equipment affects the measurement to some extent. The effect of probing on a circuit becomes more pronounced as the frequency increases. Probing is no longer just clipping a probe to a test point; the probing system must be specifically designed for the bus under test.

Hint 4: Characterize Your Differential-Impedance Circuit Board

Differential-impedance circuit boards are becoming more common as low-voltage differential signaling (LVDS) devices proliferate. Yet there is much confusion in the industry about what differential impedance means, how to design for it, and how to leverage its benefits for noise rejection. Knowing the general features of differential-impedance transmission lines and how they can be characterized with traditional time domain reflectometry (TDR) instruments
can help in the design and testing of high-speed digital systems.

Dual-channel TDR can be used to analyze features and real-world effects of high-speed differential transmission lines under a variety of conditions such as a gap in the return path or skew between the two channels. It can be hooked up in a variety of ways to maximize the amount of information obtained. One channel of the TDR plug-in can be used to perform conventional TDR analysis. Using a second TDR channel allows analysis of the properties of differential transmission line pairs.

With time domain transmission (TDT), the first channel generates the exciting source into one end of a transmission line and the second TDR channel is the receiver at the other end. In this
way, the TDR and TDT response of the device under test (DUT) can be measured simultaneously. The TDR response gives information about the impedance of the DUT, and the TDT gives information about the signal propagation time, signal quality, and rise-time degradation. In this mode, the TDT is emulating what a receiver might see at the far end.
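The TDR impedance profile comes from the same reflection relationship, inverted: the instrument measures the reflected fraction of its incident step and solves for the local impedance. A sketch of that calculation, with illustrative (not measured) voltages:

```python
def tdr_impedance(v_measured, v_incident, z0=50.0):
    """Local impedance from a TDR step measurement.

    rho is the reflection coefficient implied by the measured voltage;
    the impedance follows from Z = Z0 * (1 + rho) / (1 - rho).
    """
    rho = (v_measured - v_incident) / v_incident
    return z0 * (1 + rho) / (1 - rho)

# Illustrative numbers: a 200 mV incident step that reads 250 mV at some
# point along the trace implies rho = 0.25, i.e. a discontinuity of ~83 ohms.
print(round(tdr_impedance(0.250, 0.200), 1))  # 83.3
```

Plotting this conversion over the full record turns the raw reflected waveform into an impedance-versus-distance profile of the DUT.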


A frequent problem with differential drivers for differential-pair lines is skew between the two channels. This arises due to driver mismatch, different rise and fall times, different interconnect delays due to routing differences, or different loads on the two lines of the differential pair. Any signal imbalance at the receivers will create a common signal.

A variable skew can be introduced between the two driven TDR step generators, emulating what would happen if there were a skew in the drivers. In the example shown in the picture below, the common signal increases steadily as the skew increases from 0 to 100 ps, comparable to the rise time. If the skew is longer than 100 ps, the common signal at the receiver is essentially constant. This suggests that to minimize the common signal, the skew should be kept to a small fraction of the rise time.
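This grow-then-saturate behavior can be reproduced with a simple model: two complementary linear edges, one delayed by the skew, summed into a common-mode signal. The sketch below assumes idealized ramp edges and a 100 ps rise time, matching the example figures above:

```python
import numpy as np

def peak_common_mode(skew_s, rise_time_s=100e-12, swing_v=1.0, n=20000):
    """Peak common-mode deviation when the falling leg of a differential
    pair switches skew_s after the rising leg (linear edges assumed)."""
    t = np.linspace(-0.1e-9, skew_s + rise_time_s + 0.1e-9, n)

    def ramp(t0):
        return np.clip((t - t0) / rise_time_s, 0.0, 1.0)

    v_pos = swing_v * ramp(0.0)                 # rising edge at t = 0
    v_neg = swing_v * (1.0 - ramp(skew_s))      # falling edge, delayed by skew
    common = (v_pos + v_neg) / 2.0              # ideally flat at swing/2
    return float(np.max(np.abs(common - swing_v / 2.0)))

# The common-mode glitch grows with skew until the skew reaches the rise
# time (100 ps here), then saturates -- the behavior described above.
for skew in (0.0, 25e-12, 50e-12, 100e-12, 200e-12):
    print(f"{skew * 1e12:5.0f} ps -> {peak_common_mode(skew):.3f} V")
```

In this model the peak common-mode deviation is min(skew, rise time) / (2 × rise time) of the swing, which is why keeping skew to a small fraction of the rise time keeps the common signal small.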


Dual-channel TDR and a dual-channel amplifier module can provide full characterization of differential pairs of transmission lines, enabling you to better determine how your designs are working at the electrical level and suggesting ideas for how to improve them.

Hint 5: Validate Complex Systems with Test Cards

Validation of computer systems and subsystems is becoming increasingly complex as I/O
systems and peripherals become more intelligent. The role of data transfer initiator is being
delegated to the I/O systems instead of the CPU, causing data traffic to move in several
directions simultaneously and freeing the CPU for data processing tasks. Increasing bandwidth needs require performance optimization in the subsystems, further increasing the complexity of the whole system.

Validation of these complex systems is becoming a very difficult task. It is necessary to test systems and subsystems under real-life conditions while ensuring that corner-cases are covered, to confront the system under test (SUT) with all possible combinations of traffic, and to generate peak load conditions. Furthermore, tests must be reproducible to enable debugging.

How can you ensure that your products are well tested, corner-cases are covered, and the time spent is within reasonable limits? The solution is a combination of test cards and software running on the SUT. A test card is a device used specifically for testing. It is designed for a specific slot-based or cable-based I/O system (PCI, PCI-X, or others) and operates like any other device designed for that system. It can allocate system resources, generate any type of traffic that is allowed, and also react to a transaction initiated somewhere else. The test card has an external interface, so it can be controlled by an external host, making it independent of the type of system and the operating system used. When used with an "internal" connection (directly from the SUT), tests using the test card can be coordinated with other test tools or programs.


It is extremely useful if the test card has analyzing as well as exercising capabilities. These
include the ability to monitor the transaction protocol, gather performance metrics, and provide
traces of the ongoing transactions.

A test card placed at a central position within the system also can serve as a window into the
system. It can read or write registers in system memory or the system’s I/O space. It can
read or write the configuration data of other devices including bridge devices, and can dump the
contents of memory areas. This is possible even when the system itself is no longer operating.
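Conceptually, the "window into the system" amounts to a small control surface exposed by the card. The sketch below mocks such an interface in software; the class and method names are purely illustrative, not any vendor's real API, and the dictionaries stand in for bus transactions a real card would perform:

```python
# Hypothetical sketch of a test-card control interface. The names are
# illustrative only; a real card is driven through its vendor's API over
# the external control connection.
class TestCard:
    """Mock of a test card used as a window into a system under test."""

    def __init__(self):
        self._memory = {}         # stands in for system memory
        self._config_space = {}   # stands in for other devices' config registers

    def write_mem(self, addr, value):
        self._memory[addr] = value

    def read_mem(self, addr):
        return self._memory.get(addr, 0)

    def read_config(self, device, register):
        # A real card can master this transaction itself, which is why it
        # still works when the SUT's CPU is no longer operating.
        return self._config_space.get((device, register), 0xFFFFFFFF)

    def dump_region(self, base, length):
        """Dump a memory area word by word."""
        return [self.read_mem(base + offset) for offset in range(length)]

card = TestCard()
card.write_mem(0x1000, 0xDEADBEEF)
print(hex(card.read_mem(0x1000)))  # 0xdeadbeef
```

The essential point the mock captures is that every operation is initiated by the card, not by software on the SUT, so inspection remains possible after the system itself has hung.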

Some system validation procedures cannot be accomplished by using test cards alone. In these cases, you can use a combined approach, or “validation framework”. The validation framework handles all aspects of test-card testing— setup, running, and analysis— along with tests written
specifically for devices that require CPU interaction. All testing is handled by a single piece of software, which has an application programming interface (API) that allows you to add new tests and to configure existing ones for different needs, as shown in the picture below.


Validating complex server and workstation systems has become as complex as the systems
themselves. Using a combination of test cards and specific tests combined within a validation
framework is an approach that can significantly reduce testing time and improve testing reproducibility.

Hint 6: Integrate Your Tools

USB 2.0’s high-speed data rate permits bandwidth-hungry peripherals to use a USB system, but it does create debugging problems. Signal integrity becomes critical at its data rate (480 Mb/s) and RF and even microwave design techniques must be used. Yet the digital world (data domain) also creates issues that affect the designer: interaction of devices, multi-speed traffic, and large amounts of data. Because the analog and digital behaviors are interrelated, designers of USB 2.0 devices need to take a holistic approach to debugging, validating, and characterizing their designs.

Many types of tools are needed to make accurate analog oscilloscope and current measurements and to capture and analyze data. The complexities of oscilloscopes, MATLAB® software, logic analyzers, and breakout boards turn debugging, performance characterization, and specification compliance testing into daunting tasks. Fortunately, tools are available that simplify the process by taking advantage of the integration of PCs and test equipment. For example, when MATLAB software is integrated into an oscilloscope, it eliminates the need to make a measurement and then transfer the data to a PC for voltage or current analysis, as shown in the picture below. Today’s logic analyzers and oscilloscopes are integrated so that cross-triggering
and even sharing of data on a single display simplify the debug process.


Another key test and debug technique is cross-bus analysis. It may be desirable to look at multiple USB hubs concurrently, to study a hub and the PCI bus coming out of a USB/PCI adapter card, or to observe the interaction between the USB, PCI, and PC memory system. Logic analyzers are capable of looking at multiple buses simultaneously, having one bus trigger other buses, and having all of the time-correlated data presented on a single display.

Hint 7: Save Time Making Signal Integrity Measurements with Eye Scan

Oscilloscopes have traditionally been used to characterize and validate the signal integrity of
prototype circuits. However, making multiple eye-diagram measurements with an oscilloscope can be a slow process because of the time required to connect and move probes; measuring the hundreds of nodes on a complex system requires moving the probes hundreds of times. Eye scan is a logic-analyzer measurement tool that can reduce the time required to verify signal integrity in complex high-speed designs. With eye scan, the connection is instantaneous once a connector has been designed into the board, so you can measure the signal integrity behavior of tens or hundreds of signal nodes much more quickly than with an oscilloscope. This lets you acquire comprehensive signal integrity information on all buses in a design, under a wide variety of operating conditions, in a reasonable amount of time. When running eye scan, the logic analyzer scans all incoming signals for activity in a time range centered on the clock and over the entire voltage range of the signal. The results are displayed in a graph similar to an oscilloscope eye diagram, as shown in the picture below. Display colors correspond to the amount of signal activity detected.


Eye scan examines regions of time and voltage for signal transitions. The time regions are defined relative to active clock transitions in the user’s system. The scan proceeds first along the
time axis. When the specified range of time has been scanned, the voltage threshold is incremented and the time range is scanned again at a new threshold. This is repeated until
all time and voltage regions have been scanned. The user can adjust the scan range and resolution in both time and voltage.
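The scan described above is, in essence, a two-dimensional sweep that builds an activity histogram. The sketch below simulates it on synthetic NRZ data; the waveform model, bin counts, and edge shape are all illustrative assumptions, not how any particular logic analyzer is implemented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated signal: random NRZ bits with linear edges 20% of a unit
# interval (UI) long, folded one UI at a time like an eye diagram.
UI_SAMPLES, RISE_FRACTION, V_BINS = 50, 0.2, 20
bits = rng.integers(0, 2, 400).astype(float)
t_ui = np.linspace(0.0, 1.0, UI_SAMPLES, endpoint=False)
traces = np.array([
    prev + (cur - prev) * np.clip(t_ui / RISE_FRACTION, 0.0, 1.0)
    for prev, cur in zip(bits[:-1], bits[1:])
])

# Eye-scan sweep: scan the whole time range at one threshold, increment
# the threshold, and repeat until every time/voltage region is covered.
activity = np.zeros((UI_SAMPLES, V_BINS), dtype=int)
for j in range(V_BINS):                      # step the voltage threshold
    lo, hi = j / V_BINS, (j + 1) / V_BINS
    for i in range(UI_SAMPLES):              # scan along the time axis
        activity[i, j] = np.sum((traces[:, i] >= lo) & (traces[:, i] < hi))

# A clean signal leaves the center of the eye empty (the eye is open);
# activity concentrates at the rails and in the edge regions.
print(activity[UI_SAMPLES // 2, V_BINS // 2])  # 0 -> no samples mid-eye
```

Mapping counts to colors in such a 2D array is exactly what produces the eye-diagram-like display: empty cells mark the open eye, dense cells mark the rails and transition regions.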

Hint 8: Use LVDS for High-Speed Interconnects

Low-voltage differential signaling (LVDS) offers high-speed data transfer for innumerable types of interconnects. It uses high-speed analog circuit techniques to provide multi-gigabit data
transfers on copper cables or printed circuit board traces. LVDS, a generic interface standard, moves information on a board, between boards, modules, shelves, and racks, or box to box. The equivalent circuit structure of the LVDS physical layer is shown in the picture below.


Low-voltage signals have many advantages, including fast bit rates, lower power, and better
noise performance. Because its voltage change between logic states is only 300 mV, LVDS can
change states very fast. Low voltage swing reduces power consumption because it lowers the voltage across the termination resistors and lowers the overall power dissipation. To improve noise immunity, LVDS uses differential data transmission. Differential signals have the advantage of tolerating interference from outside sources such as inductive radiation from
electric motors or crosstalk from neighboring transmission lines. A differential receiver responds
only to the difference between the two inputs, so when noise appears commonly to both inputs,
the input differential signal amplitude is undisturbed. LVDS can also tolerate minor impedance mismatches in transmission paths. As long as the differential signal passes through balanced discontinuities in closely-coupled transmission paths, the signal can maintain integrity. The effect of non-impedance-controlled connectors, printed circuit board vias, and chip packaging is not as
detrimental to differential signals as it is to single-ended signals. The final LVDS system benefit is its integration capability.

LVDS is now spawning follow-on technologies that expand its applications. Bus LVDS (BLVDS) allows the low-voltage differential signals to work in bidirectional and multidrop configurations. Another LVDS derivative, ground-referenced LVDS (GLVDS), moves the differential signal’s common-mode voltage close to ground, enabling chips operating from very low supply voltages to communicate over a high-speed standard interface.

When using LVDS for high-speed interconnects, you must verify the integrity of the complete
signal path, which can be a difficult and time-consuming process for a complex device. A parallel bit-error-ratio tester can speed up this process, allowing you to quickly verify the signal integrity of the physical layer.
